Search Results

Search found 14624 results on 585 pages for 'static'.


  • How to check if a cdrom is in the tray remotely (via ssh)?

    - by adempewolff
    I have a server running Ubuntu 10.04 (it's on the other side of the world and I haven't built up the wherewithal to upgrade it remotely yet) and I have been told that there is a CD in one of its two CD drives. I want to rip an image of the CD and then download it to my local computer (I don't need help with either of these steps). However, I cannot seem to confirm whether or not there actually is a CD in the drive as I was told. It did not automatically mount anywhere (which I'm thinking might just be a result of it being a headless server not running X, Nautilus, or any of the other nice user-friendly things).

    There are two CD drives connected via SCSI:

    ```
    austin@austinvpn:/proc/scsi$ cat /proc/scsi/scsi
    Attached devices:
    Host: scsi0 Channel: 00 Id: 00 Lun: 00
      Vendor: ATA      Model: WDC WD400EB-75CP  Rev: 06.0
      Type:   Direct-Access                     ANSI SCSI revision: 05
    Host: scsi1 Channel: 00 Id: 00 Lun: 00
      Vendor: Lite-On  Model: LTN486S 48x Max   Rev: YDS6
      Type:   CD-ROM                            ANSI SCSI revision: 05
    Host: scsi1 Channel: 00 Id: 01 Lun: 00
      Vendor: SAMSUNG  Model: CD-R/RW SW-248F   Rev: R602
      Type:   CD-ROM                            ANSI SCSI revision: 05
    ```

    However, when I try mounting either of these devices (and every other device that could possibly be the CD drive), it says no medium found:

    ```
    austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd1 /cdrom
    mount: no medium found on /dev/sr1
    austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/scd0 /cdrom
    mount: no medium found on /dev/sr0
    austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom /cdrom
    mount: no medium found on /dev/sr1
    austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrom1 /cdrom
    mount: no medium found on /dev/sr0
    austin@austinvpn:/proc/scsi$ sudo mount -t iso9660 /dev/cdrw /cdrom
    mount: no medium found on /dev/sr1
    ```

    Here are the contents of my /dev folder:

    ```
    austin@austinvpn:/proc/scsi$ ls /dev
    agpgart          loop6                ram6      tty10  tty38  tty8
    austinvpn        loop7                ram7      tty11  tty39  tty9
    block            lp0                  ram8      tty12  tty4   ttyS0
    bsg              mapper               ram9      tty13  tty40  ttyS1
    btrfs-control    mcelog               random    tty14  tty41  ttyS2
    bus              mem                  rfkill    tty15  tty42  ttyS3
    cdrom            net                  root      tty16  tty43  urandom
    cdrom1           network_latency      rtc       tty17  tty44  usbmon0
    cdrw             network_throughput   rtc0      tty18  tty45  usbmon1
    char             null                 scd0      tty19  tty46  usbmon2
    console          oldmem               scd1      tty2   tty47  usbmon3
    core             parport0             sda       tty20  tty48  usbmon4
    cpu_dma_latency  pktcdvd              sda1      tty21  tty49  vcs
    disk             port                 sda2      tty22  tty5   vcs1
    dri              ppp                  sda5      tty23  tty50  vcs2
    ecryptfs         psaux                sg0       tty24  tty51  vcs3
    fb0              ptmx                 sg1       tty25  tty52  vcs4
    fd               pts                  sg2       tty26  tty53  vcs5
    full             ram0                 shm       tty27  tty54  vcs6
    fuse             ram1                 snapshot  tty28  tty55  vcs7
    hpet             ram10                snd       tty29  tty56  vcsa
    input            ram11                sndstat   tty3   tty57  vcsa1
    kmsg             ram12                sr0       tty30  tty58  vcsa2
    log              ram13                sr1       tty31  tty59  vcsa3
    loop0            ram14                stderr    tty32  tty6   vcsa4
    loop1            ram15                stdin     tty33  tty60  vcsa5
    loop2            ram2                 stdout    tty34  tty61  vcsa6
    loop3            ram3                 tty       tty35  tty62  vcsa7
    loop4            ram4                 tty0      tty36  tty63  vga_arbiter
    loop5            ram5                 tty1      tty37  tty7   zero
    ```

    And here is my fstab file:

    ```
    austin@austinvpn:/proc/scsi$ cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid -o value -s UUID' to print the universally unique identifier
    # for a device; this may be used with UUID= as a more robust way to name
    # devices that works even if disks are added and removed. See fstab(5).
    #
    # <file system>  <mount point>  <type>  <options>            <dump>  <pass>
    proc             /proc          proc    nodev,noexec,nosuid  0       0
    /dev/mapper/austinvpn-root  /   ext4    errors=remount-ro    0       1
    # /boot was on /dev/sda1 during installation
    UUID=ed5520ae-c690-4ce6-881e-3598f299be06  /boot  ext2  defaults  0  2
    /dev/mapper/austinvpn-swap_1  none  swap  sw  0  0
    ```

    Am I missing something or doing something wrong, or is there just no CD in the drive, or is the drive possibly broken? Is there any nice command to list devices with mountable media? Thanks in advance for any help!

    Read the article

  • Thread safe double buffering

    - by kdavis8
    I am trying to implement a draw map method that will draw the tiled image across the surface of the component. I'm having an issue with this code: the double buffering does not seem to be working, because the sprite flickers like crazy. My source code:

    ```java
    package myPackage;

    import java.awt.Color;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.awt.Toolkit;
    import java.awt.image.BufferStrategy;
    import java.awt.image.BufferedImage;
    import javax.swing.JFrame;

    public class GameView extends JFrame implements Runnable {

        public BufferedImage backbuffer;
        public Graphics2D g2d;
        public Image img;
        Thread gameloop;
        Scene scene;

        public GameView() {
            super("Game View");
            setSize(600, 600);
            setVisible(true);
            setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

            backbuffer = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB);
            g2d = backbuffer.createGraphics();

            Toolkit tk = Toolkit.getDefaultToolkit();
            img = tk.getImage(this.getClass().getResource("cage.png"));

            scene = new Scene(g2d, this);

            gameloop = new Thread(this);
            gameloop.start();
        }

        public static void main(String args[]) {
            new GameView();
        }

        public void paint(Graphics g) {
            g.drawImage(backbuffer, 0, 0, this);
            repaint();
        }

        @Override
        public void run() {
            // TODO Auto-generated method stub
            Thread t = Thread.currentThread();
            while (t == gameloop) {
                scene.getScene("dirtmap");
                g2d.drawImage(img, 80, 80, this);
            }
        }

        private void drawScene(String string) {
            // TODO Auto-generated method stub
            // g2d.setColor(Color.white);
            // g2d.fillRect(0, 0, getWidth(), getHeight());
            scene.getScene(string);
        }
    }
    ```

    ```java
    package myPackage;

    import java.awt.Color;
    import java.awt.Component;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.Image;
    import java.awt.Toolkit;

    public class Scene {

        Graphics g2d;
        Component c;
        boolean loaded = false;

        public Scene(Graphics2D gr, Component co) {
            g2d = gr;
            c = co;
        }

        public void getScene(String mapName) {
            Toolkit tk = Toolkit.getDefaultToolkit();
            Image tile = tk.getImage(this.getClass().getResource("dirt.png"));
            // g2d.setColor(Color.red);
            for (int y = 0; y <= 18; y++) {
                for (int x = 0; x <= 18; x += 1) {
                    g2d.drawImage(tile, x * 32, y * 32, c);
                }
            }
            loaded = true;
        }
    }
    ```

    Read the article

  • Do we still have a case against the goto statement? [closed]

    - by FredOverflow
    Possible Duplicate: Is it ever worthwhile using goto?

    In a recent article, Andrew Koenig writes:

    When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments.

    He then shows two program fragments, one without a goto and one with a goto:

    ```cpp
    if (n < 0) n = 0;
    ```

    Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative. Suppose we rewrite this fragment:

    ```cpp
    if (n >= 0) goto nonneg;
    n = 0;
    nonneg: ;
    ```

    In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: it has opened the possibility of transferring control to nonneg from anywhere else in the program.

    I emphasized the part that I don't agree with. Modern languages like C++ do not allow goto to transfer control arbitrarily. Here are two examples: you cannot jump to a label that is defined in a different function, and you cannot jump over a variable initialization.

    Now consider composing your code of tiny functions that adhere to the single responsibility principle:

    ```cpp
    int clamp_to_zero(int n)
    {
        if (n >= 0) goto n_is_not_negative;
        n = 0;
    n_is_not_negative:
        return n;
    }
    ```

    The classic argument against the goto statement is that control could have transferred from anywhere inside your program to the label n_is_not_negative, but this simply is not (and was never) true in C++. If you try it, you will get a compiler error, because labels are scoped. The rest of the program doesn't even see the name n_is_not_negative, so it's just not possible to jump there. This is a static guarantee!

    Now, I'm not saying that this version is better than the one without the goto, but to make the latter as expressive as the first one, we would at least have to insert a comment, or better yet, an assertion:

    ```cpp
    int clamp_to_zero(int n)
    {
        if (n < 0) n = 0;
        // n is not negative at this point
        assert(n >= 0);
        return n;
    }
    ```

    Note that you basically get the assertion for free in the goto version, because the condition n >= 0 is already written in line 1, and n = 0; satisfies the condition trivially. But that's just a random observation.

    It seems to me that "don't use gotos!" is one of those dogmas like "don't use multiple returns!" that stem from a time when the real problem was functions of hundreds or even thousands of lines of code. So, do we still have a case against the goto statement, other than that it is not particularly useful? I haven't written a goto in at least a decade, but it's not like I was running away in terror whenever I encountered one.¹ Ideally, I would like to see a strong and valid argument against gotos that still holds when you adhere to established programming principles for clean code like the SRP. "You can jump anywhere" is not (and has never been) a valid argument in C++, and somehow I don't like teaching stuff that is not true.

    ¹ Also, I have never been able to resurrect even a single velociraptor, no matter how many gotos I tried :(

    Read the article

  • A tiny Utility to recycle an IIS Application Pool

    - by Rick Strahl
    In the last few weeks I've annoyingly been having problems with an area on my Web site. It's basically ancient articles that are using ASP classic pages, and for reasons unknown ASP classic locks up on these pages frequently. It's not an individual page, but ALL ASP classic pages lock up. Ah yes, gotta love old tech gone bad. It's not super critical since the content is really old, but still a hassle since it's linked content that still gets quite a bit of traffic. When it happens, all ASP classic in that AppPool dies. I've been having a hard time tracking this one down - I suspect an errant COM object.

    I have a Web Monitor running on the server that's checking for failures, and while the monitor can detect the failures when the timeouts occur, I didn't have a good way to just restart that particular application pool. I started putzing around with PowerShell, but - as so often seems the case - I can never get the PowerShell syntax right - I just don't use it enough and have to dig out cheat sheets etc. In any case, after about 20 minutes of that I decided to just create a small .NET Console Application that does the trick instead, and in a few minutes I had this:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Text;
    using System.DirectoryServices;

    namespace RecycleApplicationPool
    {
        class Program
        {
            static void Main(string[] args)
            {
                string appPoolName = "DefaultAppPool";
                string machineName = "LOCALHOST";

                if (args.Length > 0)
                    appPoolName = args[0];
                if (args.Length > 1)
                    machineName = args[1];

                string error = null;
                DirectoryEntry root = null;
                try
                {
                    Console.WriteLine("Restarting Application Pool " + appPoolName + " on " + machineName + "...");
                    root = new DirectoryEntry("IIS://" + machineName + "/W3SVC/AppPools/" + appPoolName);
                    Console.WriteLine(root.InvokeGet("Name"));
                    root.Invoke("Recycle");
                    Console.WriteLine("Application Pool recycling complete...");
                }
                catch (Exception ex)
                {
                    error = "Error: Unable to access AppPool: " + ex.Message;
                }

                if (!string.IsNullOrEmpty(error))
                {
                    Console.WriteLine(error);
                    return;
                }
            }
        }
    }
    ```

    To run it, you basically provide the name of the ApplicationPool and optionally a machine name if it's not on the local box:

    ```
    RecycleApplicationPool.exe "WestWindArticles"
    ```

    And off it goes. What's nice about AppPool recycling versus doing a full IISRESET is that it only affects the AppPool, and more importantly AppPool recycles happen in a staggered fashion - the existing instance isn't shut down immediately until requests finish, while a new instance is fired up to handle new requests.

    So now I can easily plug this executable into my West Wind Web Monitor as an action to take when the site is not responding or timing out, which is a big improvement over hanging for an unspecified amount of time. I'm posting this fairly trivial bit of code just in case somebody (maybe myself a few months down the road) is searching for ApplicationPool recycling code. It's clearly trivial, but I've written batch files for this a bunch of times before, and having a small utility around without having to worry whether PowerShell is installed and configured right is actually an improvement. Next time I think about using PowerShell, remind me that it's just easier to build a small .NET Console app, 'k? :-)

    Resources: Download Executable and VS Project

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, .NET, Windows.
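    A side note not from the original post: on IIS 7 and later, the managed Microsoft.Web.Administration API offers an alternative to the DirectoryServices/ADSI route, provided the tool runs elevated and references Microsoft.Web.Administration.dll. A minimal sketch for the local machine; the default pool name is just a placeholder:

    ```csharp
    using System;
    using Microsoft.Web.Administration;

    class RecycleViaServerManager
    {
        static void Main(string[] args)
        {
            // Pool name is a placeholder; pass your own, e.g. "WestWindArticles".
            string appPoolName = args.Length > 0 ? args[0] : "DefaultAppPool";

            using (var serverManager = new ServerManager())
            {
                ApplicationPool pool = serverManager.ApplicationPools[appPoolName];
                if (pool == null)
                {
                    Console.WriteLine("AppPool not found: " + appPoolName);
                    return;
                }

                // Recycle() triggers the same staggered recycle IIS performs itself.
                pool.Recycle();
                Console.WriteLine("Recycled " + appPoolName);
            }
        }
    }
    ```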

    Read the article

  • LexisNexis and Oracle Join Forces to Prevent Fraud and Identity Abuse

    - by Tanu Sood
    Author: Mark Karlstrand

    About the writer: Mark Karlstrand is a Senior Product Manager at Oracle focused on innovative security for enterprise web and mobile applications. Over sixteen years Mark served as a director at a number of tech startups before joining Oracle in 2007. Working with a team of talented architects and engineers, Mark developed Oracle Adaptive Access Manager, a best-of-breed access security solution.

    The world's top enterprise software company and the world leader in data-driven solutions have teamed up to provide a new integrated security solution to prevent fraud and misuse of identities. LexisNexis Risk Solutions, a Gold level member of Oracle PartnerNetwork (OPN), today announced it has achieved Oracle Validated Integration of its Instant Authenticate product with Oracle Identity Management.

    Oracle provides the most complete Identity and Access Management platform. As the only identity management provider to offer advanced capabilities including device fingerprinting, location intelligence, real-time risk analysis, and context-aware authentication and authorization, the Oracle offering is unique in the industry. LexisNexis Risk Solutions provides the industry-leading Instant Authenticate dynamic knowledge based authentication (KBA) service, which offers customers a secure and cost-effective means to authenticate new users or prove identity for password resets, lockouts and similar scenarios. Oracle and LexisNexis now offer an integrated solution that combines the power of the most advanced identity management platform and superior data-driven user authentication to stop identity fraud in its tracks and, in turn, offer significant operational cost savings. The solution offers the ability to challenge users with dynamic knowledge based authentication based on the risk of an access request or transaction, thereby offering an additional level on top of other authentication methods such as static challenge questions or one-time passwords when needed.

    For example, with Oracle Identity Management self-service, the forgotten password reset workflow utilizes advanced capabilities including device fingerprinting, location intelligence, risk analysis and one-time password (OTP) via short message service (SMS) to secure this sensitive flow. Even when a user has lost or misplaced his/her mobile phone and, therefore, cannot receive the SMS, the new integrated solution eliminates the need to contact the help desk. The Oracle Identity Management platform dynamically switches to use the LexisNexis Instant Authenticate service for authentication if the user is not able to authenticate via OTP. The advanced Oracle and LexisNexis integrated solution thus both improves user experience and saves money by avoiding unnecessary help desk calls.

    Oracle Identity and Access Management secures applications, Juniper SSL VPN and other web resources with a thoroughly modern, layered and context-aware platform. Users don't gain access just because they happen to have a valid username and password. An enterprise utilizing the Oracle solution has the ability to predicate access on the specific context of the current situation. The device, location, temporal data, and any number of other attributes are evaluated in real time to determine the specific risk at that moment. If the risk is elevated, a user can be challenged for additional authentication, refused access, or allowed access with limited privileges.

    The LexisNexis Instant Authenticate dynamic KBA service plugs into the Oracle platform to provide an additional layer of security by validating a user's identity in high-risk access or transactions. The large and varied pool of data the LexisNexis solution utilizes to quiz a user makes this challenge mechanism even more robust. This strong combination of Oracle and LexisNexis user authentication capabilities greatly mitigates the risk of exposing sensitive applications and services on the Internet, which helps an enterprise grow its business with confidence.

    Resources:
    Press release: LexisNexis® Achieves Oracle Validated Integration with Oracle Identity Management
    Oracle Access Management (HTML)
    Oracle Adaptive Access Manager (PDF)

    Read the article

  • Inspiring a co-worker to adopt better coding practices?

    - by Aaronaught
    In the Handling my antiquated coworker question, various people discussed strategies for dealing with coworkers who are unwilling to integrate their workflow with the team's. I'd like, if possible, to learn some strategies for "teaching" a coworker who is merely ignorant of modern techniques and tools, and possibly a little apathetic.

    I've started working with a programmer who until recently has been working in relative isolation, in a different part of the company. He has extensive domain knowledge and most importantly he has demonstrated good problem-solving skills, something which many candidates seem to lack. However, the actual (C#) code I've seen is a throwback to the VB6 days: procedural structure, Hungarian notation, global variables (abuse of static), no interfaces, no tests, non-use of generics, throwing System.Exception... you get the idea.

    This programmer is a fair bit older than I am and, by first impressions at least, doesn't actively seek positive change. I'm not going to say resistant to change, because I think that is largely an issue of how the topic gets broached, and I want to be prepared. Programmers tend to be stubborn people, and going in with guns blazing and instituting rip-it-to-shreds code reviews and strictly-enforced policies is very likely not going to produce the end result that I want. If this were a new hire, a junior programmer, I wouldn't think twice about taking a "mentor" stance, but I'm extremely wary of treating an experienced employee as a clueless newbie (which he's not - he just hasn't kept pace with certain advancements in the field).

    How might I go about raising this developer's code quality standard the Dale Carnegie way, through gentle persuasion and non-material incentives? What would be the best strategy for effecting subtle, gradual changes, without creating an adversarial situation? Have other people - especially lead developers - been in this type of situation before? Which strategies were successful at stimulating interest and creating a positive group dynamic? Which strategies weren't successful and would be better to avoid?

    Clarifications: I really feel that several people are answering based on personal feelings without actually reading all of the details of the question. Please note the following, which should have been implied but I am now making explicit:

    - This coworker is only my "senior" by virtue of age. I never said that his title, sphere of influence, or years at the organization exceed mine, and in fact, none of those things are true. He's a LOB programmer who's been absorbed into the main development shop. That's it.
    - I am not a new hire, junior programmer, or other naïve idiot with grand plans to transform the company overnight. I am basically in charge of the software process, but as many who've worked as "leads" will know, responsibilities don't always correlate precisely with the org chart.
    - I'm not asking people how to get my way, come hell or high water. I could do that if I wanted to, with the net result being that this person would become resentful and/or quit. Please try to understand that I am looking for a social, cooperative method of driving change.
    - The mention of "...global variables... no tests... throwing System.Exception" was intended to demonstrate that the problems are not just superficial or aesthetic. Practices that may work for relatively small CRUD apps do not necessarily work for large enterprise apps, and in fact, none of the code so far has actually passed the integration tests.

    Please, try to take the question at face value, accept that I actually know what I'm talking about, and either answer the question that I actually asked or move on.

    P.S. My sincerest gratitude to those who did offer constructive advice rather than arguing with the premise. I'm going to leave this open for a while longer as I'm hoping to hear more in the way of real-world experiences.

    Read the article

  • Slick2D, Nifty GUI listeners problem

    - by Patokun
    I'm trying to get Nifty GUI to work with Slick2D. So far everything is going great, except that I can't seem to figure out how to properly interact with the GUI. I'm trying the example in the Nifty manual (http://sourceforge.n....0.pdf/download) but it doesn't seem to entirely work. The element controller is being called for bind(...), init(...) and onStartScreen() as it should, as I can see their println output, but the next() method isn't being called when I click on the GUI element that I assigned the controller to, nor is the one on the screen controller, as no output from println is shown. What's weird is that the player is moving, so the mouse input is working. next() is supposed to be called when I click the mouse button on the element, as wired up in the XML.

    Here is my code.

    My element controller:

    ```java
    public class ElementController implements Controller {

        private Element element;

        @Override
        public void bind(Nifty nifty, Screen screen, Element element, Properties parameter,
                         Attributes controlDefinitionAttributes) {
            this.element = element;
            System.out.println("bind() called for element: " + element);
        }

        @Override
        public void init(Properties parameter, Attributes controlDefinitionAttributes) {
            System.out.println("init() called for element: " + element);
        }

        @Override
        public void onStartScreen() {
            System.out.println("onStartScreen() called for element: " + element);
        }

        @Override
        public void onFocus(boolean getFocus) {
            System.out.println("onFocus() called for element: " + element + ", with: " + getFocus);
        }

        @Override
        public boolean inputEvent(NiftyInputEvent inputEvent) {
            return false;
        }

        public void next() {
            System.out.println("next() clicked for element: " + element);
        }
    }
    ```

    My screen controller:

    ```java
    class MyScreenController implements ScreenController {

        public void bind(Nifty nifty, Screen screen) {}

        public void onEndScreen() {}

        public void onStartScreen() {}

        public void next() {
            System.out.println("next() called from MyScreenController");
        }
    }
    ```

    And my XML file:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <nifty xmlns="http://nifty-gui.sourceforge.net/nifty-1.3.xsd"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://niftygui.sourceforge.net/nifty-1.3.xsd http://nifty-gui.sourceforge.net/nifty-1.3.xsd">
      <screen id="start" controller="predaN00b.theThing.V0004.MyScreenController">
        <layer childLayout="center" controller="predaN00b.theThing.V0004.ElementController">
          <panel width="100px" height="100px" childLayout="vertical" backgroundColor="#ff0f">
            <text font="aurulent-sans-16.fnt" color="#ffff" text="Hello World!">
              <interact onClick="next()" />
            </text>
          </panel>
        </layer>
      </screen>
    </nifty>
    ```

    My main class, in case it's needed:

    ```java
    public class MainGame extends BasicGame {

        public Nifty nifty;

        public MainGame() {
            super("Test");
        }

        public void init(GameContainer container, StateBasedGame game) throws SlickException {
            nifty = new Nifty(new SlickRenderDevice(container), new NullSoundDevice(),
                              new PlainSlickInputSystem(), new AccurateTimeProvider());
            nifty.addXml("/xml/MainState.xml");
            nifty.gotoScreen("start");
        }

        public void update(GameContainer container, StateBasedGame game, int delta) throws SlickException {
            nifty.update();
        }

        public void render(GameContainer container, StateBasedGame game, Graphics graphics) throws SlickException {
            nifty.render(false);
        }

        public static void main(String[] args) throws SlickException {
            AppGameContainer app = new AppGameContainer(new MainGame());
            app.setAlwaysRender(true);
            app.setDisplayMode(1260, 720, false); // window size
            app.start();
        }
    }
    ```

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general.

    Signature types

    There are several types of signatures in an assembly, all of which share a common base representation, and all are stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are:

    - Method definition and method reference signatures.
    - Field signatures.
    - Property signatures.
    - Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers.
    - Generic type specifications. These represent a particular instantiation of a generic type.
    - Generic method specifications. Similarly, these represent a particular instantiation of a generic method.

    All these signatures share the same underlying mechanism to represent a type.

    Representing a type

    All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, Uint16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information.

    Firstly, custom types (i.e. not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well; for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicate custom modifiers.

    Some examples...

    To demonstrate, here are a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets.

    A simple field:

    ```
    int intField;

    FIELD I4
    ```

    A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type):

    ```
    T[] genArrayField

    FIELD SZARRAY VAR 0
    ```

    An instance method signature (note how the number of parameters does not include the return type):

    ```
    instance string MyMethod(MyType, int&, bool[][]);

    HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN
    ```

    A generic type instantiation:

    ```
    MyGenericType<MyType, MyStruct>

    GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct}
    ```

    For a more complicated example, in the following C# type declaration:

    ```
    GenericType<T> : GenericBaseType<object[], T, GenericType<T>>
    { ... }
    ```

    the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob:

    ```
    GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0
    ```

    And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR):

    ```
    TResult[] GenericMethod<TInput, TResult>(TInput, System.Converter<TInput, TResult>);

    GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1
    ```

    As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now that we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.
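    To make the "compressed format" mentioned above concrete: ECMA-335 stores coded tokens and counts inside signature blobs as compressed unsigned integers that occupy 1, 2 or 4 bytes depending on the leading bits. The following minimal C# decoder is an illustrative sketch, not code from the article; the BlobReader class name and sample bytes are invented for demonstration.

    ```csharp
    using System;

    static class BlobReader
    {
        // Decodes one compressed unsigned integer from a signature blob,
        // following ECMA-335 II.23.2, and advances the offset past it.
        public static uint ReadCompressedUInt(byte[] blob, ref int offset)
        {
            byte first = blob[offset];

            if ((first & 0x80) == 0)            // 0xxxxxxx: 1-byte encoding
            {
                offset += 1;
                return first;
            }
            if ((first & 0xC0) == 0x80)         // 10xxxxxx: 2-byte encoding
            {
                uint value = (uint)(((first & 0x3F) << 8) | blob[offset + 1]);
                offset += 2;
                return value;
            }
            // 110xxxxx: 4-byte encoding
            uint wide = (uint)(((first & 0x1F) << 24)
                             | (blob[offset + 1] << 16)
                             | (blob[offset + 2] << 8)
                             |  blob[offset + 3]);
            offset += 4;
            return wide;
        }

        static void Main()
        {
            int pos = 0;
            byte[] sample = { 0x03, 0x80, 0x80 };                  // hypothetical blob fragment
            Console.WriteLine(ReadCompressedUInt(sample, ref pos)); // 3 (e.g. a parameter count)
            Console.WriteLine(ReadCompressedUInt(sample, ref pos)); // 128
        }
    }
    ```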

    Read the article

  • Using the latest (stable release) of Oracle Developer Tools for Visual Studio 11.1.0.7.20.

    - by mbcrump
    Simple and safe data connections.

    This guide is for someone wanting to use the latest ODP.NET quickly, without reading the official documentation. It will get you up and running in about 15 minutes. I reviewed the referral links to my simple Setting up ODP.net with Win7 x64 post and noticed most people were searching for one of the following terms:

    - "how to use odp.net with vs"
    - "setup connection odp.net"
    - "query db using odp and vs"

    While my article provided links and a sample tnsnames.ora file, it really didn't tell you how to use it. I'm hoping that this brief tutorial will help.

    Before we get started, you will need the following:

    - Download www.oracle.com/technology/software/tech/dotnet/utilsoft.html from Oracle and install it. It is the first one on the page.
    - Visual Studio 2008 or 2010.

    It should be noted that the System.Data.OracleClient namespace is the OLD .NET Framework Data Provider for Oracle. It should not be used anymore, as it has been deprecated. The latest version, which is what we are using, is Oracle.DataAccess.Client.

    First things first: add a reference to Oracle.DataAccess.Client after you install ODP.NET.

    Copy and paste the following C# code into your project, replace the relevant info including the query string, and you should be able to return data. I have commented several lines of code to assist in understanding what it is doing.

    ```csharp
    using System;
    using System.Data;
    using Oracle.DataAccess.Client;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    // Setup DataSource
                    string oradb = "Data Source=(DESCRIPTION ="
                                 + "(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = hostname)(PORT = 1521)))"
                                 + "(CONNECT_DATA = (SERVICE_NAME = SERVICENAME)));"
                                 + "Persist Security Info=True;User ID=USER;Password=PASSWORD;";

                    // Open connection to Oracle - this could be moved outside the try.
                    OracleConnection conn = new OracleConnection(oradb);
                    conn.Open();

                    // Create cmd and use parameters to prevent SQL injection attacks.
                    OracleCommand cmd = new OracleCommand();
                    cmd.Connection = conn;
                    cmd.CommandText = "select username from table where username = :username";

                    OracleParameter p1 = new OracleParameter("username", OracleDbType.Varchar2);
                    p1.Value = username;
                    cmd.Parameters.Add(p1);

                    cmd.CommandType = CommandType.Text;

                    OracleDataReader dr = cmd.ExecuteReader();
                    dr.Read();

                    // Contains the value of the data row.
                    Console.WriteLine(dr["username"].ToString());

                    // Disposes of the objects.
                    dr.Dispose();
                    cmd.Dispose();
                    conn.Dispose();
                }
                catch (OracleException ex) // Catches only Oracle errors
                {
                    switch (ex.Number)
                    {
                        case 1:
                            Console.WriteLine("Error attempting to insert duplicate data.");
                            break;
                        case 12545:
                            Console.WriteLine("The database is unavailable.");
                            break;
                        default:
                            Console.WriteLine(ex.Message.ToString());
                            break;
                    }
                }
                catch (Exception ex) // Catches any error not previously caught
                {
                    Console.WriteLine("Unidentified Error: " + ex.Message.ToString());
                }
            }
        }
    }
    ```

    At this point, you should have a working program that returns data from an Oracle database. If you are still having trouble, then drop me a line and I will be happy to assist.

    As of this writing, Oracle has announced the latest beta release of ODP.NET, 11.2.0.1.1 Beta. This release includes .NET Framework 4 and .NET Framework 4 Client Profile support. You may want to hold off on this version for a while, as it's beta, and I wouldn't want any production code using any beta software.
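    As a side note not part of the original walkthrough: wrapping the connection, command and reader in using blocks gets you the same cleanup even when an exception is thrown, instead of relying on explicit Dispose() calls. A minimal sketch; the TNS alias, table name and parameter value are placeholders:

    ```csharp
    using System;
    using Oracle.DataAccess.Client;

    class UsingBlockExample
    {
        static void Main()
        {
            // Placeholder connect string: a tnsnames.ora alias plus credentials.
            const string oradb = "Data Source=MYTNSALIAS;User Id=USER;Password=PASSWORD;";

            using (OracleConnection conn = new OracleConnection(oradb))
            using (OracleCommand cmd = new OracleCommand(
                "select username from app_users where username = :username", conn))
            {
                cmd.Parameters.Add(new OracleParameter("username", OracleDbType.Varchar2) { Value = "SCOTT" });
                conn.Open();

                using (OracleDataReader dr = cmd.ExecuteReader())
                {
                    while (dr.Read())
                        Console.WriteLine(dr["username"]);
                }
            } // conn and cmd are disposed here, even if an exception was thrown.
        }
    }
    ```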

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) I am co-authoring, I am digging deeper into what it takes to write a shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology.

    A shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services.

    Imagine you have three SQL Azure databases that have the same schema (DB1, DB2 and DB3), and you would like to issue a SELECT * FROM Users on all three databases, concatenate the results into a single resultset, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want to make sure all three databases are queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three database calls sequentially. Then, imagine you would like to obtain a breadcrumb (in the form of a new, virtual column) that gives you a hint as to which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:

    - Parallel processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
    - Deleting/updating data: That's not too bad either, as long as you have a breadcrumb. However it becomes more difficult if you need to update a single record and you don't know in which database it is.
    - Inserting data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with bulk loads just yet...
    - Shard databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the shard commands use this collection.
    - Extension methods: In order to make it look like the shard commands are part of the SqlClient class, I use extension methods. For example, I added ExecuteShardQuery and ExecuteShardNonQuery methods to SqlClient.
    - Exceptions: Capturing exceptions in multi-threaded code is interesting... but I kept it simple for now. I am using the ConcurrentQueue to store my exceptions.
    - Database GUID: Every database in the shard is given a GUID, which is calculated based on the connection string's values.
    - DataTable: The shard methods return a DataTable object which can be bound to objects.

    I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on Twitter to know when it will be available (@hroggero), or check www.bluesyntax.net for updates on the shard. Thanks!
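    To make the extension-method idea concrete, here is a minimal sketch of what an ExecuteShardQuery-style method might look like, under the assumption that the shard is simply a collection of SqlConnection objects. The names (ShardExtensions, __ShardSource) are illustrative only; the article's actual implementation is the one planned for release on CodePlex.

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    public static class ShardExtensions
    {
        // Runs the same query against every database in the shard in parallel
        // and merges the results into a single DataTable.
        public static DataTable ExecuteShardQuery(this IEnumerable<SqlConnection> shard, string sql)
        {
            var results = new ConcurrentQueue<DataTable>();
            var errors = new ConcurrentQueue<Exception>();

            Parallel.ForEach(shard, conn =>
            {
                try
                {
                    // Each task opens its own connection; SqlConnection is not thread-safe.
                    using (var local = new SqlConnection(conn.ConnectionString))
                    using (var cmd = new SqlCommand(sql, local))
                    using (var adapter = new SqlDataAdapter(cmd))
                    {
                        var table = new DataTable();
                        adapter.Fill(table);

                        // Breadcrumb: a virtual column recording which database the rows came from.
                        table.Columns.Add("__ShardSource", typeof(string));
                        foreach (DataRow row in table.Rows)
                            row["__ShardSource"] = local.Database;

                        results.Enqueue(table);
                    }
                }
                catch (Exception ex)
                {
                    errors.Enqueue(ex);
                }
            });

            if (!errors.IsEmpty)
                throw new AggregateException(errors);

            var merged = new DataTable();
            foreach (var table in results)
                merged.Merge(table);
            return merged;
        }
    }
    ```

    Ordering the merged rows (for example, by last name) can then be done on the client with a DataView or DataTable.Select over the merged table.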

    Read the article

  • Abstracting entity caching in XNA

    - by Grofit
    I am in a situation where I am writing a framework in XNA, and there will be quite a lot of static(ish) content which won't change that often. Now I am trying to take the same sort of approach I would use when doing non-game development, where I don't even think about caching until I have finished my application and realise there is a performance problem, and then implement a layer of caching over whatever needs it, but wrap it up so nothing is aware it's happening. However, in XNA the way we would usually cache is by drawing our objects to a texture and invalidating it after a change occurs.

    So assume an interface like so:

    ```csharp
    public interface IGameComponent
    {
        void Update(TimeSpan elapsedTime);
        void Render(GraphicsDevice graphicsDevice);
    }

    public class ContainerComponent : IGameComponent
    {
        public IList<IGameComponent> ChildComponents { get; private set; }

        // Assume constructor

        public void Update(TimeSpan elapsedTime)
        {
            // Update anything that needs it
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            foreach (var component in ChildComponents)
            {
                // draw every component
            }
        }
    }
    ```

    Then I was under the assumption that we just draw everything directly to the screen, and when performance becomes an issue we just add a new implementation of the above, like so:

    ```csharp
    public class CacheableContainerComponent : IGameComponent
    {
        private Texture2D cachedOutput;
        private bool hasChanged;

        public IList<IGameComponent> ChildComponents { get; private set; }

        // Assume constructor

        public void Update(TimeSpan elapsedTime)
        {
            // Update anything that needs it
            // set hasChanged to true if required
        }

        public void Render(GraphicsDevice graphicsDevice)
        {
            if (hasChanged)
            {
                CacheComponents(graphicsDevice);
            }
            // Draw cached output
        }

        private void CacheComponents(GraphicsDevice graphicsDevice)
        {
            // Clean up existing cache if needed
            var cachedOutput = new RenderTarget2D(...);
            graphicsDevice.SetRenderTarget(cachedOutput);
            foreach (var component in ChildComponents)
            {
                // draw every component
            }
            graphicsDevice.SetRenderTarget(null);
        }
    }
    ```

    Now in this example you could inherit, but your Update may then become a bit tricky without changing your base class to alert you when something has changed, so it is up to each scenario to choose between inheritance/implementation and composition. Also, the above implementation re-caches within the rendering cycle, which may cause performance stutters, but it's just an example of the scenario...

    Ignoring those facts, you can see that in this example you could use a cacheable component or a non-cacheable one, and the rest of the framework needs not know. The problem here is that if, let's say, this component is drawn midway through the game rendering, other items will already be within the default drawing buffer, so me doing this would discard them - unless I set the back buffer to be persisted, which I hear is a big no-no on the Xbox. So is there a way to have my cake and eat it here?

    One simple solution is to make an ICacheable interface which exposes a Cache method, but then to make any use of this interface you would need the rest of the framework to be cache-aware, to check if a component can cache, and to then do so. This means you are polluting and changing your main implementations to account for and deal with this cache... I am also employing dependency injection for a lot of high-level components, so these new cacheable objects would be spat out from that, meaning nowhere in the actual game would they know they are caching... if that makes sense.

    Just in case anyone asks how I expect to keep the rest of the framework unaware of the caching when I would need to new up a cacheable entity - that is what the dependency injection above is for.
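    Purely as a sketch and not from the original question: one way to have the cake and eat it is to refresh any dirty caches at the start of the frame's draw phase, before anything has been rendered to the back buffer, so switching render targets never discards other components' work. This assumes XNA 4.0-style APIs and a hypothetical ICachedComponent contract (HasChanged, RebuildCache, CachedTexture) that the cacheable container would implement:

    ```csharp
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    // Hypothetical contract exposed by cacheable components.
    public interface ICachedComponent
    {
        bool HasChanged { get; }
        Texture2D CachedTexture { get; }
        void RebuildCache(GraphicsDevice device); // internally: SetRenderTarget(target) ... SetRenderTarget(null)
    }

    public class CachedSceneRenderer
    {
        private readonly IList<ICachedComponent> components;

        public CachedSceneRenderer(IList<ICachedComponent> components)
        {
            this.components = components;
        }

        public void Draw(GraphicsDevice device, SpriteBatch spriteBatch)
        {
            // Pass 1: refresh dirty caches while the back buffer is still empty,
            // so render-target switches cannot discard already-drawn content.
            foreach (var component in components)
            {
                if (component.HasChanged)
                    component.RebuildCache(device);
            }

            // Pass 2: the normal draw to the back buffer, using only cached textures.
            device.Clear(Color.CornflowerBlue);
            spriteBatch.Begin();
            foreach (var component in components)
            {
                spriteBatch.Draw(component.CachedTexture, Vector2.Zero, Color.White);
            }
            spriteBatch.End();
        }
    }
    ```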

    Read the article

  • What C++ coding standard do you use?

    - by gablin
    For some time now, I've been unable to settle on a coding standard and use it consistently between projects. When starting a new project, I tend to change some things around (add a space there, remove a space there, add a line break there, an extra indent there, change naming conventions, etc.). So I figured that I might provide a piece of sample code, in C++, and ask you to rewrite it to fit your standard of coding. Inspiration is always good, I say. ^^

    So here goes:

    ```cpp
    #ifndef _DERIVED_CLASS_H__
    #define _DERIVED_CLASS_H__

    /**
     * This is an example file used for sampling code layout.
     *
     * @author Firstname Surname
     */

    #include <stdio>
    #include <string>
    #include <list>

    #include "BaseClass.h"
    #include "Stuff.h"

    /**
     * The DerivedClass is completely useless. It represents uselessness in all its
     * entirety.
     */
    class DerivedClass : public BaseClass
    {
        ////////////////////////////////////////////////////////////
        // CONSTRUCTORS / DESTRUCTORS
        ////////////////////////////////////////////////////////////

    public:
        /**
         * Constructs a useless object with default settings.
         *
         * @param value
         *        Is never used.
         * @throws Exception
         *         If something goes awry.
         */
        DerivedClass (const int value)
            : uselessSize_ (0)
        {}

        /**
         * Constructs a copy of a given useless object.
         *
         * @param object
         *        Object to copy.
         * @throws OutOfMemoryException
         *         If necessary data cannot be allocated.
         */
        ItemList (const DerivedClass& object)
        {}

        /**
         * Destroys this useless object.
         */
        ~ItemList ();

        ////////////////////////////////////////////////////////////
        // PUBLIC METHODS
        ////////////////////////////////////////////////////////////

    public:
        /**
         * Clones a given useless object.
         *
         * @param object
         *        Object to copy.
         * @return This useless object.
         */
        DerivedClass& operator= (const DerivedClass& object)
        {
            stuff_ = object.stuff_;
            uselessSize_ = object.uselessSize_;
        }

        /**
         * Does absolutely nothing.
         *
         * @param useless
         *        Pointer to useless data.
         */
        void doNothing (const int* useless)
        {
            if (useless == NULL)
            {
                return;
            }
            else
            {
                int womba = *useless;
                switch (womba)
                {
                    case 0:
                        cout << "This is output 0";
                        break;
                    case 1:
                        cout << "This is output 1";
                        break;
                    case 2:
                        cout << "This is output 2";
                        break;
                    default:
                        cout << "This is default output";
                        break;
                }
            }
        }

        /**
         * Does even less.
         */
        void doEvenLess ()
        {
            int mySecret = getSecret ();
            int gather = 0;
            for (int i = 0; i < mySecret; i++)
            {
                gather += 2;
            }
        }

        ////////////////////////////////////////////////////////////
        // PRIVATE METHODS
        ////////////////////////////////////////////////////////////

    private:
        /**
         * Gets the secret value of this useless object.
         *
         * @return A secret value.
         */
        int getSecret () const
        {
            if ((RANDOM == 42) && (stuff_.size() > 0) || (1000000000000000000 > 0) && true)
            {
                return 420;
            }
            else if (RANDOM == -1)
            {
                return ((5 * 2) + (4 - 1)) / 2;
            }

            int timer = 100;
            bool stopThisMadness = false;
            while (!stopThisMadness)
            {
                do
                {
                    timer--;
                } while (timer > 0);
                stopThisMadness = true;
            }
        }

        ////////////////////////////////////////////////////////////
        // FIELDS
        ////////////////////////////////////////////////////////////

    private:
        /**
         * Don't know what this is used for.
         */
        static const int RANDOM = 42;

        /**
         * List of lists of stuff.
         */
        std::list <Stuff> stuff_;

        /**
         * Specifies the size of this object's uselessness.
         */
        size_t uselessSize_;
    };

    #endif
    ```

    Read the article

  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use source control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place, with almost 28% of the vote, was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use source control".

    At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed, when the system worked before but is now broken; work out what happened to their changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team have a functioning build; or find instantly whether a suspect change has been deployed to production.

    Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more of its own metadata than a traditional compiled application does. However, what is meat for a small development is poison for a team-based development. Here, we need a form of source control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges.

    The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit.

    Subversion, with its support for merges and forks and its tolerance of different work practices, can be made to work well, if used carefully. It has a standards-based architecture that allows it to be used on all platforms such as Windows, Mac, and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference the column), you'll find out about it right away, and you'll be forced to fix it. Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS.

    So the question then becomes more subtle. Would developers be less reluctant to use a fully-featured source (revision) control system for a team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think.

    Cheers, Tony.

    Read the article

  • "ERROR:Could not find java.nio.file.Paths" when using Oracle JDK 1.7

    - by Ankit
    I want to try out some features rolled out in Oracle's new JDK 1.7. I followed this post: Oracle JDK 1.7, but it doesn't seem to help. I was trying to fetch the structure of the java.nio.file.Paths class but got the following error:

    ```
    buffer@ankit:~$ javap java.nio.file.Paths
    ERROR:Could not find java.nio.file.Paths
    ```

    However, I can easily get the information about class structures up to Java SE 1.6. Here is an example:

    ```
    buffer@ankit:~$ javap java.lang.Object
    Compiled from "Object.java"
    public class java.lang.Object{
        public java.lang.Object();
        public final native java.lang.Class getClass();
        public native int hashCode();
        public boolean equals(java.lang.Object);
        protected native java.lang.Object clone() throws java.lang.CloneNotSupportedException;
        public java.lang.String toString();
        public final native void notify();
        public final native void notifyAll();
        public final native void wait(long) throws java.lang.InterruptedException;
        public final void wait(long, int) throws java.lang.InterruptedException;
        public final void wait() throws java.lang.InterruptedException;
        protected void finalize() throws java.lang.Throwable;
        static {};
    }
    ```

    Running java -version gives the following result:

    ```
    buffer@ankit:~$ java -version
    java version "1.7.0_09"
    Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
    Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
    ```

    SYSTEM INFORMATION

    ```
    buffer@ankit:~$ sudo update-alternatives --config java
    [sudo] password for buffer:
    There are 4 choices for the alternative java (providing /usr/bin/java).

      Selection    Path                                             Priority   Status
    ------------------------------------------------------------
      0            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java    1061      auto mode
      1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java    1061      manual mode
      2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java    1051      manual mode
      3            /usr/lib/jvm/jdk1.7.0_09/                         1         manual mode
    * 4            /usr/lib/jvm/jdk1.7.0_09/bin/java                 1         manual mode

    buffer@ankit:~$ sudo update-alternatives --config javac
    There are 2 choices for the alternative javac (providing /usr/bin/javac).

      Selection    Path                                          Priority   Status
    ------------------------------------------------------------
      0            /usr/lib/jvm/java-6-openjdk-amd64/bin/javac    1061      auto mode
      1            /usr/lib/jvm/java-6-openjdk-amd64/bin/javac    1061      manual mode
    * 2            /usr/lib/jvm/jdk1.7.0_09/bin/javac             1         manual mode

    buffer@ankit:~$ sudo update-alternatives --config javaws
    There are 3 choices for the alternative javaws (providing /usr/bin/javaws).

      Selection    Path                                               Priority   Status
    ------------------------------------------------------------
      0            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws    1061      auto mode
      1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws    1061      manual mode
      2            /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws    1060      manual mode
    * 3            /usr/lib/jvm/jdk1.7.0_09/bin/javaws                 1         manual mode
    ```

    The directory structure of /usr/lib/jvm/ is as follows:

    ```
    buffer@ankit:~$ ls -l /usr/lib/jvm/
    total 24
    lrwxrwxrwx 1 root   root     24 Dec  2  2011 default-java -> java-1.6.0-openjdk-amd64
    drwxr-xr-x 4 root   root   4096 Nov  8 16:24 java-1.5.0-gcj-4.6
    lrwxrwxrwx 1 root   root     24 Dec  2  2011 java-1.6.0-openjdk -> java-1.6.0-openjdk-amd64
    lrwxrwxrwx 1 root   root     20 Oct 25 00:01 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64
    lrwxrwxrwx 1 root   root     20 Oct 25 06:59 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64
    lrwxrwxrwx 1 root   root     24 Dec  2  2011 java-6-openjdk -> java-1.6.0-openjdk-amd64
    drwxr-xr-x 7 root   root   4096 Nov  8 16:24 java-6-openjdk-amd64
    drwxr-xr-x 3 root   root   4096 Nov  8 16:24 java-6-openjdk-common
    drwxr-xr-x 5 root   root   4096 Nov  8 05:48 java-7-openjdk-amd64
    drwxr-xr-x 3 root   root   4096 Nov  8 05:48 java-7-openjdk-common
    drwxr-xr-x 8 buffer buffer 4096 Sep 25 09:08 jdk1.7.0_09
    ```

    Any help would be highly appreciated.

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts, then join us as we take a look at the Forecastfox Weather extension for Google Chrome.

    Getting Started

    As soon as the Forecastfox Weather extension has finished installing, you will automatically be presented with the "Customize Forecastfox Page". The default setting is for New York with English measurement units. Enter your location into the blank and hit "Enter" to display the listing for your city/area. If you are presented multiple options to choose from, simply click on the appropriate listing.

    Once you have your city/area displayed, you will notice that it is possible to have access to weather forecasts for multiple locations. You can easily remove any unneeded listings with the "Remove Link". For our example we removed the New York listing.

    Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required).

    Forecastfox Weather in Action

    You can hover your mouse over the "Toolbar Button" to see the current weather conditions. Clicking on the "Toolbar Button" opens a popup window with the current conditions, 7 day forecast, and a static satellite image. If desired, you can access additional details for the current weather conditions. Clicking on "details" opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc.

    Note: AccuWeather.com webpages will have some ads displayed.

    Perhaps you need the Hourly Forecast... Once again, a new tab will be opened with the predicted hourly weather conditions for the current day. Going back to the popup window, you may also select a specific day from the 7 day forecast. You will be presented with a "Day & Night" forecast for the chosen day, with links to view "Additional Details & Hourly" information. Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open, you can choose from a variety of different satellite images.

    Conclusion

    If you have been wanting a solid weather forecast extension for your Chrome browser, then Forecastfox Weather is definitely a recommended install.

    Links

    Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there’s been an explosion of special interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there’s a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.

    Drivers: reach, scalability, media hosting, and global distribution.

    Solution: here’s a sketch of how a social media application might be built out on Windows Azure.

    Ingredients:
    - Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
    - Access Control – this service is essential to managing user identity. It’s backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it’s also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
    - Web Role – hosts the core of the web application and presents a central social hub to users.
    - Database – used to store core operational, functional, and workflow data for the solution’s web services.
    - Caching (optional) – as a web site’s traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Tables (optional) – for semi-structured data streams that don’t need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
    - Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely.
    - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training: these links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    - Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    - SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    - Windows Azure Services (9 labs) – as applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
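    To make the Tables ingredient above a little more concrete, here is a minimal C# sketch (not part of the original recipe) of persisting an activity-stream entry with the Windows Azure storage client library (Microsoft.WindowsAzure.Storage); the entity shape, table name, and connection string are illustrative assumptions.

        // Minimal sketch only: entity shape, table name, and connection string are assumptions.
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;

        public class ActivityEntry : TableEntity
        {
            public ActivityEntry(string userId, string activityId)
            {
                PartitionKey = userId;    // one partition per user keeps that user's stream together
                RowKey = activityId;      // e.g. a reverse-chronological ticks value
            }

            public ActivityEntry() { }    // parameterless ctor required by the table client

            public string Message { get; set; }
        }

        public static class ActivityStream
        {
            public static void Save(string connectionString, ActivityEntry entry)
            {
                var account = CloudStorageAccount.Parse(connectionString);
                var client = account.CreateCloudTableClient();
                var table = client.GetTableReference("activities");

                table.CreateIfNotExists();                    // no-op if the table already exists
                table.Execute(TableOperation.Insert(entry));  // single-entity insert
            }
        }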

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR

    The following requirements must be met in order for CR to access an Oracle stored procedure:

    1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately and before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.
    2. The procedure must have a parameter that is a REF CURSOR type. This is because CR uses this parameter to access and define the result set that the stored procedure returns.
    3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR will perform a FETCH call for every row from the query's result. This is why the parameter must be defined as IN OUT.
    4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
    5. The REF CURSOR variable must be opened and assigned its query within the procedure.
    6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
    7. The stored procedure cannot call another stored procedure.
    8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
    9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.
    10. If you are using the native Oracle driver and using hard-coded date selection within the procedure, the date selection must use either a string representation in the format 'YYYY-MM-DD' (e.g. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (e.g. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
    11. Most importantly, this stored procedure must execute successfully in Oracle's SQL*Plus utility.

    If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.
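    To illustrate requirements 1 to 5, here is a minimal PL/SQL sketch (not taken from the white paper); the customer table and its columns are hypothetical:

        -- Minimal illustrative sketch; the CUSTOMER table and column names are hypothetical.
        CREATE OR REPLACE PACKAGE customer_report_pkg AS
          TYPE customer_rec IS RECORD (
            customer_id   NUMBER,
            customer_name VARCHAR2(100)
          );
          -- Strongly bound REF CURSOR: its row structure is fixed at compile time.
          TYPE customer_cur IS REF CURSOR RETURN customer_rec;
        END customer_report_pkg;
        /

        CREATE OR REPLACE PROCEDURE get_customers (
          report_cursor IN OUT customer_report_pkg.customer_cur,  -- CR fetches rows through this
          p_min_id      IN     NUMBER                             -- input-only report parameter
        ) AS
        BEGIN
          -- The cursor is opened and assigned its query inside the procedure (requirement 5).
          OPEN report_cursor FOR
            SELECT customer_id, customer_name
            FROM   customer
            WHERE  customer_id >= p_min_id;
        END get_customers;
        /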

    Read the article

  • Taking a screenshot from within a Silverlight #WP7 application

    - by Laurent Bugnion
    Often you want to take a screenshot of an application’s page. There can be multiple reasons. For instance, you can use this to provide an easy feedback method to beta testers. I find this super invaluable when working on integration of design in an app: the user can take quick screenshots, attach them to an email and send them to me directly from the Windows Phone device. However, the same mechanism can also be used to provide screenshots as a feature of the app, for example if the user wants to save the current status of his application, etc.

    Caveats

    Note the following:
    - The code requires an XNA library to save the picture to the media library. To have this, follow these steps: in your application (or class library), add a reference to Microsoft.Xna.Framework; in your code, add a “using” statement for Microsoft.Xna.Framework.Media; in the Properties folder, open WMAppManifest.xml and add the following capability: ID_CAP_MEDIALIB.
    - The method call will fail with an exception if the device is connected to the Zune application on the PC. To avoid this, either disconnect the device when testing, or end the Zune application on the PC.
    - While the method call will not fail on the emulator, there is no way to access the media library, so it is pretty much useless on this platform.
    - This method only prints Silverlight elements to the output image. Other elements (such as a WebBrowser control’s content, for instance) will output a black rectangle.

    The code

        public static void SaveToMediaLibrary(
            FrameworkElement element, string title)
        {
            try
            {
                var bmp = new WriteableBitmap(element, null);

                var ms = new MemoryStream();
                bmp.SaveJpeg(
                    ms,
                    (int)element.ActualWidth,
                    (int)element.ActualHeight,
                    0,
                    100);
                ms.Seek(0, SeekOrigin.Begin);

                var lib = new MediaLibrary();
                var filePath = string.Format(title + ".jpg");
                lib.SavePicture(filePath, ms);

                MessageBox.Show(
                    "Saved in your media library!",
                    "Done",
                    MessageBoxButton.OK);
            }
            catch
            {
                MessageBox.Show(
                    "There was an error. Please disconnect your phone from the computer before saving.",
                    "Cannot save",
                    MessageBoxButton.OK);
            }
        }

    This method can save any FrameworkElement. Typically I use it to save a whole page, but you can pass any other element to it. First, we create a new WriteableBitmap; this excellent class can render a visual tree into a bitmap. Note that for even more features, you can use the great WriteableBitmapEx class library (which is open source). Next, we save the WriteableBitmap to a MemoryStream. The only format supported by default is JPEG; however, it is possible to convert to other formats with the ImageTools library (also open source). Finally, the picture is saved to the Windows Phone device’s media library.

    Using the image

    To retrieve the image, simply launch the Pictures library on the phone. The image will be in Saved Pictures. From here, you can share the image (by email, for instance), or synchronize it with the PC using the Zune software.

    Saving to other platforms

    It is of course possible to save to other platforms than the media library. For example, you can send the image to a web service, or save it to the isolated storage on the device. To do this, instead of using a MemoryStream, you can use any other stream (such as a web request stream, or a file stream) and save to that instead.

    Hopefully this code will be helpful to you! Happy coding, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
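    As a quick usage sketch (not part of the original post), the method can be wired to a button’s Click handler; LayoutRoot is assumed to be the root grid of the standard Windows Phone page template:

        // Hypothetical caller; LayoutRoot is the root grid of the standard WP7 page template.
        private void ScreenshotButton_Click(object sender, RoutedEventArgs e)
        {
            // Captures the whole page content and stores it as "MainPage.jpg"
            // in the phone's media library (Saved Pictures).
            SaveToMediaLibrary(this.LayoutRoot, "MainPage");
        }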

    Read the article

  • An Unusual UpdatePanel

    - by João Angelo
    The code you are about to see was mostly to prove a point, to myself, and probably has limited applicability. Nonetheless, in the remote possibility this is useful to someone here it goes… So this is a control that acts like a normal UpdatePanel where all child controls are registered as postback triggers except for a single control specified by the TriggerControlID property. You could basically achieve the same thing by registering all controls as postback triggers in the regular UpdatePanel. However with this, that process is performed automatically. Finally, here is the code:

        public sealed class SingleAsyncTriggerUpdatePanel : WebControl, INamingContainer
        {
            public string TriggerControlID { get; set; }

            [TemplateInstance(TemplateInstance.Single)]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            public ITemplate ContentTemplate { get; set; }

            public override ControlCollection Controls
            {
                get
                {
                    this.EnsureChildControls();
                    return base.Controls;
                }
            }

            protected override void CreateChildControls()
            {
                if (string.IsNullOrWhiteSpace(this.TriggerControlID))
                    throw new InvalidOperationException(
                        "The TriggerControlId property must be set.");

                this.Controls.Clear();

                var updatePanel = new UpdatePanel()
                {
                    ID = string.Concat(this.ID, "InnerUpdatePanel"),
                    ChildrenAsTriggers = false,
                    UpdateMode = UpdatePanelUpdateMode.Conditional,
                    ContentTemplate = this.ContentTemplate
                };

                updatePanel.Triggers.Add(new SingleControlAsyncUpdatePanelTrigger
                {
                    ControlID = this.TriggerControlID
                });

                this.Controls.Add(updatePanel);
            }
        }

        internal sealed class SingleControlAsyncUpdatePanelTrigger : UpdatePanelControlTrigger
        {
            private Control target;
            private ScriptManager scriptManager;

            public Control Target
            {
                get
                {
                    if (this.target == null)
                    {
                        this.target = this.FindTargetControl(true);
                    }

                    return this.target;
                }
            }

            public ScriptManager ScriptManager
            {
                get
                {
                    if (this.scriptManager == null)
                    {
                        var page = base.Owner.Page;

                        if (page != null)
                        {
                            this.scriptManager = ScriptManager.GetCurrent(page);
                        }
                    }

                    return this.scriptManager;
                }
            }

            protected override bool HasTriggered()
            {
                string asyncPostBackSourceElementID = this.ScriptManager.AsyncPostBackSourceElementID;

                if (asyncPostBackSourceElementID == this.Target.UniqueID)
                    return true;

                return asyncPostBackSourceElementID.StartsWith(
                    string.Concat(this.target.UniqueID, "$"),
                    StringComparison.Ordinal);
            }

            protected override void Initialize()
            {
                base.Initialize();

                foreach (Control control in FlattenControlHierarchy(this.Owner.Controls))
                {
                    if (control == this.Target)
                        continue;

                    bool isApplicableControl = false;
                    isApplicableControl |= control is INamingContainer;
                    isApplicableControl |= control is IPostBackDataHandler;
                    isApplicableControl |= control is IPostBackEventHandler;

                    if (isApplicableControl)
                    {
                        this.ScriptManager.RegisterPostBackControl(control);
                    }
                }
            }

            private static IEnumerable<Control> FlattenControlHierarchy(
                ControlCollection collection)
            {
                foreach (Control control in collection)
                {
                    yield return control;

                    if (control.Controls.Count > 0)
                    {
                        foreach (Control child in FlattenControlHierarchy(control.Controls))
                        {
                            yield return child;
                        }
                    }
                }
            }
        }

    You can use it like this, meaning that only the B2 button will trigger an async postback:

        <cc:SingleAsyncTriggerUpdatePanel ID="Test" runat="server" TriggerControlID="B2">
            <ContentTemplate>
                <asp:Button ID="B1" Text="B1" runat="server" OnClick="Button_Click" />
                <asp:Button ID="B2" Text="B2" runat="server" OnClick="Button_Click" />
                <asp:Button ID="B3" Text="B3" runat="server" OnClick="Button_Click" />
                <asp:Label ID="LInner" Text="LInner" runat="server" />
            </ContentTemplate>
        </cc:SingleAsyncTriggerUpdatePanel>
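    For completeness, a minimal sketch of the Button_Click handler the markup above assumes (it is not part of the original post):

        // Hypothetical code-behind handler for the sample markup above.
        protected void Button_Click(object sender, EventArgs e)
        {
            var button = (Button)sender;

            // Update the label inside the panel so it is easy to see which button
            // caused the postback and whether the update was asynchronous.
            LInner.Text = string.Format(
                "Clicked {0} at {1:HH:mm:ss}", button.ID, DateTime.Now);
        }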

    Read the article

  • Switching my collision detection to array lists caused it to stop working

    - by Charlton Santana
    I have made a collision detection system which worked when I did not use array lists and block generation. It's odd that it stopped working, but here's the code; if anyone could help I would be very grateful :) The first snippet is the block generation.

        private static final List<Block> BLOCKS = new ArrayList<Block>();

        Random rnd = new Random(System.currentTimeMillis());
        int randomx = 400;
        int randomy = 400;
        int blocknum = 100;
        String Title = "blocktitle" + blocknum;
        private Block block;

        public void generateBlocks(){
            if(blocknum > 0){
                int offset = rnd.nextInt(250) + 100; // offset will be between 100 and 349
                randomx += offset;
                int randomyoff = rnd.nextInt(80);    // vertical offset between 0 and 79
                randomy = platformheighttwo - 6 - randomyoff;
                block = new Block(BitmapFactory.decodeResource(getResources(), R.drawable.block2), randomx, randomy);
                BLOCKS.add(block);
                blocknum -= 1;
            }
        }

    The second snippet is where the collision detection takes place. Note: the block.draw(canvas) call works perfectly; it's the collision checks against the blocks that don't work.

        for(Block block : BLOCKS) {
            block.draw(canvas);

            // bottom right touching block?
            if (sprite.bottomrx < block.bottomrx && sprite.bottomrx > block.bottomlx
                    && sprite.bottomry < block.bottommy && sprite.bottomry > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // bottom left touching block?
            if (sprite.bottomlx < block.bottomrx && sprite.bottomlx > block.bottomlx
                    && sprite.bottomly < block.bottommy && sprite.bottomly > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // top right touching block?
            if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx
                    && sprite.topry < block.bottommy && sprite.topry > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
            // top left touching block?
            if (sprite.toprx < block.bottomrx && sprite.toprx > block.bottomlx
                    && sprite.topry < block.bottommy && sprite.topry > block.topry ){
                Log.d(TAG, "Collided!!!!!!!!!!!!1");
            }
        }

    The values (e.g. bottomrx) are in the Block.java file.
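    For comparison, here is a hedged sketch (not from the original question) of the same overlap test using android.graphics.Rect, which collapses the four corner checks into a single bounding-box intersection; the Sprite type and the getLeft/getTop/getRight/getBottom accessors are assumed helpers, not fields from the posted code.

        // Sketch only. Requires: import android.graphics.Rect;
        // Sprite and the getLeft()/getTop()/getRight()/getBottom() accessors are assumptions.
        private boolean collidesWithAnyBlock(Sprite sprite) {
            Rect spriteBounds = new Rect(
                    sprite.getLeft(), sprite.getTop(), sprite.getRight(), sprite.getBottom());

            for (Block block : BLOCKS) {
                Rect blockBounds = new Rect(
                        block.getLeft(), block.getTop(), block.getRight(), block.getBottom());

                // Rect.intersects is true whenever the two rectangles overlap, which covers
                // every corner case the four hand-written checks above test individually.
                if (Rect.intersects(spriteBounds, blockBounds)) {
                    return true;
                }
            }
            return false;
        }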

    Read the article

  • Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center

    - by Anand Akela
    Oracle Enterprise Manager Ops Center 12c now comes out of the box with the latest release of the Oracle Exalogic Elastic Cloud 2.0.1 software. It allows customers to manage and monitor all components inside the Exalogic rack, including provisioning and management of physical and virtualized servers. Ops Center also allows customers to easily get started with creating and managing private clouds using the Exalogic components. Here is a snapshot of the Assets view showing the manageable components of a Quarter Rack with 8 Compute Nodes.

    A colleague has recently posted an interesting series of "Exalogic 2.0.1 Tea Break Snippets" which will guide you through the initial steps of setting up your Exalogic environment:
    - Exalogic 2.0.1 Tea Break Snippets - Creating Cloud Users: https://blogs.oracle.com/ATeamExalogic/entry/exalogic_2_0_1_tea1
    - Exalogic 2.0.1 Tea Break Snippets - Creating Networks: https://blogs.oracle.com/ATeamExalogic/entry/exalogic_2_0_1_tea2
    - Exalogic 2.0.1 Tea Break Snippets - Allocating Static IP Addresses: https://blogs.oracle.com/ATeamExalogic/entry/exalogic_2_0_1_tea3
    - Exalogic 2.0.1 Tea Break Snippets - Creating Accounts: https://blogs.oracle.com/ATeamExalogic/entry/exalogic_2_0_1_tea4
    - Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template: https://blogs.oracle.com/ATeamExalogic/entry/exalogic_2_0_1_tea5

    Have fun reading these very useful postings!

    Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Antenna Aligner Part 4: Role'ing in the deep

    - by Chris George
    Since last time I've been trying to sort out the general workflow of the app. It's fundamentally not hard: there is a list of transmitters, you select a transmitter and it shows the compass view. Having done quite a bit of ajax/asp.net/html in the past, I immediately started off by creating two divs within my 'page', one for the list, one for the compass. Then, using the onClick event in the list, I would switch the display attribute on the divs. This seemed to work, but did lead to some dodgy transitional redrawing artefacts which I was not happy with. So after some Googling I realised I was doing it all wrong! jQuery Mobile has the concept of giving an object in html a data-role. By giving a div the attribute data-role="page" it is then treated as a separate page on the mobile device. Within the code, this is referenced like an HTML anchor in the form #mypage. Using this system, page transitions such as fade or slide are automatically applied, which adds to the whole authenticity of the app! Here is a simple example:

        <a href="#compasspage">compass</a>

        <div data-role="page" id="compasspage" data-add-back-btn="true">

    But I don't want just a static link, I want to dynamically create my list, and get each list element to switch to the compass page with the right information. So here is the jQuery that I used to dynamically inject new <li> rows into the <ul> block:

        $('ul').append($('<li/>', {        // here appending `<li>`
            'data-role': "list-divider"
        }).append($('<a/>', {              // here appending `<a>` into `<li>`
            'href': '#compasspage',
            'data-transition': 'none',
            'onclick': 'selectTx(' + i + ')',
            'html': buttonHtml
        })));
        $('ul').listview('refresh');

    This is called within a for loop so the first 5 appropriate transmitters are used. There are several things of interest to note here. Firstly, I could not find a more elegant way to tell the target page which transmitter I've clicked on, so I have used the onclick event as well as the href attribute. The onclick event fires 'selectTx', which simply sets a global member variable to the specific index number I've clicked on. Yes, it's not nice, but it works. Secondly, the data-transition attribute is set to 'none'. I wanted the transition between the pages to be a whooshy slidey effect. However this worked going to the compass page, but returning to the list page gave some undesirable visual artefacts (flickering, redrawing etc.), so I decided to remove the transitions altogether, which was a shame. Thirdly, rather than embedding loads of html into the append command, I moved this out into a variable 'buttonHtml'. Doing this really tidied up my code. Until next time!

    Read the article

  • Convert old AVI files to a modern format

    - by iWerner
    Hi, we have a collection of old home videos that were saved in AVI format a long time ago. I want to convert these files to a more modern format because the Totem Movie Player that comes with Ubuntu 10.04 seems to be the only program capable of playing them. The files seem to be encoded with an MJPEG codec, and playing them in VLC or Windows Media Player plays only the sound but there is no video. Avidemux was able to open the files, but the quality of the video is severely degraded: the video skips frames and is interlaced (it's not interlaced when playing it in Totem). Neither ffmpeg nor mencoder seems to be able to read the video stream. mencoder reports that it is using ffmpeg's codec. Here's a section from its output:

        ==========================================================================
        Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
        [mjpeg @ 0x92a7260]mjpeg: using external huffman table
        [mjpeg @ 0x92a7260]mjpeg: error using external huffman table, switching back to internal
        Unsupported PixelFormat -1
        Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    while running ffmpeg produces the following:

        $ ffmpeg -i input.avi output.avi
        FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-version=4:0.5.1-1ubuntu1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 1 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Mar 4 2010 12:35:30, gcc: 4.4.3
        [avi @ 0x87952c0]non-interleaved AVI
        Input #0, avi, from 'input.avi':
          Duration: 00:00:15.24, start: 0.000000, bitrate: 22447 kb/s
            Stream #0.0: Video: mjpeg, yuvj422p, 720x544, 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 720x544, q=2-31, 200 kb/s, 90k tbn, 25 tbc
            Stream #0.1: Audio: mp2, 44100 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame=    0 fps=  0 q=0.0 Lsize=     143kB time=15.23 bitrate=  76.9kbits/s
        video:0kB audio:119kB global headers:0kB muxing overhead 20.101777%

    So the problem is that the output does not contain any video, as evidenced by the video:0kB at the end. In all of the above cases the audio comes out fine. So my question is: what can I do to convert these files to a more modern format with more modern codecs?

    Read the article

  • Simple MVVM Walkthrough – Refactored

    - by Sean Feldman
    JR has put together a good introduction post on the MVVM pattern. I love kick-start examples that serve the purpose well. And even more than that, I love examples that can also pass the real-world project check. So I took the sample code and refactored it slightly for a few aspects that might make a lot of developers raise a brow.

    Michael has mentioned model (entity) visibility from the view. I agree on that. A few other items that don't sit well are using property names as strings (magic strings) and the Saver class's internal casting of a parameter (custom code for each Saver command).

    Fixing the property name usage is a straightforward exercise – leverage expressions. Something simple like this would do the initial job:

        class PropertyOf<T>
        {
            public static string Resolve(Expression<Func<T, object>> expression)
            {
                // Value-type properties (such as int) are wrapped in a Convert node
                // when boxed to object, so unwrap it before reading the member name.
                var body = expression.Body;
                if (body is UnaryExpression)
                {
                    body = ((UnaryExpression)body).Operand;
                }

                var member = (MemberExpression)body;
                return member.Member.Name;
            }
        }

    With this, refactoring of property names becomes an easy task, with confidence that an old property name string will not get left behind. An updated Invoice would look like this:

        public class Invoice : INotifyPropertyChanged
        {
            private int id;
            private string receiver;

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string propertyName)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
                }
            }

            public int Id
            {
                get { return id; }
                set
                {
                    if (id != value)
                    {
                        id = value;
                        OnPropertyChanged(PropertyOf<Invoice>.Resolve(x => x.Id));
                    }
                }
            }

            public string Receiver
            {
                get { return receiver; }
                set
                {
                    receiver = value;
                    OnPropertyChanged(PropertyOf<Invoice>.Resolve(x => x.Receiver));
                }
            }
        }

    For the saver, I decided to change it a little so it now becomes a "view-model agnostic" command, one that can be used for multiple commands/view-models. The updated Saver code now accepts an action at construction time and executes that action. No more black magic:

        internal class Command : ICommand
        {
            private readonly Action executeAction;

            public Command(Action executeAction)
            {
                this.executeAction = executeAction;
            }

            public bool CanExecute(object parameter)
            {
                return true;
            }

            public event EventHandler CanExecuteChanged;

            public void Execute(object parameter)
            {
                // no more black magic
                executeAction();
            }
        }

    The change in InvoiceViewModel is the instantiation of the Saver command and the execution action for the specific command:

        public ICommand SaveCommand
        {
            get
            {
                if (saveCommand == null)
                    saveCommand = new Command(ExecuteAction);
                return saveCommand;
            }
            set { saveCommand = value; }
        }

        private void ExecuteAction()
        {
            DisplayMessage = string.Format("Thanks for creating invoice: {0} {1}", Invoice.Id, Invoice.Receiver);
        }

    This way the internal knowledge of InvoiceViewModel remains in InvoiceViewModel, and Command (ex-Saver) is view-model agnostic. Now the sample is not only a good introduction, but also has some practicality in it. My 5 cents on the subject.

    Sample code: MvvmSimple2.zip
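    As a quick, hypothetical usage note (not from the original post), the resolver turns a compile-checked expression into the property name string:

        // Hypothetical usage: resolves the property name at runtime from the expression.
        string propertyName = PropertyOf<Invoice>.Resolve(x => x.Receiver);
        // propertyName == "Receiver"; renaming Invoice.Receiver in the IDE also updates this call,
        // which is exactly the guarantee a raw "Receiver" string cannot give.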

    Read the article

< Previous Page | 505 506 507 508 509 510 511 512 513 514 515 516  | Next Page >