Search Results

Search found 685 results on 28 pages for 'experiment'.


  • Make a compiled binary run at native speed flawlessly without recompiling from source on another system?

    - by unknownthreat
    I know that many people, at first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate on my question first. Normally, when we want a program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile it from source. If you want to run a program from another system on your system, you need to use a virtual machine or an emulator. While these tools let you use the program you need on a non-native OS, they sometimes suffer from performance problems and glitches. There are also JIT compilers, where the compiler translates the bytecode program into native machine language just before execution. Performance can improve to a very good extent with a JIT compiler, but it is still not the same as running on the native system. Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at mid-high settings at 1280 x 1024. On Linux, I need to turn everything down to low at 1280 x 1024 to get ~40 fps. There are two notable things though: polygon model settings do not seem to affect the framerate whether I set them low or high, and when there are post-processing effects or special effects that require manipulation of the drawn pixels of the current frame, the framerate drops to 10-20 fps. From this I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that require the graphics card to do the work, it slows to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2; although there are flaws, they can run at lower settings. Or perhaps I should also ask, "is it possible to translate a whole program from one system to another without recompiling from source and still get native speed?" I see that we also have AOT compilers; is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that it is impossible for a program that is not native to the system to run flawlessly at native speed?

    Read the article

  • Good design of mapping Java Domain objects to Tables (using Hibernate)

    - by M. McKenzie
    Hey guys, I have a question that is more in the realm of design than implementation. I'm also happy for anyone to point out resources for the answer and I'll gladly research for myself. Highly simplified Java and SQL: say I have a business domain POJO called 'Picture' with three attributes:

        class Picture
            int idPicture
            String fileName
            long size

    Say I have another business domain POJO called 'Item' with three attributes:

        class Item
            int idItem
            String itemName
            ArrayList itemPictures

    This would be a normal, simple relationship. You could say that a 'Picture' object will never exist outside an 'Item' object. Assume a picture belongs only to a specific item, but that an item can have multiple pictures. Now, using good database design (3rd Normal Form), we know that we should put items and pictures in their own tables. Here is what I assume would be correct:

        table Item
            int idItem (primary key)
            String itemName

        table Picture
            int idPicture (primary key)
            varchar(45) fileName
            long size
            int idItem (foreign key)

    Here is my question: suppose you are making Hibernate mapping files for these objects. In the data design, your Picture table needs a column that refers to the Item, so that a foreign key relationship can be maintained. However, in your business domain objects, your Picture does not hold a reference/attribute to the idItem, and does not need to know it. A Java Picture instance is always instantiated inside an Item instance. If you want to know the Item that the Picture belongs to, you are already in the correct scope: call myItem.getIdItem() and myItem.getItemPictures(), and you have the two pieces of information you need. I know that the Hibernate tools include a generator that can automatically create your POJOs by looking at your database. My problem stems from the fact that I planned out the data design for this experiment/project first. Then, when I went to write the domain Java objects, I realized that good design dictated that the objects hold other objects in a nested way. This is obviously different from the way a database schema works, where all objects (tables) are flat and hold no complex types within them. What is a good way to reconcile this? Would you: (A) make the Hibernate mapping files so that Picture.hbm.xml has a mapping to the POJO parent's idItem field (if that's even possible); (B) add an int attribute to the Picture class to refer to the idItem and set it at instantiation, thus simplifying the hbm.xml mapping file by having all table fields as local attributes in the class; or (C) fix the database design because it is wrong, dork. I'd truly appreciate any feedback.
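
    For what it's worth, the usual Hibernate answer is a variation on (B): give Picture a reference back to its owning Item (an object reference rather than a raw idItem int) and mark the collection on Item as the inverse side, so Hibernate maintains the foreign key column while your code still only navigates from Item to Picture. Below is a minimal sketch using JPA-style annotations rather than hbm.xml; the cascade settings, column names and generated-id strategy are illustrative assumptions, not the poster's actual mapping.

        import java.util.ArrayList;
        import java.util.List;
        import javax.persistence.*;

        @Entity
        @Table(name = "Item")
        class Item {
            @Id @GeneratedValue
            @Column(name = "idItem")
            private int idItem;

            private String itemName;

            // 'mappedBy' makes Picture.item the owner of the foreign key column;
            // cascade/orphanRemoval let pictures live and die with their item.
            @OneToMany(mappedBy = "item", cascade = CascadeType.ALL, orphanRemoval = true)
            private List<Picture> itemPictures = new ArrayList<Picture>();

            public void addPicture(Picture p) {
                p.setItem(this);        // keep both sides of the association in sync
                itemPictures.add(p);
            }
            // getters/setters omitted
        }

        @Entity
        @Table(name = "Picture")
        class Picture {
            @Id @GeneratedValue
            @Column(name = "idPicture")
            private int idPicture;

            private String fileName;
            private long size;

            // Maps the idItem foreign key without exposing a bare int in the domain model.
            @ManyToOne(fetch = FetchType.LAZY)
            @JoinColumn(name = "idItem", nullable = false)
            private Item item;

            void setItem(Item item) { this.item = item; }
            // getters/setters omitted
        }

    The hbm.xml equivalent would be a <many-to-one name="item" column="idItem"/> element in Picture.hbm.xml and a collection marked inverse="true" with <key column="idItem"/> in Item.hbm.xml.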

    Read the article

  • Wordpress installed in root folder, subdomain now not working, GoDaddy host

    - by Kristin
    Hi, please forgive me for being a complete beginner at this. I'd rather not have to deal with this myself, but as GoDaddy support have not replied after 2 days I'm going to have to. I think my problem is the same as the one above, but I'm not 100% sure, so I'm reposting it; I'm not really confident enough to attempt the fixes I've seen here, so I need someone to give me baby instructions. Our original website (www.mwpics.com.au) was built in Dreamweaver etc. Recently we created a new website in WordPress, in a subdomain, then migrated it over to the root folder, where it is now operating fine. I also moved the files for the old website into another directory which I called 'old', so they're all still there. The problem is that I have a subdomain set up, which is still showing as set up in the GoDaddy control panel; the URL is www.mwpics.com.au/clients and it is at www.clients.mwpics.com.au. This directory contains loads of other directories, each of which is password protected by .htaccess files and which our clients access directly (not through the site) to download their finished work. The test one, and the one for random clients, is www.mwpics.com.au/clients/temp - username and password both temp (the usernames are all the same as the directory names). Since the WP install to the root directory, the /clients extension no longer works (it should bring up an information page, which is an .html index page in the directory), and the /clients/name extensions no longer work - they go back to the WP site with a 'not found' error message. Strangely it does bring up the box for the username and password, but when you enter it, it just goes back to the 'not found' message. Someone told me it was the .htaccess file - so, as an experiment, I renamed the .htaccess file in the root directory and then copied the .htaccess file from the old root files into the root directory, and eureka! It worked - and the WP site also opened to the home page... but bummer - the /pages in the WP site no longer worked! But at least I know the source of the problem. So I switched it back, and this is the status quo - I have no idea how to fix this, and with everyone back at work tomorrow, clients are going to want to start downloading their stuff... Can anyone help me? I'm starting to panic a bit.

    Read the article

  • SVG via dynamic XML+XSL

    - by Daniel
    This is a bit of a vague notion which I have been running over in my head, and which I am very curious if there is an elegant method of solving. Perhaps it should be taken as a thought experiment. Imagine you have an XML schema with a corresponding XSL transform, which renders the XML as SVG in the browser. The XSL generates SVG with appropriate Javascript handlers that, ultimately, implement editing-like functionality such that properties of the objects or their locations on the SVG canvas can be edited by the user. For instance, an element can be dragged from one location to another. Now, this isn't particularly difficult - the drag/drop example is simply a matter of changing the (x,y) coordinates of the SVG object, or a resize operation would be a simple matter of changing its width or height. But is there an elegant way to have Javascript work on the DOM of the source XML document instead of the rendered SVG? Why, you ask? Well, imagine you have very complex XSL transforms, where the modification of one property results in complex changes to the SVG. You want to maintain simplicity in your Javascript code, but also a simple way to persist the modified XML back to the server. Some possibilities of how this may function: After modification of the source DOM, simply re-run the XSL transform and replace the original. Downside: brute force, potentially expensive operation. Create id/class naming conventions in the source and target XML/SVG so elements can be related back to each other, and do an XSL transform on only a subset of the new DOM. In other words, modify temporary DOM, apply XSL to it, remove changed elements from SVG, and insert the new one. Downside: May not be possible to apply XSL to temporary in-browser DOMs(?). Also, perhaps a bit convoluted or ugly to maintain. I think that it may be possible to come up with a framework that handles the second scenario, but the challenge would be making it lightweight and not heavily tied to the actual XML schema. Any ideas or other possibilities? Or is there maybe an existing method of doing this which I'm not aware of? UPDATE: To clarify, as I mentioned in a comment below, this aids in separating the draw code from the edit code. For a more concrete example of how this is useful, imagine an element which determines how it is drawn dependent on the value of a property of an adjacent element. It's better to condense that logic directly in the draw code instead of also duplicating it in the edit code.
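
    If it helps to prototype the second option outside the browser first, the "re-run the transform over just the changed subtree" step looks roughly like this in Java's javax.xml.transform (the file names and the box element / x attribute are purely illustrative; in the browser the same role would be played by XSLTProcessor):

        import java.io.File;
        import java.io.StringWriter;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class PartialTransformSketch {
            public static void main(String[] args) throws Exception {
                // Parse the source XML document (the model the JavaScript would edit).
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new File("drawing.xml"));

                // Simulate an edit: move one element by changing an attribute on the source DOM.
                Element box = (Element) doc.getElementsByTagName("box").item(0);
                box.setAttribute("x", "120");

                // Re-apply the stylesheet to just that node (option 2) or to doc (option 1)
                // and capture the regenerated SVG fragment.
                Transformer xslt = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File("drawing-to-svg.xsl")));
                StringWriter svgFragment = new StringWriter();
                xslt.transform(new DOMSource(box), new StreamResult(svgFragment));

                // The fragment would then replace the matching node in the rendered SVG.
                System.out.println(svgFragment);
            }
        }

    The caveat is the same one raised above: when a fragment is transformed on its own, the templates see that node as their context, so the stylesheet has to be written to tolerate being invoked without the full document around it.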

    Read the article

  • Request URL in Javascript, Fetch URL Content using Java Applet, return to Javascript?

    - by Sam G
    I'm in the process of making a little experiment: it grabs a YouTube page and returns the highest quality MP4 link, then plays this in an HTML5 video element. I was using PHP with cURL to get the URL content (YouTube), but that only works on my local server (the MP4 link is locked to IP address). I can't think of any other way to get the page content, due to cross-domain rules, except a Java applet. So I've built a Java applet that should return the content of a URL.

    Java

        import java.applet.Applet;
        import java.awt.*;
        import java.io.ByteArrayOutputStream;
        import java.io.InputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class URLFetcherabc extends Applet {

            public void init() {
            }

            public void paint(Graphics g) {
                g.drawString("Java loaded. Waiting for URL", 0, 10);
            }

            public String getURL(String url, String httpMethod) {
                try {
                    URL u = new URL(url);
                    HttpURLConnection conn = (HttpURLConnection) u.openConnection();
                    conn.setRequestMethod(httpMethod);
                    InputStream is = conn.getInputStream();
                    ByteArrayOutputStream output = new ByteArrayOutputStream();
                    byte[] buffer = new byte[1024];
                    for (int bytesRead = 0; (bytesRead = is.read(buffer)) != -1; ) {
                        output.write(buffer, 0, bytesRead);
                    }
                    return output.toString();
                } catch (Exception e) {
                }
                return null;
            }
        }

    Now I've got the applet on the page, but every time I call the function it returns nothing. Here's my HTML for including the applet.

    HTML

        <applet id="URLFetcher" name="URLFetcher" code="URLFetcherabc.class" archive="URLFetcher.jar" height="200" width="200" mayscript=""></applet>

    JavaScript

        function fetchurl(urltofetch) {
            var URLFetcher = document.getElementById("URLFetcher");
            var result = URLFetcher.getURL(urltofetch); // Result = URL content
            return result;
        }

    The function always returns null; in Java the function does work when passed a variable via other means (a parameter etc.). I've tried running other functions through JavaScript and the Java applet does respond. I'm new to Java applets and communicating with them via JavaScript, so I'm probably making either a small mistake somewhere or it's completely wrong. Any ideas? Thanks
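
    Two hedged guesses rather than a confirmed diagnosis: first, the empty catch block in getURL swallows whatever actually went wrong; second, an unsigned applet may only open connections back to the host it was loaded from, so fetching a YouTube page throws a SecurityException that the empty catch silently turns into null. On top of that, when an applet method is called from JavaScript it runs with a restricted security context, so even a signed applet generally has to wrap its network access in AccessController.doPrivileged. A sketch of getURL rewritten along those lines, intended as a drop-in for the method in URLFetcherabc above:

        // These imports go at the top of URLFetcherabc.java:
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.security.AccessController;
        import java.security.PrivilegedAction;

        // Hedged sketch: same fetch as above, but the exception is logged instead of
        // swallowed, and the network call runs inside doPrivileged so that a signed
        // applet keeps its own permissions even when invoked from JavaScript.
        public String getURL(final String url, final String httpMethod) {
            return AccessController.doPrivileged(new PrivilegedAction<String>() {
                public String run() {
                    try {
                        HttpURLConnection conn =
                                (HttpURLConnection) new URL(url).openConnection();
                        conn.setRequestMethod(httpMethod);
                        StringBuilder body = new StringBuilder();
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(conn.getInputStream()));
                        for (String line; (line = in.readLine()) != null; ) {
                            body.append(line).append('\n');
                        }
                        in.close();
                        return body.toString();
                    } catch (Exception e) {
                        e.printStackTrace(); // shows SecurityException etc. in the Java console
                        return null;
                    }
                }
            });
        }

    With the stack trace visible in the Java console it should at least be clear whether this is a permissions problem or something else entirely.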

    Read the article

  • GKSession sendDataToAllPeers getting invalid parameter

    - by cb4
    This is my first post and I wanted to start it by thanking the many stackoverflow contributors who will never know how much they helped me out these past several days as I worked to complete my first iOS app. I am indebted to them and to this site for giving them a place to post. To 'pay off' some of that debt, I hope this will help others as I have been helped... So, my app uses the GKPeerPickerController to make a connection to a 2nd device. Once connected, the devices can send text messages to each other. The receiving peer has the message displayed in a UIAlertView. Everything was working fine. Then I decided to experiment with locations and added code to get the current location. I convert it into latitude & longitude in degrees, minutes, and seconds and put them into one NSString. I added a button to my storyboard called 'Send Location' which, when tapped, sends the location to the connected peer. This is where I ran into the problem. Both the send text and send location methods call the sendPacket method with a NSString. sendPacket converts the string to NSData and calls sendDataToAllPeers. When I learned how to capture the error, it was "Invalid parameter for - sendDataToAllPeers:withDataMode:error:". [.....pause.....] Well, this was going to be a question but in writing all this to explain the problem, the answer just dawned on me. Did a few tests and verified it now works. The issue was not in sendDataToAllPeers, it was in the conversion of the NSString (strToSend) to NSData: packet = [strToSend dataUsingEncoding:NSASCIIStringEncoding]; Specifically, it was the degree sign character (little circle, ASCII 176). NSASCIIStringEncoding only includes ASCII characters up to 127 so don't use any above that. I'm sure there was a quicker way to find the problem, but I don't know Objective-C or Xcode's debugging facility well enough yet. Whew! Several hours to discover that little tidbit. I did learn a lot, though, and that's always a good thing!
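
    To make that tidbit concrete, here is the same failure mode sketched in Java rather than the poster's Objective-C (the coordinate string is just an example): US-ASCII covers only code points 0-127, so the degree sign (code point 176) simply cannot be encoded.

        import java.nio.CharBuffer;
        import java.nio.charset.CharacterCodingException;
        import java.nio.charset.CharsetEncoder;
        import java.nio.charset.StandardCharsets;

        public class EncodingPitfall {
            public static void main(String[] args) {
                // Illustrative latitude/longitude string containing the degree sign.
                String location = "48° 51' 24\" N, 2° 21' 03\" E";

                CharsetEncoder ascii = StandardCharsets.US_ASCII.newEncoder();
                System.out.println("ASCII can encode it? " + ascii.canEncode(location)); // false

                try {
                    // A strict ASCII encode fails outright on the degree sign.
                    ascii.encode(CharBuffer.wrap(location));
                } catch (CharacterCodingException e) {
                    System.out.println("ASCII encode failed: " + e);
                }

                // A Unicode encoding round-trips the string without loss.
                byte[] utf8 = location.getBytes(StandardCharsets.UTF_8);
                System.out.println(utf8.length + " bytes via UTF-8");
            }
        }

    On the Cocoa side the effect is blunter still: dataUsingEncoding: (without allowLossyConversion:) returns nil when the string cannot be converted losslessly, so the packet handed to sendDataToAllPeers was presumably nil - hence the "invalid parameter" - and using NSUTF8StringEncoding, or stripping the degree sign, avoids it.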

    Read the article

  • How does the CLR (.NET) internally allocate and pass around custom value types (structs)?

    - by stakx
    Question: Do all CLR value types, including user-defined structs, live on the evaluation stack exclusively, meaning that they will never need to be reclaimed by the garbage-collector, or are there cases where they are garbage-collected? Background: I have previously asked a question on SO about the impact that a fluent interface has on the runtime performance of a .NET application. I was particuarly worried that creating a large number of very short-lived temporary objects would negatively affect runtime performance through more frequent garbage-collection. Now it has occured to me that if I declared those temporary objects' types as struct (ie. as user-defined value types) instead of class, the garbage collector might not be involved at all if it turns out that all value types live exclusively on the evaluation stack. What I've found out so far: I did a brief experiment to see what the differences are in the CIL generated for user-defined value types and reference types. This is my C# code: struct SomeValueType { public int X; } class SomeReferenceType { public int X; } . . static void TryValueType(SomeValueType vt) { ... } static void TryReferenceType(SomeReferenceType rt) { ... } . . var vt = new SomeValueType { X = 1 }; var rt = new SomeReferenceType { X = 2 }; TryValueType(vt); TryReferenceType(rt); And this is the CIL generated for the last four lines of code: .locals init ( [0] valuetype SomeValueType vt, [1] class SomeReferenceType rt, [2] valuetype SomeValueType <>g__initLocal0, // [3] class SomeReferenceType <>g__initLocal1, // why are these generated? [4] valuetype SomeValueType CS$0$0000 // ) L_0000: ldloca.s CS$0$0000 L_0002: initobj SomeValueType // no newobj required, instance already allocated L_0008: ldloc.s CS$0$0000 L_000a: stloc.2 L_000b: ldloca.s <>g__initLocal0 L_000d: ldc.i4.1 L_000e: stfld int32 SomeValueType::X L_0013: ldloc.2 L_0014: stloc.0 L_0015: newobj instance void SomeReferenceType::.ctor() L_001a: stloc.3 L_001b: ldloc.3 L_001c: ldc.i4.2 L_001d: stfld int32 SomeReferenceType::X L_0022: ldloc.3 L_0023: stloc.1 L_0024: ldloc.0 L_0025: call void Program::TryValueType(valuetype SomeValueType) L_002a: ldloc.1 L_002b: call void Program::TryReferenceType(class SomeReferenceType) What I cannot figure out from this code is this: Where are all those local variables mentioned in the .locals block allocated? How are they allocated? How are they freed? Why are so many anonymous local variables needed and copied to-and-fro only to initialize my two local variables rt and vt?

    Read the article

  • Play music on iPhone through computer

    - by Kyle Cronin
    Now that I've had my iPhone for a few months, I'm trying an experiment to see if I can't replace the laptop I carry around with my iPhone + internet connected computer. To this end, I've been trying to find a program that will let me play the music on my iPhone through the hardware and software on the host computer. If I recall correctly this was possible a few years ago with the iPod - Linux software like Rhythmbox and Banshee was able to read the music off an iPod and play it through the speakers. I even thought I recalled iTunes itself being capable of this at one time. Now, however, iTunes greys out/disables the music on my iPhone and I can't find any documented support for the iPhone in any other music program. Is this really no longer possible? Am I limited to using the headphone jack to get music to play? (note: I am using an iPhone 3G with the 3.0 software. I am attempting to play music on computers other than the one I sync with) Several replies mention that I should check "manually manage" to do this. I just tried this on a computer that I don't sync my iPhone to and it asked me to erase and sync, which is obviously something I don't want to do. update: OK, I checked the "Manually manage music and videos" box on a computer that I didn't sync to (now known as "Computer A"), and it told me that I needed to erase & sync to cause the changes to have effect, so I did. At this point I'm guessing that my iPhone thinks that it's syncing with that computer. I copied over a few songs using the autofill feature. At this point, Computer A sees the maybe 10 or so songs I've copied using autofill. I then plug my iPhone into my Macbook ("Computer B") which I've been syncing with. At this point, I'm pretty sure that it still thought that all my synced content was still on my iPhone. The "manually manage music and videos" checkbox isn't checked, so I check it and go through a similar process where iTunes erases the synced content and I copy over a playlist. At this point, there's no trace of the songs that I copied over from Computer A. So I plug my iPhone into Computer A - in the Music section are the handful of songs that I had copied over earlier, greyed out and unplayable. To make sure that this wasn't some sort of caching issue, I plugged my iPhone into my sister's Macbook ("Computer C") and it lists the same few, greyed out songs that I had copied over from Computer A. Plugging into Computer B doesn't reveal these songs at all, only the songs that it copied over (these are playable). A few things: This inconsistent behavior is driving me insane. Why would my iPhone report two versions of its contents to different computers? Is there a way to get a computer to completely forget about an iPhone and just resync everything to get everything into a consistent state? Even if I get the phone into a consistent state, I still can't play the files on my phone anywhere but the computer I sync with, which was my original goal. What am I doing wrong? maybe I should read the fine print before I mess with my iPhone So going over this thread with a fine-toothed comb again yields this lovely tidbit in the Apple docs: Note: Even when manually managing, some content may only be available from one library at time. This includes all content on iPhone and video content on iPods. OK, so manually managing is a dead end on the iPhone. Are there any other options? Any unofficial third-party programs or drivers that will work?

    Read the article

  • apache+mod_wsgi configuration for django project(s) on a quad core

    - by Stefano
    I've been experiment quite some time with a "typical" django setting upon nginx+apache2+mod_wsgi+memcached(+postgresql) (reading the doc and some questions on SO and SF, see comments) Since I'm still unsatisfied with the behavior (definitely because of some bad misconfiguration on my part) I would like to know what a good configuration would look like with these hypotesis: Quad-Core Xeon 2.8GHz 8 gigs memory several django projects (anything special related to this?) These are excerpts form my current confs: apache2 SetEnv VHOST null #WSGIPythonOptimize 2 <VirtualHost *:8082> ServerName subdomain.domain.com ServerAlias www.domain.com SetEnv VHOST subdomain.domain AddDefaultCharset UTF-8 ServerSignature Off LogFormat "%{X-Real-IP}i %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" custom ErrorLog /home/project1/var/logs/apache_error.log CustomLog /home/project1/var/logs/apache_access.log custom AllowEncodedSlashes On WSGIDaemonProcess subdomain.domain user=www-data group=www-data threads=25 WSGIScriptAlias / /home/project1/project/wsgi.py WSGIProcessGroup %{ENV:VHOST} </VirtualHost> wsgi.py import os import sys # setting all the right paths.... _realpath = os.path.realpath(os.path.dirname(__file__)) _public_html = os.path.normpath(os.path.join(_realpath, '../')) sys.path.append(_realpath) sys.path.append(os.path.normpath(os.path.join(_realpath, 'apps'))) sys.path.append(os.path.normpath(_public_html)) sys.path.append(os.path.normpath(os.path.join(_public_html, 'libs'))) sys.path.append(os.path.normpath(os.path.join(_public_html, 'django'))) os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' import django.core.handlers.wsgi _application = django.core.handlers.wsgi.WSGIHandler() def application(environ, start_response): """ Launches django passing over some environment (domain name) settings """ application_group = environ['mod_wsgi.application_group'] """ wsgi application group is required. It's also used to generate the HOST.DOMAIN.TLD:PORT parameters to pass over """ assert application_group fields = application_group.replace('|', '').split(':') server_name = fields[0] os.environ['WSGI_APPLICATION_GROUP'] = application_group os.environ['WSGI_SERVER_NAME'] = server_name if len(fields) > 1 : os.environ['WSGI_PORT'] = fields[1] splitted = server_name.rsplit('.', 2) assert splitted >= 2 splited.reverse() if len(splitted) > 0 : os.environ['WSGI_TLD'] = splitted[0] if len(splitted) > 1 : os.environ['WSGI_DOMAIN'] = splitted[1] if len(splitted) > 2 : os.environ['WSGI_HOST'] = splitted[2] return _application(environ, start_response)` folder structure in case it matters (slightly shortened actually) /home/www-data/projectN/var/logs /project (contains manage.py, wsgi.py, settings.py) /project/apps (all the project ups are here) /django /libs Please forgive me in advance if I overlooked something obvious. My main question is about the apache2 wsgi settings. Are those fine? Is 25 threads an /ok/ number with a quad core for one only django project? Is it still ok with several django projects on different virtual hosts? Should I specify 'process'? Any other directive which I should add? Is there anything really bad in the wsgi.py file? I've been reading about potential issues with the standard wsgi.py file, should I switch to that? Or.. should this conf just be running fine, and I should look for issues somewhere else? So, what do I mean by "unsatisfied": well, I often get quite high CPU WAIT; but what is worse, is that relatively often apache2 gets stuck. 
It just does not answer anymore, and has to be restarted. I have setup a monit to take care of that, but it ain't a real solution. I have been wondering if it's an issue with the database access (postgresql) under heavy load, but even if it was, why would the apache2 processes get stuck? Beside these two issues, performance is overall great. I even tried New Relic and got very good average results.

    Read the article

  • SSH-forwarded X11 display from Linux to Mac lost after some time

    - by mklein9
    I have a new and vexing problem with ssh forwarding my X11 connection when logging in from a Mac (10.7.2) to Linux (Ubuntu 8.04). I have no trouble using ssh -X to log in to the remote machine and starting an X11-based application from that shell. What has recently started happening is that additional invocations of X11 applications from that same shell, after a while (on the order of hours), are unable to start because the forwarded display is being blocked (I presume). When attempting to start xterm, for example, I get the usual message about a bad DISPLAY setting, such as: xterm Xt error: Can't open display: localhost:10.0 But the X11 application I started right when I logged in is still running along just fine, using that exact same display (localhost:10.0), just that it was started earlier. I turned on verbose logging in sshd_config and I see this in the /var/log/auth.log file in response to the failed xterm startup attempt: sshd[22104]: channel 8: open failed: administratively prohibited: open failed If I ssh -X to the server again, starting a new shell and getting assigned a new display (localhost:11.0), the same process repeats: the X11 applications started early on run just fine for as long as I keep them open (days), but after a few hours I cannot start any new ones from that shell. Particulars: OpenSSH sshd server running on Ubuntu 8.04, display forwarded to a Mac running Lion (10.7.2) with the default Apple X server. The systems are connected on an Ethernet LAN with a single switch between them. Neither machine is running a firewall. Until recently (a few days ago) this setup worked perfectly so I am mystified as to where to look next. I am by no means an X11 or SSH expert but have good UNIX/Linux experience. Nothing obvious has changed in either client or server configuration although I have tried changing a few options to try to debug this, like setting sshd_config's TCPKeepAlive to no, and setting "host +localhost" (you can tell I've been Googling). When logging in from a Linux 11.10 laptop to the same remote host over the same network and switch, this problem does not occur -- an xterm can be invoked successfully hours later from the same ssh login shell while the same experiment from the Mac fails (tested this morning to be sure), so it would appear to be a Mac-specific issue. With "LogLevel DEBUG3" set on the remote machine (sshd server), and no change made in the client connections by me, /var/log/auth.log shows one slight change in connection status reports overnight, which is the port number used by the one successful ssh session from the Linux machine (I think), connection #7 below: sshd[20173]: debug3: channel 7: status: The following connections are open:\r\n #0 server-session (t4 r0 i0/0 o0/0 fd 14/13 cfd -1)\r\n #3 X11 connection from 127.0.0.1 port 57564 (t4 r1 i0/0 o0/0 fd 16/16 cfd -1)\r\n #4 X11 connection from 127.0.0.1 port 57565 (t4 r2 i0/0 o0/0 fd 17/17 cfd -1)\r\n #5 X11 connection from 127.0.0.1 port 57566 (t4 r3 i0/0 o0/0 fd 18/18 cfd -1)\r\n #6 X11 connection from 127.0.0.1 port 57567 (t4 r4 i0/0 o0/0 fd 19/19 cfd -1)\r\n #7 X11 connection from 127.0.0.1 port 59007 In this report, everything is the same between status reports except the port number used by connection #7 which I believe is the Linux client -- the only one still maintaining a display connection. It continues to increment over time, judging by a sequence of these reports overnight. Thanks for any help, -Mike

    Read the article

  • Single-Signon options for Exchange 2010

    - by freiheit
    We're working on a project to migrate employee email from Unix/open-source (courier IMAP, exim, squirrelmail, etc) to Exchange 2010, and trying to figure out options for single-signon for Outlook Web Access. So far all the options I've found are very ugly and "unsupportable", and may simply not work with Forefront. We already have JA-SIG CAS for token-based single-signon and Shibboleth for SAML. Users are directed to a simple in-house portal (a Perl CGI, really) that they use to sign in to most stuff. We have an HA OpenLDAP cluster that's already synchronized against another AD domain and will be synchronized with the AD domain Exchange will be using. CAS authenticates against LDAP. The portal authenticates against CAS. Shibboleth authenticates with CAS but pulls additional data from LDAP. We're moving in the direction of having web services authenticate against CAS or Shibboleth. (Students are already on SAML/Shibboleth authenticated Google Apps for Education) With Squirrelmail we have a horrible hack linked to from that portal page that authenticates against CAS, gets your original plaintext password (yes, I know, evil), and gives you an HTTP form pre-filled with all the necessary squirrelmail login details with javaScript onLoad stuff to immediately submit the form. Trying to find out exactly what is possible with Exchange/OWA seems to be difficult. "CAS" is both the acronym for our single-signon server and an Exchange component. From what I've been able to tell there's an addon for Exchange that does SAML, but only for federating things like free/busy calendar info, not authenticating users. Plus it costs additional money so there's no way to experiment with it to see if it can be coaxed into doing what we want. Our plans for the Exchange cluster involve Forefront Threat Management Gateway (the new ISA) in the DMZ front-ending the CAS servers. So, the real question: Has anybody managed to make Exchange authenticate with CAS (token-based single-signon) or SAML, or with something I can reasonably likely make authenticate with one of those (such as anything that will accept apache's authentication)? With Forefront? Failing that, anybody have some tips on convincing OWA Forms Based Authentication (FBA) into letting us somehow "pre-login" the user? (log in as them and pass back cookies to the user, or giving the user a pre-filled form that autosubmits like we do with squirrelmail). This is the least-favorite option for a number of reasons, but it would (just barely) satisfy our requirements. From what I hear from the guy implementing Forefront, we may have to set OWA to basic authentication and do forms in Forefront for authentication, so it's possible this isn't even possible. I did find CasOwa, but it only mentions Exchange 2007, looks kinda scary, and as near as I can tell is mostly the same OWA FBA hack I was considering slightly more integrated with the CAS server. It also didn't look like many people had had much success with it. And it may not work with Forefront. There's also "CASifying Outlook Web Access 2", but that one scares me, too, and involves setting up a complex proxy config, which seems more likely to break. And, again, doesn't look like it would work with Forefront. Am I missing something with Exchange SAML (OWA Federated whatchamacallit) where it is possible to configure to do user authentication and not just free/busy access authorization?

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId in the database which shows up in LINQ to SQL like this: This also work in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be when they were not. It’s actually a well documented feature of L2S that each entity in the model requires a Pk but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issue with L2S of course – you have to play by its rules and once you hit one of those rules there’s no way around them – you’re stuck with what it requires which in this case meant changing the database.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ADO.NET  LINQ  

    Read the article

  • Top 10 Reasons SQL Developer is Perfect for Oracle Beginners

    - by thatjeffsmith
    Learning new technologies can be daunting. If you’ve never used a Mac before, you’ll probably be a bit baffled at first. But, you’re probably at least coming from a desktop computing background (Windows), so you common frame of reference. But what if you’re just now learning to use a relational database? Yes, you’ve played with Access a bit, but now your employer or college instructor has charged you with becoming proficient with Oracle database. Here’s 10 reasons why I think Oracle SQL Developer is the perfect vehicle to help get you started. 1. It’s free No need to break into one of these… No start-up costs, no need to wrangle budget dollars from your company. Students don’t have any money after books and lab fees anyway. And most employees don’t like having to ask for ‘special’ software anyway. So avoid all of that and make sure the free stuff doesn’t suit your needs first. Upgrades are available on a regular base, also at no cost, and support is freely available via our public forums. 2. It will run pretty much anywhere Windows – check. OSX (Apple) – check. Unix – check. Linux – check. No need to start up a windows VM to run your Windows-only software in your lab machine. 3. Anyone can install it There’s no installer, no registry to be updated, no admin privs to be obtained. If you can download and extract files to your machine or USB storage device, you can run it. You can be up and running with SQL Developer in under 5 minutes. Here’s a video tutorial to see how to get started. 4. It’s ubiquitous I admit it, I learned a new word yesterday and I wanted an excuse to use it. SQL Developer’s everywhere. It’s had over 2,500,000 downloads in the past year, and is the one of the most downloaded items from OTN. This means if you need help, there’s someone sitting nearby you that can assist, and since they’re in the same tool as you, they’ll be speaking the same language. 5. Simple User Interface Up-up-down-down-Left-right-left-right-A-B-A-B-START will get you 30 lives, but you already knew that, right? You connect, you see your objects, you click on your objects. Or, you can use the worksheet to write your queries and programs in. There’s only one toolbar, and just a few buttons. If you’re like me, video games became less fun when each button had 6 action items mapped to it. I just want the good ole ‘A’, ‘B’, ‘SELECT’, and ‘START’ controls. If you’re new to Oracle, you shouldn’t have the double-workload of learning a new complicated tool as well. 6. It’s not a ‘black box’ Click through your objects, but also get the SQL that drives the GUI As you use the wizards to accomplish tasks for you, you can view the SQL statement being generated on your behalf. Just because you have a GUI, doesn’t mean you’re ceding your responsibility to learn the underlying code that makes the database work. 7. It’s four tools in one It’s not just a query tool. Maybe you need to design a data model first? Or maybe you need to migrate your Sybase ASE database to Oracle for a new project? Or maybe you need to create some reports? SQL Developer does all of that. So once you get comfortable with one part of the tool, the others will be much easier to pick up as your needs change. 8. Great learning resources available Videos, blogs, hands-on learning labs – you name it, we got it. Why wait for someone to train you, when you can train yourself at your own pace? 9. 
You can use it to teach yourself SQL Instead of being faced with the white-screen-of-panic, you can visually build your queries by dragging and dropping tables and views into the Query Builder. Yes, ‘just like Access’ – only better. And as you build your query, toggle to the Worksheet panel and see the SQL statement. Again, SQL Developer is not a black box. If you prefer to learn by trial and error, the worksheet will attempt to suggest the next bit of your SQL statement with it’s completion insight feature. And if you have syntax errors, those will be highlighted – just like your misspelled words in your favorite word processor. 10. It scales to match your experience level You won’t be a n00b forever. In 6-8 months, when you’re ready to tackle something a bit more complicated, like XML DB or Oracle Spatial, the tool is already there waiting on you. No need to go out and find the ‘advanced’ tool. 11. Wait, you said this was a ‘Top 10′ list? Yes. Yes, I did. I’m using this ‘trick’ to get you to continue reading because I’m going to say something you might not want to hear. Are you ready? Tools won’t replace experience, failure, hard work, and training. Just because you have the keys to the car, doesn’t mean you’re ready to head out on the race track. While SQL Developer reduces the barriers to entry, it does not completely remove them. Many experienced folks simply do not like tools. Rather, they don’t like the people that pick up tools without the know-how to properly use them. If you don’t understand what ‘TRUNCATE’ means, don’t try it out. Try picking up a book first. Of course, it’s very nice to have your own sandbox to play in, so you don’t upset the other children. That’s why I really like our Dev Days Database Virtual Box image. It’s your own database to learn and experiment with.

    Read the article

  • Twitter traffic might not be what it seems

    - by Piet
    Are you using bit.ly stats to measure interest in the links you post on twitter? I’ve been hearing for a while about people claiming to get the majority of their traffic originating from twitter these days. Now, I’ve been playing with the twitter ruby gem recently, doing various experiments which I’ll not go into detail here because they could be regarded as spamming… if I’d conduct them on a large scale, that is. It’s scary to see people actually engaging with @replies crafted with some regular expressions and eliza-like trickery on status updates found using the twitter api. I’m wondering how Twitter is going to contain the coming spam-flood. When posting links I used bit.ly as url shortener, since this one seems to be the de-facto standard on twitter. A nice thing about bit.ly is that it shows some basic stats about the redirects it performs for your shortened links. To my surprise, most links posted almost immediately resulted in several visitors. Now, seeing that I was posting the links together with some information concerning what the link is about, I concluded that the people who were actually clicking the links should be very targeted visitors. This felt a bit like free adwords, and I suddenly started to understand why everyone was raving about getting traffic from twitter. How wrong I was! (and I think several 1000 online marketers with me) On the destination site I used a traffic logging solution that works by including a little javascript snippet in your pages. It seemed that somehow all visitors disappeared after the bit.ly redirect and before getting to the site, because I was hardly seeing any visitors there. So I started investigating what was happening: by looking at the logfiles of the destination site, and by making my own ’shortened’ urls by doing redirects using a very short domain name I own. This way, I could check the apache access_log before the redirects. Most user agents turned out to be bots without a doubt. Here’s an excerpt of user-agents awk’ed from apache’s access_log for a time period of about one hour, right after posting some links: AideRSS 2.0 (postrank.com) Java/1.6.0_13 Java/1.6.0_14 libwww-perl/5.816 MLBot (www.metadatalabs.com/mlbot) Mozilla/4.0 (compatible;MSIE 5.01; Windows -NT 5.0 - real-url.org) Mozilla/5.0 (compatible; Twitturls; +http://twitturls.com) Mozilla/5.0 (compatible; Viralheat Bot/1.0; +http://www.viralheat.com/) Mozilla/5.0 (Danger hiptop 4.6; U; rv:1.7.12) Gecko/20050920 Mozilla/5.0 (X11; U; Linux i686; en-us; rv:1.9.0.2) Gecko/2008092313 Ubuntu/9.04 (jaunty) Firefox/3.5 OpenCalaisSemanticProxy PycURL/7.18.2 PycURL/7.19.3 Python-urllib/1.17 Twingly Recon twitmatic Twitturly / v0.6 Wget/1.10.2 (Red Hat modified) Wget/1.11.1 (Red Hat modified) Of the few user-agents that seem ‘real’ at first, half are originating from an ip-address used by Amazon EC2. And I doubt people are setting op proxies on there. Oh yeah, Googlebot (the real deal, from a legit google owned address) is sucking up posted links like fresh oysters. I guess google is trying to make sure in advance to never be beaten by twitter in the ‘realtime search’ department. Actually, I think it’d be almost stupid NOT to post any new pages/posts/websites on Twitter, it must be one of the fastest ways to get a Googlebot visit. 
Same experiment with a real, established twitter account Now, because I was posting the url’s either as ’status’ messages or directed @people, on a test-account with hardly any (human) followers, I checked again using the twitter accounts from a commercial site I’m involved with. These accounts all have between 500 and 1000 targeted (I think) followers. I checked the destination access_logs and also added ‘my’ redirect after the bit.ly redirect: same results, although seemingly a bit higher real visitor/bot ratio. Btw: one of these account was ‘punished’ with a 1 week lock recently because the same (1 one!) status update was sent that was sent right before using another account. They got an email explaining the lock because the account didn’t act according to their TOS. I can’t find anything in their TOS about it, can you? I don’t think Twitter is on the right track punishing a legit account, knowing the trickery I had been doing with it’s api went totally unpunished. I might be wrong though, I often am. On the other hand: this commercial site reported targeted traffic and actual signups from visitors coming from Twitter. The ones that are really real visitors are also very targeted. I’m just not sure if the amount of work involved could hold up against an adwords campaign. Reposting the same link over and over again helps On thing I noticed: It helps to keep on reposting the same links with regular intervals. I guess most people only look at their first page when checking out recent posts of the ones they’re following, or don’t look too far back when performing a search. Now, this probably isn’t according to the twitter TOS. Actually, it might be spamming but no-one is obligated to follow anyone else of course. This way, I was getting more real visitors and less bots. To my surprise (when my programmer’s hat is on) there were still repeated visits from the same bots coming from the same ip-addresses. Did they expect to find something else when visiting for a 2nd or 3rd time? (actually,this gave me an idea: you can’t change a link once it’s posted, but you can change where it redirects to) Most bots were smart enough not to follow the same link again though. Are you successful in getting real visitors from Twitter? Are you only relying on bit.ly to provide traffic stats?

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to use combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies stores locations to embed a map in a report. Preparing the spatial data First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post. Step 1: Using the Map Wizard to Create a Map of France You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. 
Then I opened the Toolbox window and dragged the Map item to the report body which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connecton string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to rexize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area (which is called the viewport). You can eliminate the color scale and distance scale boxes that appear in the map area later. Select the Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map. Step 2: Add a Map Layer to an Existing Map The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps: If the Map Layers windows is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click on the New Layer Wizard button in the Map layers window. And then we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next. 
On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • SSIS Lookup component tuning tips

    - by jamiet
Yesterday evening I attended a London meeting of the UK SQL Server User Group at Microsoft's offices in London Victoria. As usual it was both a fun and informative evening and in particular there seemed to be a few questions arising about tuning the SSIS Lookup component; I rattled off some comments and figured it would be prudent to drop some of them into a dedicated blog post, hence the one you are reading right now.
Scene setting
A popular pattern in SSIS is to use a Lookup component to determine whether a record in the pipeline already exists in the intended destination table or not and I cover this pattern in my 2006 blog post Checking if a row exists and if it does, has it changed? (note to self: must rewrite that blog post for SSIS2008). Fundamentally the SSIS lookup component (when using FullCache option) sucks some data out of a database and holds it in memory so that it can be compared to data in the pipeline. One of the big benefits of using SSIS dataflows is that they process data one buffer at a time; that means that not all of the data from your source exists in the dataflow at the same time and is why a SSIS dataflow can process data volumes that far exceed the available memory. However, that only applies to data in the pipeline; for reasons that are hopefully obvious ALL of the data in the lookup set must exist in the memory cache for the duration of the dataflow's execution which means that any memory used by the lookup cache will not be available to be used as a pipeline buffer. Moreover, there's an obvious correlation between the amount of data in the lookup cache and the time it takes to charge that cache; the more data you have then the longer it will take to charge and the longer you have to wait until the dataflow actually starts to do anything. For these reasons your goal is simple: ensure that the lookup cache contains as little data as possible.
General tips
Here is a simple tick list you can follow in order to tune your lookups:
Use a SQL statement to charge your cache, don't just pick a table from the dropdown list made available to you. (Read why in SELECT *... or select from a dropdown in an OLE DB Source component?)
Only pick the columns that you need, ignore everything else.
Make the database columns that your cache is populated from as narrow as possible. If a column is defined as VARCHAR(20) then SSIS will allocate 20 bytes for every value in that column – that is a big waste if the actual values are significantly less than 20 characters in length.
Do you need DT_WSTR typed columns or will DT_STR suffice? DT_WSTR uses twice the amount of space to hold values that can be stored using a DT_STR so if you can use DT_STR, consider doing so. Same principle goes for the numerical datatypes DT_I2/DT_I4/DT_I8.
Only populate the cache with data that you KNOW you will need. In other words, think about your WHERE clause!
Thinking outside the box
It is tempting to build a large monolithic dataflow that does many things, one of which is a Lookup. Often though you can make better use of your available resources by, well, mixing things up a little and here are a few ideas to get your creative juices flowing:
There is no rule that says everything has to happen in a single dataflow. If you have some particularly resource intensive lookups then consider putting that lookup into a dataflow all of its own and using raw files to pass the pipeline data in and out of that dataflow.
Know your data. 
If you think, for example, that the majority of your incoming rows will match with only a small subset of your lookup data then consider chaining multiple lookup components together; the first would use a FullCache containing that data subset and the remaining data that doesn't find a match could be passed to a second lookup that perhaps uses a NoCache lookup, thus negating the need to pull all of that least-used lookup data into memory.
Do you need to process all of your incoming data all at once? If you can process different partitions of your data separately then you can partition your lookup cache as well. For example, if you are using a lookup to convert a location into a [LocationId] then why not process your data one region at a time? This will mean your lookup cache only has to contain data for the location that you are currently processing, and with the ability of the Lookup in SSIS2008 and beyond to charge the cache using a dynamically built SQL statement you'll be able to achieve it using the same dataflow and simply loop over it using a ForEach loop (see the sketch just after this post).
Taking the previous data partitioning idea further … a dataflow can contain more than one data path so why not split your data using a conditional split component and, again, charge your lookup caches with only the data that they need for that partition.
Lookups have two uses: to (1) find a matching row from the lookup set and (2) put attributes from that matching row into the pipeline. Ask yourself, do you need to do these two things at the same time? After all, once you have the key column(s) from your lookup set then you can use that key to get the rest of the attributes further downstream, perhaps even in another dataflow.
Are you using the same lookup data set multiple times? If so, consider the file caching option in SSIS 2008 and beyond.
Above all, experiment and be creative with different combinations. You may be surprised at what works.
Final thoughts
If you want to know more about how the Lookup component differs in SSIS2008 from SSIS2005 then I have a dedicated blog post about that at Lookup component gets a makeover. I am on a mini-crusade at the moment to get a BULK MERGE feature into the database engine, the thinking being that if the database engine can quickly merge massive amounts of data in a similar manner to how it can insert massive amounts using BULK INSERT then that's a lot of work that wouldn't have to be done in the SSIS pipeline. If you think that is a good idea then go and vote for BULK MERGE on Connect. If you have any other tips to share then please stick them in the comments. Hope this helps! @Jamiet
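To make the data-partitioning idea above concrete, here is a minimal sketch of the kind of code you might put in an SSIS Script Task that runs inside the ForEach loop, building a narrow, region-specific lookup query and handing it to the Lookup component through a package variable (an expression on the Lookup's SqlCommand property would then read that variable). The variable names User::CurrentRegion and User::LookupQuery and the dbo.Location table are illustrative assumptions of mine, not something from the original post.

```csharp
// Body of an SSIS 2008+ Script Task (ScriptMain.Main), configured with
// User::CurrentRegion as a read-only variable and User::LookupQuery as read/write.
public void Main()
{
    // The region the ForEach loop is currently processing (hypothetical variable).
    string region = Dts.Variables["User::CurrentRegion"].Value.ToString();

    // Charge the lookup cache with only the columns and rows this partition needs;
    // double up any embedded quotes so the generated SQL stays valid.
    string lookupQuery =
        "SELECT LocationId, LocationName " +
        "FROM dbo.Location " +
        "WHERE Region = N'" + region.Replace("'", "''") + "'";

    // An expression on the Lookup's SqlCommand property picks this value up at run time.
    Dts.Variables["User::LookupQuery"].Value = lookupQuery;

    Dts.TaskResult = (int)ScriptResults.Success;
}
```

If you prefer to avoid a Script Task, the same effect can usually be achieved by building the statement with an expression on the variable itself; the point is simply that each iteration charges the cache with one partition's worth of data.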

    Read the article

  • Turn A Flash Drive Into a Portable Web Server

    - by Matthew Guay
    Portable applications are very useful for getting work done on the go, but how about portable servers?  Here’s how you can turn your flash drive into a portable web server. Getting Started To put a full web server on our flash drive, we’re going to use XAMPP Lite.  This lightweight, preconfigured server includes recent versions of Apache, MySQL, and PHP so you can run most websites and webapps directly from it.  You could use the full XAMPP, which includes more features such as a FileZilla FTP server and OpenSSL, but for most purposes, the light version is plenty for a portable server. Download the latest version of XAMPP Lite (link below).  In this tutorial, we used the self-extracting EXE version; you could choose the ZIP file and extract the files yourself, but we found it easier to use the executable. Run the installer, and click Browse choose where to install your server. Select your flash drive, or a folder in it, and click Ok.  Make sure your flash drive has at least 250MB of available storage space.  XAMPP will create an xampplite folder and store all the files in it during the installation.   Click Install, and all of the files will be extracted to your flash drive.  This may take a few moments depending on your flash drive’s speed. When the extraction process is finished, a Command Prompt window will open to finish the installation.  The first prompt will ask if you want to add shortcuts to the start menu and desktop; enter “n” since we don’t want to create start menu links to our portable server. Now enter “y” to configure XAMPP’s directories automatically. Finally, enter “y” to make XAMPP fully portable.  It will set up the servers to run without specific drive letters so your server will run from any computer. XAMPP will finalize your changes; press Enter when everything is completed. Setup will automatically launch the command line version of XAMPP.  On first run, confirm that your time zone is correct. And that’s it!  You can now run XAMPP’s control panel by entering 1, or you can exit and run XAMPP from any other computer with your flash drive. To complete your portable webserver kit, you may want to install Portable Firefox or Iron Browser on your flash drive so you always have your favorite browser ready to use. Running your portable server Using your portable server is very simple.  Open the xampplite folder on your flash drive and launch xampp-control.exe. Click Start beside Apache and MySql to get your webserver running. Please note: Do not check the Svc box, as this will run the server as a Windows service.  To keep XAMPP portable, you do not want it running as a service! Windows Firewall may prompt you that it blocked the server; click Allow access to let your server run. Once they’re running, you can click Admin to open the default XAMPP admin page running from your local webserver.  Or, you can view it by browsing to http://localhost/ or http://127.0.0.1/ in your browser. If everything is working correctly, you should see this page in your browser.  Choose your default language… And then you’ll see the default XAMPP admin page.   Click the Status link on the left sidebar to make sure everything is running correctly. If you click the Admin button for MySql in the XAMPP Control Panel, it will open phpMyAdmin in your default browser.  Alternately, you can open the MySql admin page by entering http://localhost/phpmyadmin/ or http://127.0.0.1/phpmyadmin/ in your favorite browser. Now you can add your own webpages to your webserver.  
Save all of your web files in the \xampplite\htdocs\ folder on your flash drive.
Install WordPress in your portable server
Since XAMPP Lite includes MySql and PHP, you can even run webapps such as WordPress, the popular CMS and blogging platform.  Download WordPress (link below), and extract the files to the \xampplite\htdocs folder on your flash drive. Now all of the WordPress files are stored in \xampplite\htdocs\wordpress on your flash drive. We still need to set up WordPress on our portable server.  Open your MySql admin page http://localhost/phpmyadmin/ to create a new database for WordPress.  Enter a name for your database in the "Create new database" box, and click Create. Click the Privileges tab on the top, and then select "Add a new User".  Enter a username and password for the database, and then click the Go button on the bottom of the page.
Using WordPress
Now, in your browser, enter http://localhost/wordpress/wp-admin/install.php.  Click Create a Configuration File to continue. Make sure you have the database name, username, and password we created previously, and click "Let's Go!" Enter your WordPress database name, username, and password, leave the other two entries as default, and click Submit. You should now have the database all ready to go.  Click "Run the install" to finish installing WordPress. Enter a title, username, and password for your test blog, as well as your email address, and then click "Install WordPress". You now have a portable install of WordPress.  Click "Log In" to access your WordPress admin page. Enter your username and password, and click Log In. Here you can add pages, posts, themes, extensions, and anything else just like you would on a normal WordPress site.  This is a great way to experiment with WordPress without messing up your real website. You can view your portable WordPress site by entering http://localhost/wordpress/ in your address bar.
Closing your server
When you're done running your test server, click the Stop button on each of the services and then click the Exit button in the XAMPP control panel.  If you press the exit button on the top of the window, it will just minimize the control panel to the tray.  Alternately, you can shut down your server by running xampp_stop.exe from your xampplite folder.
Conclusion
XAMPP Lite gives you a great way to run a full webserver directly from your flash drive.  Now, anywhere you go, you can test and tweak your webpages and webapps from any Windows computer.
Links
Download XAMPP Lite
Download WordPress

    Read the article

  • Quick guide to Oracle IRM 11g: Creating your first sealed document

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThe previous articles in this guide have detailed how to install, configure and secure your Oracle IRM 11g service. This article walks you through the process of now creating your first context and securing a document against it. I should mention that it would be worth reviewing the following to ensure your installation is ready for that all important first document. Ensure you have correctly configured the keystore for the IRM wrapper keys. If this is not correctly configured, creating the context below will fail. Make sure the IRM server URL correctly resolves and uses the right protocol (HTTP or HTTPS) ContentsCreate the first contextInstall the Oracle IRM Desktop Seal your first document Create the first contextIn Oracle 11g there is a built in classification and rights system called the "standard rights model" which is based on 10 years of customer use cases and innovation. It is a system which enables IRM to scale massively whilst retaining the ability to balance security and usability and also separate duties by allowing contacts in the business to own classifications. The final article in this guide goes into detail on this inbuilt classification model, but for the purposes of this current article all we need to do is create at least one context to test our system out.With a new IRM server there are a set of predefined context templates and roles which again are setup in a way which reflects the most common use we've learned from our customers. We will use these out of the box configurations as they are to create the first context against which we will seal some content.First login to your Oracle IRM Management Website located at https://irm.company.com/irm_rights/. Currently the system is only configured to use the built in LDAP for users, so use the only account we have at the moment, which by default is weblogic. Once logged in switch to the Contexts tab. Click on the New Context icon () in the menu bar on the left. In the resulting dialog select the Standard context template and enter in a name for the context. Then just hit finish, the weblogic account will automatically be made the manager. You'll now see your brand new context ready for users to be assigned. Now click on the Assign Role icon () in the menu bar and in the resulting dialog search for your only user account, weblogic, and add to the list on the right. Now select a role for this user. Because we need to create a document with this user we must select contributor, as this is the only role which allows for the ability to seal. Finally hit next and then finish. We now have a context with a user that has the rights to create a document. The next step is to configure the IRM Desktop to get these rights from the server. Install the Oracle IRM Desktop Before we can seal a document we need the client software installed. Oracle IRM has a very small, lightweight client called the Oracle IRM Desktop which can be freely downloaded in 27 languages from here. Double click on the installer and click on next... Next again... And finally on install... Very easy. You may get a warning about closing Outlook, Word or another application and most of the time no reboots are required. Once it is installed you will see the IRM Desktop icon running in your tool tray, bottom right of the desktop. Seal your first document Finally the prize is within reach, creating your first sealed document. 
The server is running, we've got a context ready, and a user assigned a role in the context, but there is one simple and obvious hoop left to jump through. To seal a document we need to have the user's rights cached to the local machine. For this to take place, the IRM Desktop needs to know where the Oracle IRM server is on the network so we can synchronize these rights and then be able to seal a document. The usual way for the IRM Desktop to know about the IRM server is that it learns automatically when you open an existing piece of content that someone has sent you... ack. Bit of a chicken-or-the-egg dilemma. The solution is to manually tell the IRM Desktop the location of the IRM server and then force a synchronization of rights. Right click on the Oracle IRM Desktop icon in the system tray and select Options.... Then switch to the Servers tab in the resulting dialog. There are no servers in the list because you've never opened any content. This list is usually populated automatically but we are going to add a server manually, so click on New.... Into the dialog enter the full URL to the IRM server. Note that this time you use the path /irm_desktop/ and not /irm_rights/. You can see an example from the image below. Click on the validate button and you'll be asked to authenticate. Enter in your weblogic username and password and also check the Remember my password check box. Click OK and the IRM Desktop will confirm a successful connection to the server. OK all the dialogs and we are ready to synchronize this user's rights to the desktop. Right click once more on the Oracle IRM Desktop icon in the system tray. Now the Synchronize menu option is available. Select this and the IRM Desktop will now talk to the IRM server, authenticate using your weblogic account and get your rights to the context we created. Because this is the first time this user has communicated with the IRM server, the IRM Desktop presents a privacy policy dialog. This is a chance for the business to ask users to agree to any policy about the use of IRM before opening secured documents. In our guide we've not bothered to set up this URL so just click on the check box and hit Accept. The IRM Desktop will then talk to the server, get your rights and display a success dialog.
Let's protect a document
Now we are ready to seal a piece of content. In my guide I'm going to protect a Microsoft Word document. This means I have to have a copy of Office installed; in this guide I'm using Microsoft Office 2007. You could also seal a PDF document, for which you'll need to download and install Adobe Acrobat Reader. A very simple test could be to seal a GIF/JPG/PNG or a piece of HTML, because this is rendered using Internet Explorer. But as I say, I'm going to protect a Word document. The following example demonstrates choosing a file in Windows Explorer; there are many ways to seal a file and you can watch a few in this video. Open a copy of Windows Explorer and locate the file you wish to seal. Right click on the document and select Seal To -> Context. You are now presented with the Select Context dialog. You'll now have a sealed copy of the document sat in the same location. Double click on this document and it will open, again using the credentials you've already provided. That is it, now you just need to add more users, more documents, more classifications and start exploring the different roles and experiment with different offline periods etc. 
You may wish to set up the server against an existing LDAP or Active Directory environment instead of using the built-in WebLogic LDAP store. You can read how to use your corporate directory here. But before we finish this guide, there is one more article, and arguably the most important article of all. Next I discuss the all-important decision making surrounding the actual implementation of Oracle IRM inside your business. Who has rights to what? How do you map contexts to your existing business practices? It is the next article which actually ensures you deploy a successful IRM solution by looking at the business and understanding how they use your sensitive information and then configuring Oracle IRM to reflect their use.

    Read the article

  • TFS, G.I. Joe and Under-doing

If I were to rank the most consistently irritating parts of my work day, using TFS would come in first by a wide margin. Even repeated network outages this week seem like a pleasant reprieve from this monolithic beast. This is not a reflexive anti-Microsoft feeling; that attitude just wouldn't work for a consultant who does .NET development. It is also not an utter dismissal of TFS as worthless; I've seen people use it effectively on several projects. So why? I'll start with a laundry list of shortcomings. An out of the box UI for work items that is insultingly bad, a source control system that is confoundingly fragile when handling merges, folder renames and long file names, the arcane XML wizardry necessary to customize a template and a build system that adds an extra layer of oddness on top of msbuild. I'm sure my legion of readers will soon point out to me how I can work around all these issues, how this is fixed in TFS 2010 or with this add-in, and how once you have everything set up, you're fine. And they'd be right; any one of these problems could be worked around. If not dirty laundry, what else? I thought about it for a while, and came to the conclusion that TFS is so irritating to me because it represents a vision of software development that I find unappealing. To expand upon this, let's start with some wisdom from those great PSAs at the end of the G.I. Joe cartoons of the 80s: Now you know, and knowing is half the battle. In software development, I'd go further and say knowing is more than half the battle. Understanding the dimensions of the problem you are trying to solve, the needs of the users, the value that your software can provide are more than half the battle. Implementation of this understanding is not easy, but it is not even possible without this knowledge. Assuming we have a fixed amount of time and mental energy for any project, why does this spell trouble for TFS? If you think about what TFS is doing, it's offering you a huge array of options to track the day to day implementation of your project. From tasks, to code churn, to test coverage. All valuable metrics, but only in exchange for valuable time to get it all working. In addition, when you have a shiny toy like TFS, the temptation is to feel obligated to use it. So the push from TFS is to encourage a project manager and team to focus on process and metrics around process. You can get great visibility, and graphs to show your project stakeholders, but none of that is important if you are not implementing the right product. Not just unimportant, these activities can be harmful as they drain your time and sap your creativity away from the rest of the project. To be more concrete, let's suppose your organization has invested the time to create a template for your projects and trained people in how to use it, so there is no longer a big investment of time for each project to get up and running. First, I'd challenge whether that template could be specific enough to be full featured and still applicable for any project. Second, the very existence of this template would be an indication to a project manager that the success of their project was somehow directly related to fitting management of that project into this format. Again, while the capabilities are wonderful, the mirage is there; just get everything into TFS and your project will run smoothly. I'll close the loop on this first topic by proposing a thought experiment. Think of the projects you've worked on. 
How many times have you been chagrined to discover you've implemented the wrong feature, misunderstood how a feature should work or just plain spent too much time on a screen that nobody uses? That sounds like a really worthwhile area to invest time in improving. How about going back to these projects and thinking about how many times you wished you had optimized the state change flow of your tasks or been embarrassed to not have a code churn report linked back to the latest changeset? With thanks to the Real American Heroes, I'll move on to a more current influence, that of the developers at 37signals, and their philosophy towards software development. This philosophy, fully detailed in the books Getting Real and Rework, is a vision of software that under-does the competition. This is software that is deliberately limited in functionality in order to concentrate fully on making sure every feature that is there is awesome and needed. Why is this relevant? Well, in one of those fun-seeming paradoxes in life, constraints can be a spark for creativity. Think Twitter, the small screen of an iPhone, the limitations of HTML for applications, the low memory limits of older or embedded systems. As long as there is some freedom within those constraints, amazing things emerge. For project management, some of the most respected people in the industry recommend using just index cards, pens and tape. They argue that with change the constant in software development, your process should be as limited (yet rigorous) as possible. Looking at TFS, this is not a system designed to under-do anybody. It is a big jumble of components and options, with every feature you could think of. Predictably this means many basic functions are hard to use. For task management, many people just use an Excel spreadsheet linked up to TFS. Not a stirring endorsement of the tooling there. TFS as a whole would be far more appealing to me if there was less of it, but better. I'd cut 50% of the features to make the other half really amaze and inspire me. And that's really the heart of the matter. TFS has great promise and I want to believe it can work better. But ultimately it focuses your attention on a lot of stuff that doesn't really matter and then clamps down your creativity in a mess of forms and dialogs obscuring what does.   --- Relevant Links --- All those great G.I. Joe PSAs are on YouTube, including lots of mashed up versions. A simple Google search will get you on the right track.

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, Welcome to the WebCenter Blog. Can you tell our readers your current role and what career path brought you here? A: I’m proudly serving as a principal analyst at Altimeter Group, a research based advisory firm in Silicon Valley. My career path, well, let’s just say it’s a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As technologist, my approach to marketing was different. I didn’t believe in hype, fluff or buzz words. I believed in translating features into benefits and specifications and capabilities into solutions for real world problems and opportunities. In the mid 90’s I experimented with direct to consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach to do so would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who’d listen. It would later become much bigger than marketing, this would lead to a decade of work, that still continues, in business transformation. Then and now, I find myself always assuming the role of a student. Q: As an industry analyst & technology change evangelist, what are you primarily focused on these days? A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? A: The line between work and life isn’t blurred it’s been overtly crossed and erased. We live in an always on society. The digital lifestyle keeps us connected to one another it keeps us connected all the time. Whether your sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work…working. That’s absurd. It’s a matter of survival. It’s also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day. 
You’re in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn’t get to during the day. And, your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage as you’re willfully pulling yourself out of an unfortunate culture of whenever wherever business dynamics. If you’re unresponsive or unreachable, someone within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But, I don’t pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It’s essentially a competition for relevance and at some point I’ll learn how to earn attention and relevance while redrawing the line between work and life. Q: How can people keep up with what you’re working on? A: The easy answer is that people can keep up with me at briansolis.com. But, I also try to reach people where their attention is focused. Whether it’s Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), Youtube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing. Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt. Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peak of what to expect on Thursday’s webcast? A: We’re inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism would ultimately impact how business is done. It’ll be an exciting and revealing conversation that explores just how much everything is changing. We’ll also review the importance of adapting to emergent trends and how to compete for the future. It’s important to recognize that change is not happening to us, it’s happening because of us. We are part of the revolution and therefore we need to help organizations adapt from the inside out. Watch the Entire Oracle Social Business Thought Leaders Webcast Series On-Demand and Stay Tuned for More to Come in 2013!

    Read the article

  • .NET 4.5 is an in-place replacement for .NET 4.0

    - by Rick Strahl
With the betas for .NET 4.5, Visual Studio 11 and Windows 8 shipping, many people will be installing .NET 4.5 and hacking away on it. There are a number of great enhancements that are fairly transparent, but it's important to understand what .NET 4.5 actually is in terms of the CLR running on your machine. When .NET 4.5 is installed it effectively replaces .NET 4.0 on the machine. .NET 4.0 gets overwritten by a new version of .NET 4.5 which - according to Microsoft - is supposed to be 100% backwards compatible. While 100% backwards compatible sounds great, we all know that 100% is a hard number to hit, and even the aforementioned blog post at the Microsoft site acknowledges this. But there's so much more than backwards compatibility that makes this awkward at best and confusing at worst.
What does 'Replacement' mean?
When you install .NET 4.5 your .NET 4.0 assemblies in the \Windows\.NET Framework\V4.0.30319 folder are overwritten with a new set of assemblies. You end up with overwritten assemblies as well as a bunch of new ones (like the new System.Net.Http assemblies for example). The following screen shot demonstrates system.dll on my test machine running .NET 4.5 (left) and on my production laptop running stock .NET 4.0 (right):  Clearly they are different files with a difference in file sizes (interesting that the 4.5 version is actually smaller). That's not all. If you actually query the runtime version when .NET 4.5 is installed with Environment.Version you still get: 4.0.30319. If you open the properties of the System.dll assembly in .NET 4.5 you'll also see: Notice that the file version is also left at 4.0.xxx. There are differences in build numbers: .NET 4.0 shows 261 and the current .NET 4.5 beta build is 17379. I suppose you can assume a build number greater than 17000 is .NET 4.5, but that's pretty hokey to say the least (see the sketch at the end of this post). There's no easy or obvious way to tell whether you are running on 4.0 or 4.5 – to the application they appear to be the same runtime version. And that is what Microsoft intends here. .NET 4.5 is intended as an in-place upgrade.
Compile to 4.5, run on 4.0 – not quite!
You can compile an application for .NET 4.5 and run it on the 4.0 runtime – that is, until you hit a new feature that doesn't exist on 4.0. At which point the app bombs at runtime. Say you write some code that is mostly .NET 4.0, but only has a few of the new features of .NET 4.5 like async/await buried deep in the bowels of the application where it only fires occasionally. .NET will happily start your application and run all the 4.0 code fine, until it hits that 4.5 code – and then crash unceremoniously at runtime. Oh joy! You can run .NET 4.0 applications on .NET 4.5 of course and that should work without much fanfare.
Different than .NET 3.0/3.5
Note that this in-place replacement is very different from the side by side installs of .NET 2.0 and 3.0/3.5 which all ran on the 2.0 version of the CLR. The two 3.x versions were basically library enhancements on top of the core .NET 2.0 runtime. Both versions ran under the .NET 2.0 runtime which wasn't changed (other than for security patches and bug fixes) for the whole 3.x cycle. The 4.5 update instead completely replaces the .NET 4.0 runtime and leaves the actual version number set at v4.0.30319. When you build a new project with Visual Studio 2011, you can still target .NET 4.0 or you can target .NET 4.5. But you are in effect referencing the same set of assemblies for both regardless of which version you use. 
What's different is the compiler used to compile and link your code, so compiling with .NET 4.0 gives you just the subset of the functionality that is available in .NET 4.0, but when you use the 4.5 compiler you get the full functionality of what's actually available in the assemblies and extra libraries. It doesn't look like you will be able to use Visual Studio 2010 to develop .NET 4.5 applications.
Good news – bad news
Microsoft is trying hard to experiment with every possible permutation of releasing new versions of the .NET framework, apparently. No two updates have been the same. Clearly updating to a full new version of .NET (ie. .NET 2.0, 4.0 and at some point 5.0 runtimes) has its own set of challenges, but doing an in-place update of the runtime and then not even providing a good way to tell which version is installed is pretty whacky even by Microsoft's standards. Especially given that .NET 4.5 includes a fairly significant update with all the async functionality baked into the runtime. Most of the IO APIs have been updated to support task based async operation which significantly affects many existing APIs. To make things worse .NET 4.5 will be the initial version of .NET that ships with Windows 8 so it will be with us for a long time to come unless Microsoft finally decides to push .NET versions onto Windows machines as part of system upgrades (which currently doesn't happen). This is the same story we had when Vista launched with .NET 3.0, which was a minor version that quickly was replaced by 3.5, which was more long lived and practical. People had enough problems dealing with the confusing versioning of the 3.x versions which ran on .NET 2.0. I can't count the number of support calls and questions I've fielded because people couldn't find a .NET 3.5 entry in the IIS version dialog. The same is likely to happen with .NET 4.5. It's all well and good when we know that .NET 4.5 is an in-place replacement, but administrators and IT folks not intimately familiar with .NET are unlikely to understand this nuance and will end up thoroughly confused about which version is installed. It's hard for me to see any upside to an in-place update and I haven't really seen a good explanation of why this approach was decided on. Sure, if the version stays the same, existing assembly bindings don't break so applications can stay running through an update. I suppose this is useful for some component vendors and strongly signed assemblies in corporate environments. But seriously, if you are going to throw .NET 4.5 into the mix, who won't be recompiling all code and thoroughly testing that code to work on .NET 4.5? A recompile requirement doesn't seem that serious in light of a major version upgrade.
Resources
http://blogs.msdn.com/b/dotnet/archive/2011/09/26/compatibility-of-net-framework-4-5.aspx
http://www.devproconnections.com/article/net-framework/net-framework-45-versioning-faces-problems-141160
© Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET
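Rick's build-number observation above can be turned into a quick diagnostic, with the caveat he already gives: the 17000 cutoff is a hokey heuristic for the beta builds, not a supported detection API, so treat the threshold in this sketch as an assumption rather than something guaranteed by the runtime.

```csharp
using System;

class RuntimeVersionProbe
{
    static void Main()
    {
        // Both .NET 4.0 and the in-place .NET 4.5 report 4.0.30319.xxxxx here;
        // only the final (revision) component gives any hint of which is installed.
        Version clr = Environment.Version;
        Console.WriteLine("CLR version reported: {0}", clr);

        // Heuristic from the post: 4.0-era builds report revisions in the hundreds
        // (e.g. 261), while the 4.5 beta reports 17000+ (e.g. 17379).
        bool looksLike45 = clr.Major == 4 && clr.Minor == 0 && clr.Revision >= 17000;
        Console.WriteLine(looksLike45
            ? "This looks like the in-place .NET 4.5 replacement."
            : "This looks like the original .NET 4.0 runtime.");
    }
}
```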

    Read the article

  • Building Great-Looking, Usable Apps: A two-day workshop applying Oracle’s best UX practices in ADF

    - by mvaughan
    By Misha Vaughan, Oracle Applications User ExperienceI have been with Oracle for more than 12 years. It is a company that has granted me extraordinary creative freedom to help deliver compelling experiences for customers.I am beyond proud to talk about one of the experiences we just took for a test drive. Recently, we delivered a first-of-its-kind, three-team collaboration, train-the-trainer event in Reading, U.K., on building great-looking, usable apps based on Oracle Fusion Applications -- using the ADF tool kit. A new kind of workshopKevin Li, Platform Product Director, asked the Oracle Applications User Experience VP, Jeremy Ashley, if the team had anything to help partners and customers build applications that looked like Fusion. He was receiving this request from European partners and customers.Some quick conversations ensued, and the idea for the workshop was born: We would conduct an experiment.  We would work with feedback from the key Platform Technology Solutions (PTS) trainers under Andre Pavanello, Director, Platform Technology Solutions, in Europe, Middle East, and Africa. We would partner with the ADF team lead by Grant Ronald, Director of Product Management, title> and leverage the Applications UX expertise in Ashley’s team.The goal: Create a pilot workshop that in two days would explain to an ADF developer how to leverage the next-generation user experience best-practices developed for Fusion Apps. Why? Customers who need integrations with Oracle Fusion Applications, who are looking for custom applications that need to co-exist with Fusion, or who quite simply want a next-generation design for a custom app, need their solutions to reflect the next-generation research and design.Building an event for an ADF developerThe biggest hurdle was figuring out where to start.  How far into user experience country do you take an ADF developer? How far into ADF do you need to go if you are a UX professional?After some time in the UX kitchen, the workshop recipe looked like this: Mix equal parts: Fusion user experience design principles and functional design patterns The art and science behind UX How to wireframe designs that you can build in Fusion How to translate those designs into an ADF application Ultan O’Broin, Director of Global User Experience, explaining the trouble ticket wireframe design exerciseLynn Munsinger, Senior Group Product Manager, explaining the follow-on trouble ticket ADF coding exercise For spice, add:•    Debra Lilley, Fujitsu and ACE director, showcasing some of the latest ADF design work in the new face of Fusion Applications •    Partner show-and-tell of example apps they have built with FMW and ADF that are dynamic, beautiful, and interactive.Debra Lilley, Oracle ACE Director and Fujitsu Fusion Champion on the new face of Fusion built with ADF and Fusion extensibility with composers as a window into “the possible”?The taste testThis first go-round of the workshop was aimed squarely at ADF developers and partners.  We were privileged to have participation and feedback from:•    Sten Vesterli, Scott/Tiger S. A., Denmark•    John Sim, Fishbowl Solutions, UK•    Josef Huber, Primus Delphi Group, Munich•    Thaddaus Weindl, Primus Delphi, Group , Munich•    Praveen Pillalamarri, EiS Technologies, Bangalore•    Balaji Kamepalli, EiS Technologies, Bangalore•    Plinio Arbizu, Services & Processes Solutions S. 
A., Mexico•    Yannick Ongena, infoMENTUM, UK•    Jakub Ciszek, infoMENTUM, UK•    Mauro Flores, infoMENTUM, UK•    Matteo Formica, infoMENTUM, UKRichard Bingham, Oracle, Mauro Flores and Matteo Formica, infoMENTUMWhy is this so exciting?  Oracle has invested heavily in the research and development of the Oracle Fusion Applications user experience. This investment has been and continues to be applied across the product lines. Now, we finally get to teach customers and partners how to take advantage of this investment for custom solutions.This event was a pilot to test-drive the content, as well as a train-the-trainer event that our EMEA colleagues will be using with partners who want to build with Fusion Apps design patterns.What did attendees think?"I liked most the science stuff, like eye-tracking, design patterns and best-practice (color, contrast),” Josef Huber said. “It was a very good introduction to UI design, and most developers and project managers are very bad in that.  So this course would be good for all developers and even project managers." Team Anonymous: John Sim, Fishbowl Solutions, Flavius Sana, Oracle, Josef Huber, infoMENTUM, Mireille Duroussaud, Oracle. Winners of the wireframing design exercise.  Sten Vesterli, of Scott/Tiger, said he attended to learn techniques he could use in his own projects. He wants to ensure that his applications better meet the needs of his users, and he said sessions during the workshop on user interface design and wireframing were most useful to him.  “Go to this event to learn the art and science of good user interfaces from people who really know how to do it,” he said.Sten Vesterli, Scott/Tiger, Angelo Santagata, Oracle Plinio Arbizu said the workshop fulfilled his goals, thanks to the recommendations given in how to design user interfaces to facilitate the adoption of applications among the final users. “The workshop combined these recommendations with an exercise that improved the technical comprehension, permitting the usage of JDeveloper to set forth our solutions,” he said. He added: “The first session that I really enjoyed was the five Fusion design principles. It was incredible to discover how these simple principles were included in an inherit manner in Fusion Applications, and I had been using many of them applying only ADF components.  Another topic that I enjoyed a lot was the eight recommendations about the visual design of UIs. The issues that were raised in that lesson are unknown to the developers and of great value to achieve an attractive presentation layer to the end users.  Participate in this workshop, and include these usability features in your projects and in this manner not only to facilitate and improve the user productivity, but also to distinguish you as a professional who takes advantage fully of the functionalities offered by Oracle technology. Praveen Pillalamarri came to the workshop to learn about the difficulties faced in UI and UX development, and how this can be resolved with the help of ADF.  He also appreciated the opportunity to talk with other individuals who came to the workshop. Pillalmarri said, “The way we looked at things in terms of work and projects were sharpened.  UI and UX design knowledge shared by you was quite interesting, especially the minute things which we ignored in the UI or UX design.” Plinio Arbizu, Services & Processes Solutions S. 
A., Richard Bingham, Oracle, Balaji Kamepalli, & Praveen Pillalamarri, EiS Technologies
Ready to spread the word
In EMEA, Oracle customers and partners have access to three world-class trainers via Platform Technology Solutions: Mireille Duroussaud, Flavius Sana, and Angelo Santagata. Contact Andre Pavanello if you'd like to experience this workshop firsthand, or if you have customers or partners who would benefit from the training. We are looking to bring the event to the U.S. in spring 2013. If you have interest in this kind of workshop, leave a comment below. For those who want to follow the action, join the ADF Enterprise Methodology Group run by Oracle's Chris Muir. Ask questions and continue with the conversation in this forum, or check blogs.oracle.com/usableapps for topics emerging from the workshop.

    Read the article

  • CodePlex Daily Summary for Thursday, April 08, 2010

    CodePlex Daily Summary for Thursday, April 08, 2010New ProjectsBackUpAnyWhere: BackUpAnyWhereCustomFormbyEndUser: 在项目开发中,经常遇到不同的用户对同一报表有不同要求的情况,有时甚至用户需要从头生成一个报表,在以前可能使用第三方的开发工具来实现。在SQL Server2005中,通过使用Reporting Services可以使最终用户不通过编码,只要了解数据结构就能自行编辑报表。本例使用Adventur...DbExecutor - linq based database executor: IEnumerable based database reader. (linq like primitive sql executor)DeepZoomRenderingPack: A collection of libraries and plug-ins architecture that turns various files (like PDF, PS, etc.) into a "Visual" representation that the DeepZoom ...DotNetNuke Russian Language packs: DNNRussianLP - DotNetNuke Russian Language pack. F# Refactor: Deisgned to bring Code Refactoring capabilities to the F# Language in Visual Studio 2010. Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Invocando Site HTTP e WebService dinamicamente com HTTPWebRequest Passando o SoapAction e Envelope XML Escrito em C# www.biztalkbrasil.c...Jitbit WYSWYG BBCode Editor: "Jitbit WYSIWYG-BBCode" is a browser-based JavaScript-powered WYSIWYG BBCode editorMRDS Services for Phidgets: MRDS (Microsoft Robotics Developer Studio) Services for Phidgets provides additional services for Phidgets sensors and controllers that are not inc...MSBuild Addin: This tool is a simple addin for VisualStudio 2008 used in association with Microsoft MSBuild. It allows you to run MSBuild directly inside Visual S...NISHIL-BizTalk Custom Eventlog Functiod: While testing our maps at times when it fails we cant trace it because we don’t know what the output of the functiods are. Normally in a single ma...Northest GNSG: Supinfo B3C Paris Northest University project. Galego, Neveu, Simon, Geissmann.Oily: Composite application project for oil parameters. It's developed in C#Outlook.Utility: The MSDN article Outlook Customization for Integrating with Enterprise Applications at http://msdn.microsoft.com/en-us/library/Aa479345 has quite a...Particle Plot Pivot: Scan select particle physics experiment web sites for plots and generate a Pivot display for easy browsing.project tca: project tca - translating chat application. Satisfyr: A new way of performing assertions on tests so that they remain agnostic to the underlying test framework, and leverage .NET built-in lambda syntax.sejce2008: jce se course wiki and projects linksSGB Controls: SGB Controls is a set of standard .net controls that include a number of enhancements to make life easier for the developer. These controls incl...Syringe: Syringe is a lightweight service container and dependency injection library designed for use with ASP.NET MVC2. Supported features: Dependency inj...topicbox: topicboxUr-Index: Ur-Index makes it a lot easier to create onomastic indexes for books in pdf format.VietGeeks ZohoDocApis: Implement .NET Zoho Document Apis library to help developer can intergrate Zoho Docs easy with their websitesWebometrics Dashboard: Webometrics Dashboardwebpress: It is a WebBased CMS and Blog platform.WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar makes it easy for WPF developers to use pen input in TabletPC or UMPC applications. 
The WPF InkCanvas control has drawing, e...WS-TMS: WS GISG HTT TMSNew ReleasesBatterySaver: Version 1.0: Fixed battery increase/decrease events not firing Fixed memory corruption error Added working set trimming (used very sparingly) Fixed poorly rende...Chargify.NET: Chargify.NET 0.65: Added in Transactions, Subscription Re-activation, and finally XML documentation (which has been missing in the previous releases).DbExecutor - linq based database executor: DbExecutor ver.1.0.0.1: renameDotNetNuke Russian Language packs: Russian Language Pack for DotNetNuke 04.09.02: Russian Language Pack for DotNetNuke 04.09.02Encrypted Notes: Encrypted Notes 1.6.3: This is the latest version of Encrypted Notes (1.6.3), with general improvements. It has an installer that will create a directory 'CPascoe' in My ...Invocando WebService e Site HTTP dinamicamente com HTTPWebRequest C#: Código projeto CallSiteHTTP: Código escrito em C#.NET 2.0 - VS2005 Contem: Solution completa(código e executável) XML de configuração - Config.xml ...Jitbit WYSWYG BBCode Editor: Main package: Contains the JS-file, CSS-file and a sample.Live Writer Picasa Plugin: Live Writer Picasa Plugin 1.1.0: Changelog Communication with Picasa Web Albums is done directly via HTTP now (v1.0.0 used Google's GData .NET Libraries) The plugin can search fo...MRDS Services for Phidgets: Phidgets for RDS 2008 R3: First Beta Release This ZIP file contains a web page called Readme_CodePlex.html that explains how to install the RDS Phidgets services for RDS 200...MSBuild Addin: MsBuildAddin-v1.0.0: Initial versionMSBuild Addin: MsBuildAddin-v1.0.0-src.zip: Initial versionOutlook.Utility: Outlook.Utility v1: I have used most of the code in previous projects and seems to be quite stable. Of course the point of open sourcing this is so this project is use...Scrum Dashboard: Scrum Dashboard v3 Alpha 1: Scrum Dashboard v3 is targeting .NET 4, TFS 2010 and the brand new Scrum for Team System v3 process templates. Most of the code has been rewritten ...SharePoint Labs: SPLab4004A-FRA-Level100: SPLab4004A-FRA-Level100 This SharePoint Lab will teach you the 4th best practice you should apply when writing code with the SharePoint API. Lab La...SharePoint Labs: SPLab5012A-FRA-Level100: SPLab5012A-FRA-Level100 This SharePoint Lab will teach you how to provision a new welcome page (how to change and rename the default.aspx page) on ...Shweet: SharePoint 2010 Team Messaging built with Pex: Shweet Source Code: Although the latest version pex and moles used with this project is not available, we thought it would be useful to provide a download to the source.Syringe: Syringe 1.0: Features Dependency injection on properties of services in container Dependency injection on constructors of services in container ASP.Net Mvc ...Text to HTML: 0.4.1.0: Cambios de la versiónOptimización del código de exportación reduciendo el código. Cambio en el icono de exportación. Añadido menú Seleccionar t...VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 23: Build 23 Fix: Executing "Blame" through the Solution Explorer on a file opens TortoiseMerge rather than TortoiseBlame. 
Build 22 (beta) New: Visua...WPF Ink Canvas Toolbar: WPF Ink Canvas Toolbar 1.0: First release - included custom colour selectionMost Popular ProjectsRawrWBFS ManagerMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsGraffiti CMSnopCommerce. Open Source online shop e-commerce solution.RawrShweet: SharePoint 2010 Team Messaging built with Pexpatterns & practices – Enterprise LibraryAcadsysAutoPocoIonics Isapi Rewrite FilterNcqrs Framework - The CQRS framework for .NETFarseer Physics Engine

    Read the article

  • CodePlex Daily Summary for Sunday, May 30, 2010

    CodePlex Daily Summary for Sunday, May 30, 2010

    New Projects
    - Aviva Solutions C# Coding Guidelines: A set of C# coding guidelines, coding standards, layout rules, FxCop rulesets and (upcoming) custom FxCop and StyleCop rules for improving the over...
    - BKWork: private project.
    - classbook: A project to be completed within 10 days. Team: Do Bao Linh, Phan Thanh Tai, Nguyen Dang Loc.
    - Du an - 01: A project for Mr. Luong's course.
    - EndNote助手: EndNote Helper is an assistant tool for EndNote that makes your reference management more convenient. It is a small tool that assists EndNote with literature management; it can...
    - Evoucher: Simple Evoucher Sales System
    - Fiddler Delayed Responses Extension: A Fiddler extension that helps developers delay the delivery of HTML responses to applications. Some delay user stories: delivery of css to HTML ...
    - Generic Entity Model 2: GEM2 is a lightweight entity framework for building custom business solutions. It enables a rapid approach to entity logic design, while offering out o...
    - GY06: A project started by 长大工院 in 2009; it was not completed, for various reasons.
    - JoshDOS: JoshDOS is a command line operating system kernel based on COSMOS. It can be booted from actual hardware and built in Visual Studio using .NET la...
    - PhysicsFMUDeluxe.NET: This project is being developed as practice for building applications in C#. It contains tools for physics and mathematics calculations...
    - Reactor Services Platform: Reactor is a service composition and deployment grid that streamlines developing, composing, deploying and managing services. Reactor Services are ...
    - SerafinApartment: Simple MVC 2 application for apartment rental. It is a multilingual booking system that allows users to register, book and subscribe for notifications...
    - Silverlight Audio Effect Box: This is a C# Silverlight 4 sample application which processes audio samples in near real time. It allows capturing the default audio input device and...
    - Smart Voice: Smart Voice lets you control Skype using your voice. It allows you to write messages, issue phone calls, etc. This application was developed think...
    - WebDotNet - The minimalist web framework inspired by web.py: WebDotNet is an experiment in web frameworks. Inspired by the python web framework, web.py, it is an exercise in extreme minimalism in a framework...

    New Releases
    - Acies: Acies - Alpha Build 0.0.7: Alpha release. Requires Microsoft XNA Framework Redistributable 3.1 (http://www.microsoft.com/downloads/details.aspx?FamilyID=53867a2a-e249-4560-...
    - AdventureWorksLT 2008 Sample Database Script: AdventureWorksLT 2008 R2 DB Script: This script is based on the latest download of the sample database for SQL Server 2008 R2. The original download from the sample is approx. 84 MB and ...
    - ASP.NET Wiki Control: Release 1.3.1: Removed the ASP.NET Session dependency; BreadCrumbs will now work with sessions disabled. Can now also share a URL and have the breadcrumb appropriat...
    - Aviva Solutions C# Coding Guidelines: Visual Studio 2010 Rule Sets: Rule sets targeting different styles of projects.
    - bvcms - Bellevue Church Management System: Source: This source was used to build the latest {church}.bvcms.com
    - CC.Yacht: CC.Yacht 1.0.10.529: This is the initial release of CC.Yacht. Marked as beta since I don't have any testing/feedback beyond that of myself and my wife.
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V14: Release of the Community Forums NNTP Bridge to access the Social and Answers MS forums with a single, open source NNTP bridge. This release has add...
    - Community Forums NNTP bridge: Community Forums NNTP Bridge V15: Release of the Community Forums NNTP Bridge to access the Social and Answers MS forums with a single, open source NNTP bridge. This release has add...
    - Coronasoft Cryostasis scripting engine: CCSE v0.0.1.0 BETA: This is the 0.0.1.0 Beta release. If you find any bugs, post them as a comment or in the Discussions tab.
    - EndNote助手: EndNote助手2.1.0..0: Removed the registration verification mechanism.
    - Fiddler Delayed Responses Extension: v0.1: Version 0.1 of the Fiddler Delayed Responses Extension. See the ChangeLog for more information.
    - Generic Entity Model 2: GEM2 build 52510: This is the first BETA release of GEM2! The following items from the initial plan are still missing: detailed documentation, MySQL operational databa...
    - JoshDOS: JoshDOS Souce: This is the source for the JoshDOS 1.0 OS kernel. You need the COSMOS user kit to use it.
    - JoshDOS: Shell Version 1.0: What's in this download: the JoshDOS user kit, the JoshDOS VStudio starter kit, and the JoshDOS documentation. Note: you will need the COSMOS user kit to start deve...
    - miniTodo: mini Todo version 0.3: Fixed an issue where no sound played when a todo was completed.
    - Model Virtual Casting - ASPItalia.com: Model Virtual Casting 0.2: This second release of ModelVC corresponds to the one shown at the Real Code Conference 4.0 held ...
    - PhysicsFMUDeluxe.NET: PhysicsFMUDeluxe.NET - Setup: First public version of PhysicsFMUDeluxe.NET.
    - Silverlight Audio Effect Box: Echo Box 1.0: First release; the zip contains a web page and a xap file.
    - Smart Voice: Smart Voice 0.1: Here is the first alpha release of Smart Voice. Please remember this was done for a curricular unit project at my university and I understand that ...
    - StyleCop Contrib: Custom rules v0.2: This release of the custom rules targets StyleCop 4.3.3. Included rules are: Spacing Rules (NoTrailingWhiteSpace, IndentUsingTabs), Ordering Rules ...
    - System.ComponentModel.DataAnnotations Contrib: 0.2.46280.0: Built with Visual Studio 2010/.NET 4.0. Compiled from source code changeset 46280.
    - VB Styler: VB Styler Suite V 1.3.0.0: This is the newest version of the VB Styler. New features include a new Imaging ColorPicker, a template for getting colors via sliders, ColorR...
    - VCC: Latest build, v2.1.30529.0: Automatic drop of latest build.
    - WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.90 RC: Version 1.0.0.90 (Release Candidate). This release contains the source code of the WPF Application Framework (WAF) and the sample applications. ...
    - XNA Collision Detection: XNA Collision Detection Sample Program: I have coded a compact program which shows the collision detection working, and provides a camera class for rendering and moving the "player." Agai...

    Most Popular Projects
    - Rawr
    - WBFS Manager
    - AJAX Control Toolkit
    - Microsoft SQL Server Product Samples: Database
    - Silverlight Toolkit
    - Windows Presentation Foundation (WPF)
    - patterns & practices – Enterprise Library
    - Microsoft SQL Server Community & Samples
    - PHPExcel
    - ASP.NET

    Most Active Projects
    - AStar.net
    - patterns & practices – Enterprise Library
    - Community Forums NNTP bridge
    - BlogEngine.NET
    - GMap.NET - Great Maps for Windows Forms & Presentation
    - Ionics Isapi Rewrite Filter
    - Rawr
    - Customer Portal Accelerator for Microsoft Dynamics CRM
    - Facebook Developer Toolkit
    - PAP

    Read the article

  • Setup and Use SpecFlow BDD with DevExpress XAF

    - by Patrick Liekhus
    Let's get started with using the SpecFlow BDD syntax to write tests with the DevExpress XAF EasyTest scripting syntax. In order for this to work you will need to download and install the prerequisites listed below. Once they are installed, follow the steps outlined below and enjoy.

    Prerequisites
    Install the following items:
    - DevExpress eXpress Application Framework (XAF), found here
    - SpecFlow, found here
    - Liekhus BDD/XAF Testing library, found here

    Assumptions
    I am going to assume at this point that you have created your XAF application and have your Module, Win.Module and Win projects ready for use. You should also have set any attributes and/or settings as you see fit.

    Setup
    So where to start?
    1. Create a new testing project within your solution. I typically name it with a convention similar to the one XAF uses: my project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests).
    2. Add the following references to your project, so the reference list looks like this:
       - DevExpress.Data.v11.x
       - DevExpress.Persistent.Base.v11.x
       - DevExpress.Persistent.BaseImpl.v11.x
       - DevExpress.Xpo.v11.2
       - Liekhus.Testing.BDD.Core
       - Liekhus.Testing.BDD.DevExpress
       - TechTalk.SpecFlow
       - TestExecutor.v11.x (found in %Program Files%\DevExpress 2011.x\eXpressApp Framework\Tools\EasyTest)
    3. Right-click the TestExecutor reference and set its "Copy Local" setting to True. This forces the TestExecutor executable to be copied to the bin directory, which is where the EasyTest script will be executed further down in the process.
    4. Add an Application Configuration File (app.config) to your test project. You will need to make a few modifications to have SpecFlow generate Microsoft-style unit tests: first add the section handler for SpecFlow, then set your choice of testing framework. I prefer MS Tests for my projects. (A sketch of these entries appears after this list.)
    5. Add the EasyTest configuration file to your project: add a new XML file and call it Config.xml.
    6. Open the properties window for the Config.xml file and set "Copy to Output Directory" to "Copy Always".
    7. Set up the Config file according to the specifications of the EasyTest library, mapping it to your executable and any other settings you need. You can find the details for the configuration of EasyTest here.
    8. Create a new folder in your test project called "StepDefinitions".
    9. Add a new SpecFlow Step Definition file item under the StepDefinitions folder. I typically call this class StepDefinition.cs.
    10. Have your step definition class inherit from the Liekhus.Testing.BDD.DevExpress.StepDefinition class. This gives you the default behaviors for your test in the next section. (A sketch of this class also follows the list.)
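    The exact contents of the app.config depend on your SpecFlow version, but for SpecFlow releases of that era the two additions described in step 4 would look roughly like the sketch below; treat it as a minimal example, not the author's actual file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <!-- Section handler so .NET understands the <specFlow> element -->
            <section name="specFlow"
                     type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
          </configSections>
          <specFlow>
            <!-- Tell the SpecFlow generator to emit MSTest-style unit tests -->
            <unitTestProvider name="MsTest" />
          </specFlow>
        </configuration>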
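    The step definition class itself can stay almost empty, because the Liekhus base class is what supplies the default behaviors mentioned in step 10. A minimal sketch is below; the namespace follows the AlbumManager example, and whether the SpecFlow [Binding] attribute is strictly required depends on how the library's base class is written, so treat both as assumptions:

        using TechTalk.SpecFlow;

        namespace AlbumManager.FunctionalTests.StepDefinitions
        {
            // Inherits the default Given/When/Then bindings that drive EasyTest
            // from the Liekhus BDD/XAF testing library.
            [Binding]
            public class StepDefinition : Liekhus.Testing.BDD.DevExpress.StepDefinition
            {
                // Project-specific steps can be added here if the defaults are not enough.
            }
        }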
    OK. Now that we have done this series of steps, we will work on simplifying it. This is an early preview of this new project and it is not fully ready for consumption. If you would like to experiment with it, please feel free. Our goal is to make this an installable project in its own right, with its own project templates and default settings. That will come in later versions; currently the project is an Alpha release.

    Let's write our first test
    - Remove the basic test that is created for you. We will not use the default test, but rather create our own SpecFlow "Feature" files.
    - Add a new item to your project and select the SpecFlow Feature file item under C#.
    - Name your feature file as you do your class files, after the test it performs.

    Writing a feature file uses the Cucumber syntax of Given… When… Then. Think of it in these terms: the Givens are the pre-conditions for the test, the Whens are the actual steps of the test being performed, and the Thens are the verification steps that confirm whether your test passed or failed. All of these steps are generated into an EasyTest format and executed against your XAF project. You can find more on the Cucumber syntax in the Secret Ninja Cucumber Scrolls. That document has several good styles of tests, plus you can get your fill of Chuck Norris vs. Ninjas; it is pretty humorous but full of great content.

    My first test is going to exercise the entry of a new Album into the application and is outlined below (a sketch of the finished feature file appears at the end of this article). The Feature section at the top is mostly there for documentation purposes, so try to be descriptive of the test so that it makes sense to the next person behind you. The Scenario Outline is described in the Ninja Scrolls, but think of it as a test template: you can write one test outline and have multiple data sets (Scenarios) executed against it. Here are the steps of my test and their descriptions:
    - Given I am starting a new test – tells our test to create a new EasyTest file
    - And the application is open – tells EasyTest to open our application defined in the Config.xml
    - When I am at the "Albums" screen – tells XAF to navigate to the Albums list view
    - And I click the "New:Album" button – tells XAF to click the New Album button on the ribbon
    - And I enter the following information – tells XAF to find each field on the screen and put the supplied value in that field
    - And I click the "Save and Close" button – tells XAF to click the "Save and Close" button on the detail window
    - Then I verify results as "user" – tells the testing framework to execute the EasyTest as your configured user

    Once you compile and prepare your tests, your Test View will show a new test ready to execute for each CreateNewAlbum line in your scenarios. From here you use your testing framework of choice to execute the tests. This in turn runs the EasyTest framework, which calls back into your XAF application and tests your business application. Again, please remember that this is an early preview and we are still working out the details. Please let us know if you have any comments/questions/concerns. Thanks and happy testing.
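    Putting the pieces together, the CreateNewAlbum scenario described above might be written as the feature file sketched below. The step text mirrors the descriptions in the article; the Feature narrative, the table column names, and the example values are placeholders you would replace with your own:

        Feature: Album entry
            In order to keep my catalog current
            As a user of the Album Manager
            I want to create new albums

        Scenario Outline: CreateNewAlbum
            Given I am starting a new test
            And the application is open
            When I am at the "Albums" screen
            And I click the "New:Album" button
            And I enter the following information
                | Field | Value  |
                | Name  | <Name> |
            And I click the "Save and Close" button
            Then I verify results as "user"

            Examples:
                | Name       |
                | Abbey Road |
                | The Wall   |

    Each row in the Examples table shows up as its own test in Test View, and the generated tests run like any other MSTest suite, for example with mstest /testcontainer:AlbumManager.FunctionalTests.dll (the assembly name here is an assumption based on the project naming above).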

    Read the article
