Search Results

Search found 23792 results on 952 pages for 'void pointers'.

  • NSTimer won't stop it only resets when invalidated & released

    - by J Fries
    When I press my stop button to stop the timer, it just resets to the original time and begins counting down again. I have looked everywhere and all I have found is "invalidate", and it isn't working. I want the timer to stop when I hit stop and the label to display the original time. I also turned off automatic reference counting so I could try releasing, and that gives me an error: `0x10e20a5: movl 16(%edx), %edx EXC_BAD_ACCESS (code=2, address=0x10)`

        NSTimer *rockettTimer;
        int rocketCount;

        @interface FirstViewController ()
        @property (strong, nonatomic) IBOutlet UILabel *rocketTimer;
        - (IBAction)stopButton:(id)sender;
        - (IBAction)startButton:(id)sender;
        @end

        @implementation FirstViewController
        @synthesize rocketTimer;

        - (void)rocketTimerRun {
            rocketCount = rocketCount - 1;
            int minuts = rocketCount / 60;
            int seconds = rocketCount - (minuts * 60);
            NSString *timerOutput = [NSString stringWithFormat:@"%d:%.2d", minuts, seconds];
            rocketTimer.text = timerOutput;
        }

        - (IBAction)startButton:(id)sender {
            rocketCount = 180;
            rockettTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                            target:self
                                                          selector:@selector(rocketTimerRun)
                                                          userInfo:nil
                                                           repeats:YES];
        }

        - (IBAction)stopButton:(id)sender {
            [rockettTimer invalidate];
            //[rockettTimer release];
        }

        - (void)viewDidLoad {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.
        }

        - (void)viewDidUnload {
            [self setRocketTimer:nil];
            [super viewDidUnload];
            // Release any retained subviews of the main view.
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
            if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
                return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
            } else {
                return YES;
            }
        }

        @end
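    A minimal sketch of the start/stop pattern under ARC, using assumed names (countdownTimer, timerLabel, tick) rather than the ones above: invalidate any existing timer before scheduling a new one, and nil the reference after invalidating so a stale pointer is never messaged and no manual release is needed.

        // Sketch only: property and method names are assumptions, not the poster's code.
        @interface FirstViewController ()
        @property (strong, nonatomic) NSTimer *countdownTimer;
        @property (strong, nonatomic) IBOutlet UILabel *timerLabel;
        @end

        @implementation FirstViewController {
            int secondsLeft;
        }

        - (IBAction)startButton:(id)sender {
            [self.countdownTimer invalidate];        // never stack a second timer
            secondsLeft = 180;
            self.countdownTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                                   target:self
                                                                 selector:@selector(tick)
                                                                 userInfo:nil
                                                                  repeats:YES];
        }

        - (IBAction)stopButton:(id)sender {
            [self.countdownTimer invalidate];        // stops the callbacks
            self.countdownTimer = nil;               // drop the reference; no release under ARC
            self.timerLabel.text = @"3:00";          // show the original time again
        }

        - (void)tick {
            secondsLeft--;
            self.timerLabel.text = [NSString stringWithFormat:@"%d:%02d",
                                    secondsLeft / 60, secondsLeft % 60];
            if (secondsLeft <= 0) {
                [self.countdownTimer invalidate];
                self.countdownTimer = nil;
            }
        }
        @end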

  • Java PropertyChangeListener

    - by Laphroaig
    Hi, I'm trying to figure out how to listen for a property change on another class. This is my code. The class with the property to listen to:

        public class ClassWithProperty {
            private PropertyChangeSupport changes = new PropertyChangeSupport(this);
            private int usersOnline;

            public int getUsersOnline() {
                return usersOnline;
            }

            public ClassWithProperty() {
                usersOnline = 0;
                while (usersOnline < 10) {
                    changes.firePropertyChange("usersOnline", usersOnline, usersOnline++);
                }
            }

            public void addPropertyChangeListener(PropertyChangeListener l) {
                changes.addPropertyChangeListener(l);
            }

            public void removePropertyChangeListener(PropertyChangeListener l) {
                changes.removePropertyChangeListener(l);
            }
        }

    The class where I need to know when the property changes:

        public class Main {
            private static ClassWithProperty test;

            public static void main(String[] args) {
                test = new ClassWithProperty();
                test.addPropertyChangeListener(listen());
            }

            private static PropertyChangeListener listen() {
                System.out.println(test.getUsersOnline());
                return null;
            }
        }

    The event only seems to fire for the last value (usersOnline = 10). Sorry if this is a stupid question; I'm learning Java now and can't find a solution.
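    A sketch of the usual pattern, keeping the same usersOnline property: register a real PropertyChangeListener before any changes happen, and fire the change from a setter instead of the constructor (at construction time nothing is registered yet, and a helper that returns null registers nothing at all).

        import java.beans.PropertyChangeEvent;
        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        public class UsersOnlineDemo {

            static class ClassWithProperty {
                private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
                private int usersOnline;

                public int getUsersOnline() { return usersOnline; }

                // Fire from the setter: by the time it runs, listeners can be registered.
                public void setUsersOnline(int value) {
                    int old = usersOnline;
                    usersOnline = value;
                    changes.firePropertyChange("usersOnline", old, value);
                }

                public void addPropertyChangeListener(PropertyChangeListener l) {
                    changes.addPropertyChangeListener(l);
                }
            }

            public static void main(String[] args) {
                ClassWithProperty test = new ClassWithProperty();

                test.addPropertyChangeListener(new PropertyChangeListener() {
                    @Override
                    public void propertyChange(PropertyChangeEvent evt) {
                        System.out.println(evt.getPropertyName() + ": "
                                + evt.getOldValue() + " -> " + evt.getNewValue());
                    }
                });

                for (int i = 1; i <= 10; i++) {
                    test.setUsersOnline(i);   // every change is now reported
                }
            }
        }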

  • Which way of declaring a variable is fastest?

    - by ADB
    For a variable used in a function that is called very often and for implementation in J2ME on a blackberry (if that changed something, can you explain)? class X { int i; public void someFunc(int j) { i = 0; while( i < j ){ [...] i++; } } } or class X { static int i; public void someFunc(int j) { i = 0; while( i < j ){ [...] i++; } } } or class X { public void someFunc(int j) { int i = 0; while( i < j ){ [...] i++; } } } I know there is a difference how a static versus non-static class variable is accessed, but I don't know it would affect the speed. I also remember reading somewhere that in-function variables may be accessed faster, but I don't know why and where I read that. Background on the question: some painting function in games are called excessively often and even small difference in access time can affect the overall performance when a variable is used in a largish loop.

  • C# WPF Unable to control Textboxes

    - by Bo0m3r
    I'm a beginner at coding in C#. While I'm launching a process, I can't control my textboxes. I found some answers on this forum, but the explanation is a bit too difficult for me to apply to my problem. I created a small program that runs a batch file to make a backup. While the backup is running I can't modify my textboxes, disable buttons, etc. I've already read that this is normal, but I don't know how to implement the solutions. My last attempt was with Dispatcher.Invoke, as you can see below.

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                tb_Status.Text = "Ready";
            }

            public void status()
            {
                Dispatcher.Invoke(DispatcherPriority.Send, new Action(() =>
                {
                    tb_Status.Text = "The backup is running!";
                }));
            }

            public void process()
            {
                try
                {
                    Process p = new Process();
                    p.StartInfo.WindowStyle = ProcessWindowStyle.Minimized;
                    p.StartInfo.CreateNoWindow = true;
                    p.StartInfo.UseShellExecute = false;
                    p.StartInfo.RedirectStandardOutput = true;
                    p.StartInfo.FileName = "Robocopy.bat";
                    p.Start();
                    string output = p.StandardOutput.ReadToEnd();
                    p.WaitForExit();
                    tb_Output.Text = File.ReadAllText("Backup\\log.txt");
                }
                catch (Exception ex)
                {
                    tb_Status.Text = ex.Message.ToString();
                }
            }

            private void Bt_Start_Click(object sender, EventArgs e)
            {
                status();
                Directory.CreateDirectory("Backup");
                process();
                tb_Status.Text = "The backup finished";
                File.Delete("Backup\\log.txt");
            }
        }

    Any help is appreciated!
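    The textboxes freeze because process() runs the batch file and WaitForExit() on the UI thread, so the window cannot repaint or react until the backup ends; Dispatcher.Invoke does not change that. One common approach, sketched here assuming .NET 4.5 (async/await) and the same control names, is to run the blocking work on a background thread and return to the UI afterwards:

        // Sketch only: needs using System; using System.Diagnostics; using System.IO;
        // using System.Threading.Tasks; using System.Windows;
        private async void Bt_Start_Click(object sender, RoutedEventArgs e)
        {
            tb_Status.Text = "The backup is running!";
            Directory.CreateDirectory("Backup");

            string log = await Task.Run(() =>
            {
                var p = new Process();
                p.StartInfo.FileName = "Robocopy.bat";
                p.StartInfo.UseShellExecute = false;
                p.StartInfo.CreateNoWindow = true;
                p.Start();
                p.WaitForExit();                      // blocks a worker thread, not the UI thread
                return File.ReadAllText("Backup\\log.txt");
            });

            tb_Output.Text = log;                     // back on the UI thread after the await
            tb_Status.Text = "The backup finished";
            File.Delete("Backup\\log.txt");
        }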

  • Checking if a function has C-linkage at compile-time [unsolvable]

    - by scjohnno
    Is there any way to check if a given function is declared with C-linkage (that is, with extern "C") at compile-time? I am developing a plugin system. Each plugin can supply factory functions to the plugin-loading code. However, this has to be done via name (and subsequent use of GetProcAddress or dlsym). This requires that the functions be declared with C-linkage so as to prevent name-mangling. It would be nice to be able to throw a compiler error if the referred-to function is declared with C++-linkage (as opposed to finding out at runtime when a function with that name does not exist). Here's a simplified example of what I mean: extern "C" void my_func() { } void my_other_func() { } // Replace this struct with one that actually works template<typename T> struct is_c_linkage { static const bool value = true; }; template<typename T> void assertCLinkage(T *func) { static_assert(is_c_linkage<T>::value, "Supplied function does not have C-linkage"); } int main() { assertCLinkage(my_func); // Should compile assertCLinkage(my_other_func); // Should NOT compile } Is there a possible implementation of is_c_linkage that would throw a compiler error for the second function, but not the first? I'm not sure that it's possible (though it may exist as a compiler extension, which I'd still like to know of). Thanks.
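    There is no portable compile-time test for this: language linkage is formally part of a function's type, but in practice compilers don't distinguish it in the type system, so a template cannot see it. A common workaround is to stop detecting and start enforcing, by having plugins declare their factories through a macro that forces extern "C". A sketch, with a made-up macro name and an example signature:

        // Hypothetical helper macro; the exported signature (void *()) is just an example.
        #define PLUGIN_EXPORT_FACTORY(fn) \
            extern "C" void *fn();        \
            void *fn()

        PLUGIN_EXPORT_FACTORY(my_factory)
        {
            return nullptr; // a real plugin would return its factory object here
        }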

  • Generic Event Generator and Handler from User Supplied Types?

    - by JaredBroad
    I'm trying to allow the user to supply custom data and manage the data with custom types. The user's algorithm will get time synchronized events pushed into the event handlers they define. I'm not sure if this is possible but here's the "proof of concept" code I'd like to build. It doesn't detect T in the for loop: "The type or namespace name 'T' could not be found" class Program { static void Main(string[] args) { Algorithm algo = new Algorithm(); Dictionary<Type, string[]> userDataSources = new Dictionary<Type, string[]>(); // "User" adding custom type and data source for algorithm to consume userDataSources.Add(typeof(Weather), new string[] { "temperature data1", "temperature data2" }); for (int i = 0; i < 2; i++) { foreach (Type T in userDataSources.Keys) { string line = userDataSources[typeof(T)][i]; //Iterate over CSV data.. var userObj = new T(line); algo.OnData < typeof(T) > (userObj); } } } //User's algorithm pattern. interface IAlgorithm<TData> where TData : class { void OnData<TData>(TData data); } //User's algorithm. class Algorithm : IAlgorithm<Weather> { //Handle Custom User Data public void OnData<Weather>(Weather data) { Console.WriteLine(data.date.ToString()); Console.ReadKey(); } } //Example "user" custom type. public class Weather { public DateTime date = new DateTime(); public double temperature = 0; public Weather(string line) { Console.WriteLine("Initializing weather object with: " + line); date = DateTime.Now; temperature = -1; } } }
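    The compiler error comes from using the runtime Type variable T as if it were a generic type parameter; new T(line) and OnData&lt;typeof(T)&gt; cannot work. A sketch of one way to keep the loop purely runtime-driven, intended as a drop-in replacement for the loop inside Main above, and assuming Algorithm exposes an OnData(Weather) overload as in the class shown (with the stray generic parameter removed from the method):

        // Requires: using System; using System.Collections.Generic;
        for (int i = 0; i < 2; i++)
        {
            foreach (Type dataType in userDataSources.Keys)
            {
                string line = userDataSources[dataType][i];                 // iterate over CSV data
                object userObj = Activator.CreateInstance(dataType, line);  // runtime construction
                algo.OnData((dynamic)userObj);                              // overload picked at run time
            }
        }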

  • Simple iOS Get Request Not Pulling in Data

    - by user2793987
    I have a very simple get request that doesn't return data on my script. The script is fine when viewed in the web browser but the app does not pull in the data. Any other script works with this code in the app. I'm unsure of what to do because there shouldn't be a reason for this to not work. Please let me know if you need more information. Here's the script: https://shipstudent.com/complaint_desk/similarPosts.php?username=noah //retrieve saved username from user defaults (it's noah) NSUserDefaults *eUser = [NSUserDefaults standardUserDefaults]; savedUser = [eUser objectForKey:@"user"]; NSLog(@"%@",savedUser); in ViewDidLoad: NSString *categoryParam = [NSString stringWithFormat:@"https://shipstudent.com/complaint_desk/similarPosts.php?username=%@", savedUser]; NSURL *url = [NSURL URLWithString:categoryParam]; NSMutableURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLCacheStorageAllowed timeoutInterval:30.0]; NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request delegate:self]; if (connection==nil) { NSLog(@"Invalid request"); } -(void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { postData = [[NSMutableData alloc]init]; NSLog(@"Response received"); } -(void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [postData appendData:data]; NSLog(@"Data received"); } -(void)connectionDidFinishLoading:(NSURLConnection *)connection { [UIApplication sharedApplication].networkActivityIndicatorVisible = NO; similarPosts = [NSJSONSerialization JSONObjectWithData:postData options:kNilOptions error:nil]; [postsTbl reloadData]; for (id postObject in similarPosts) { NSLog(@"Relatable Post: %@",postObject); } }
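    Since other scripts work with the same code, the first step is to see how this request fails instead of guessing; the delegate methods above only log the success path. A sketch of the extra NSURLConnectionDelegate methods worth adding (property names as in the question), so a certificate problem, a nil savedUser in the URL, or a JSON parse error shows up in the console:

        - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error {
            NSLog(@"Request failed: %@", error);
        }

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response {
            postData = [[NSMutableData alloc] init];
            NSHTTPURLResponse *http = (NSHTTPURLResponse *)response;
            NSLog(@"Response received, status %ld", (long)http.statusCode);
        }

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection {
            NSError *jsonError = nil;
            similarPosts = [NSJSONSerialization JSONObjectWithData:postData
                                                           options:kNilOptions
                                                             error:&jsonError];
            if (jsonError) {
                NSLog(@"JSON parse error: %@", jsonError);   // e.g. PHP notices mixed into the output
            }
            [postsTbl reloadData];
        }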

  • Using Moq to Validate Separate Invocations with Distinct Arguments

    - by Thermite
    I'm trying to validate the values of arguments passed to subsequent mocked method invocations (of the same method), but cannot figure out a valid approach. A generic example follows: public class Foo { [Dependency] public Bar SomeBar { get; set; } public void SomeMethod() { this.SomeBar.SomeOtherMethod("baz"); this.SomeBar.SomeOtherMethod("bag"); } } public class Bar { public void SomeOtherMethod(string input) { } } public class MoqTest { [TestMethod] public void RunTest() { Mock<Bar> mock = new Mock<Bar>(); Foo f = new Foo(); mock.Setup(m => m.SomeOtherMethod(It.Is<string>("baz"))); mock.Setup(m => m.SomeOtherMethod(It.Is<string>("bag"))); // this of course overrides the first call f.SomeMethod(); mock.VerifyAll(); } } Using a Function in the Setup might be an option, but then it seems I'd be reduced to some sort of global variable to know which argument/iteration I'm verifying. Maybe I'm overlooking the obvious within the Moq framework?
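    One way to check each invocation's argument, sketched here assuming Bar.SomeOtherMethod is made virtual (Moq can only intercept virtual or interface members) and that the mock is injected through the SomeBar property: record the arguments with a Callback and assert on the sequence, or use Verify per expected value instead of Setup.

        [TestMethod]
        public void SomeMethod_calls_SomeOtherMethod_with_baz_then_bag()
        {
            var mock = new Mock<Bar>();
            var calls = new List<string>();
            mock.Setup(m => m.SomeOtherMethod(It.IsAny<string>()))
                .Callback<string>(calls.Add);                           // record every argument passed

            var foo = new Foo { SomeBar = mock.Object };
            foo.SomeMethod();

            CollectionAssert.AreEqual(new[] { "baz", "bag" }, calls);   // values and order in one assert

            mock.Verify(m => m.SomeOtherMethod("baz"), Times.Once());   // order-insensitive alternative
            mock.Verify(m => m.SomeOtherMethod("bag"), Times.Once());
        }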

  • QMetaMethod for regular methods missing?

    - by oleks
    Hi, I'm new in QT, and I'm just testing out the MOC. For a given class: class Counter : public QObject { Q_OBJECT int m_value; public: Counter() {m_value = 0;} ~Counter() {} int value() {return m_value;} public slots: void setValue(int value); signals: void valueChanged(int newValue); }; I want to get a list of all methods in a class, but seem to only be getting a list of signals and slots, although the documentation says it should be all methods? Here's my code: #include <QCoreApplication> #include <QObject> #include <QMetaMethod> #include <iostream> using std::cout; using std::endl; int main(int argc, char *argv[]) { QCoreApplication app(argc, argv); const QMetaObject cntmo = Counter::staticMetaObject; for(int i = 0; i != cntmo.methodCount(); ++i) { QMetaMethod qmm(cntmo.method(i)); cout << qmm.signature() << endl; } return app.exec(); } Please beware this is my best c/p, perhaps I forgot to include some headers. My output: destroyed(QObject*) destroyed() deleteLater() _q_reregisterTimers(void*) valueChanged(int) setValue(int) Does anyone know why this is happening? Does qt not recognise int value() {return m_value;} as a valid method? If so, is there a macro I've forgotten or something like that? P.S. I'm using 4.6.2
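    QMetaObject only lists signals, slots, and methods tagged Q_INVOKABLE; a plain member function such as value() is invisible to moc, which is why it never shows up in methodCount(). A sketch of the one-line change:

        class Counter : public QObject {
            Q_OBJECT
            int m_value;
        public:
            Counter() { m_value = 0; }
            ~Counter() {}
            Q_INVOKABLE int value() { return m_value; }   // now reported by the meta-object
        public slots:
            void setValue(int value);
        signals:
            void valueChanged(int newValue);
        };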

  • How to start an activity from a dialog's positive button?

    - by Wisnuardi
    When I tap an item on the map, a dialog appears with a positive button that reads "Route to". How do I start an activity from that positive button? I tried this:

        dialog.setPositiveButton("Tampilkan Rute", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int Button) {
                Intent i = new Intent(this, Rute.class);
                startActivity(i);
            }
        });

    to start an activity for the Rute class, but it always says "remove argument to match Intent()" and then I don't know what to do. Here is my code:

        @Override
        protected boolean onTap(int index) {
            OverlayItem item = items.get(0);
            AlertDialog.Builder dialog = new AlertDialog.Builder(mContext);
            dialog.setTitle(item.getTitle());
            dialog.setMessage(item.getSnippet());
            dialog.setPositiveButton("Tampilkan Rute", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int Button) {
                    Intent i = new Intent(this, Rute.class);
                    startActivity(i);
                }
            });
            dialog.setNegativeButton("Kembali", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int Button) {
                    dialog.cancel();
                }
            });
            dialog.show();
            return true;
        }

    Any suggestions will be greatly appreciated. Thanks, and I'm sorry if my English is bad :(
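    Inside the anonymous OnClickListener, this refers to the listener object, not to a Context, which is why new Intent(this, Rute.class) is rejected. A sketch using the overlay's stored context (mContext, as in the code above):

        dialog.setPositiveButton("Tampilkan Rute", new DialogInterface.OnClickListener() {
            @Override
            public void onClick(DialogInterface dialog, int which) {
                Intent i = new Intent(mContext, Rute.class);
                mContext.startActivity(i);
            }
        });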

  • Android getSelectedItem, how to use?

    - by user1881184
    I'm trying to use the spinner control's selection to send the user to another screen in the app. For example, if the user chooses Chevy in the spinner, it should take them to another screen, which is coded in chevy.xml and Chevy.class. This is what I have so far and I need some help, as our book only used getSelectedItem and its example was only for an output statement. Please help.

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.AdapterView;
        import android.widget.AdapterView.OnItemSelectedListener;
        import android.widget.Spinner;

        public class Mainpage extends Activity implements OnItemSelectedListener {

            String carChoice, chevy, ford, dodge, toyota;

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                /*
                carChoice = group.getSelectedItem().toString();
                }
                if (carChoice.compareTo(chevy) == 0) {
                    startActivity(new Intent(Mainpage.this, Chevy.class));
                */
            }

            public void onItemSelected(AdapterView<?> arg0, View arg1, int arg2, long arg3) {
                final Spinner group = (Spinner) findViewById(R.id.carGroup);
                group.setOnItemSelectedListener(this);
                // TODO Auto-generated method stub
                String selected = group.getItemAtPosition(1).toString();
            }

            public void onNothingSelected(AdapterView<?> arg0) {
                // TODO Auto-generated method stub
            }
        }
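    A sketch of the usual wiring, with the imports shown above: register the listener once in onCreate and use the position handed to onItemSelected instead of re-querying the spinner. Note that onItemSelected also fires once when the spinner is first laid out.

        public class Mainpage extends Activity implements OnItemSelectedListener {

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                Spinner group = (Spinner) findViewById(R.id.carGroup);
                group.setOnItemSelectedListener(this);   // wire the listener once, up front
            }

            @Override
            public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
                String selected = parent.getItemAtPosition(position).toString();
                if (selected.equalsIgnoreCase("chevy")) {
                    startActivity(new Intent(Mainpage.this, Chevy.class));
                }
            }

            @Override
            public void onNothingSelected(AdapterView<?> parent) {
                // nothing selected: stay on this screen
            }
        }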

  • How to get HTTP responses continuously in a Java application?

    - by senrulz
    I have the code shown below, where I have scheduled a task to get an HTTP response from a PHP page on a web server.

        public static void stopPHPDataChecker() {
            canStop = true;
        }

        public static void main(String args[]) {
            // http request to the php page and get the response
            PHPDataChecker pdc = new PHPDataChecker();
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            final ScheduledFuture<?> pdcHandle =
                    scheduler.scheduleAtFixedRate(pdc, 0L, 10L, TimeUnit.MILLISECONDS); // Start schedule

            scheduler.schedule(new Runnable() {
                public void run() {
                    System.out.println(">> TRY TO STOP!!!");
                    pdcHandle.cancel(true);
                    Sheduler.stopPHPDataChecker();
                    System.out.println("DONE");
                }
            }, 1L, TimeUnit.MILLISECONDS);

            do {
                if (canStop) {
                    scheduler.shutdown();
                }
            } while (!canStop);

            System.out.println("END");
        }

    This code only returns one response, but I want it to keep getting responses continuously so I can do different tasks according to the returned value. How can I do that? Thank you in advance :)
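    A sketch of the usual polling pattern: let scheduleAtFixedRate keep running the HTTP check and react to each response inside the task itself, instead of cancelling the schedule after the first run. PHPDataChecker is assumed here to expose a fetch() method that returns the page body (a hypothetical name).

        // Requires the java.util.concurrent imports used above.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                try {
                    String response = new PHPDataChecker().fetch();  // one HTTP round trip
                    if ("1".equals(response.trim())) {
                        // ... do one task ...
                    } else {
                        // ... do another task ...
                    }
                } catch (Exception e) {
                    e.printStackTrace();   // never let an exception kill the schedule
                }
            }
        }, 0L, 10L, TimeUnit.SECONDS);

        // Only when the application really is finished:
        // scheduler.shutdown();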

  • Wrong class type in objective C

    - by Max Hui
    I have a parent class and a child class. GameObjectBase (parent) GameObjectPlayer(child). When I override a method in Child class and call it using [myPlayerClass showNextFrame] It is calling the parent class one. It turns out in the debugger, I see the myPlayerClass was indeed class type GameObjectBase (which is the parent class) How come? GameObjectBase.h #import <Foundation/Foundation.h> #import "cocos2d.h" @class GameLayer; @interface GameObjectBase : NSObject { /* CCSprite *gameObjectSprite; // Sprite representing this game object GameLayer *parentGameLayer; */ // Reference of the game layer this object // belongs to } @property (nonatomic, assign) CCSprite *gameObjectSprite; @property (nonatomic, assign) GameLayer *parentGameLayer; // Class method. Autorelease + (id) initWithGameLayer:(GameLayer *) gamelayer imageFileName:(NSString *) fileName; // "Virtual methods" that the derived class should implement. // If not implemented, this method will be called and Assert game - (void) update: (ccTime) dt; - (void) showNextFrame; @end GameObjectPlayer.h #import <Foundation/Foundation.h> #import "GameObjectBase.h" @interface GameObjectPlayer : GameObjectBase { int direction; } @property (nonatomic) int direction; @end GameLayer.h #import "cocos2d.h" #import "GameObjectPlayer.h" @interface GameLayer : CCLayer { } // returns a CCScene that contains the GameLayer as the only child +(CCScene *) scene; @property (nonatomic, strong) GameObjectPlayer *player; @end When I call examine in debugger what type "temp" is in this function inside GameLayer class, it's giving parent class GameObjectBase instead of subclass GameObjectPlayer - (void) update:(ccTime) dt { GameObjectPlayer *temp = _player; [temp showNextFrame]; }
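    One likely culprit, offered as a sketch since the body of initWithGameLayer:imageFileName: isn't shown: if that class method allocates with [[GameObjectBase alloc] init], every caller gets the base class even when the message is sent to GameObjectPlayer. Allocating through [self alloc] makes the receiving class the one that is instantiated:

        + (id)initWithGameLayer:(GameLayer *)gameLayer imageFileName:(NSString *)fileName
        {
            // 'self' here is the class the message was sent to, so
            // [GameObjectPlayer initWithGameLayer:...] really builds a GameObjectPlayer.
            GameObjectBase *object = [[self alloc] init];
            object.parentGameLayer = gameLayer;
            object.gameObjectSprite = [CCSprite spriteWithFile:fileName];
            return object;
        }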

  • C# return and display syntax issue

    - by thatdude
    I am having trouble passing the return value from TheMethod() to Main and displaying the word when one of the if statements evaluates to true. I have thought of two ways of doing this; neither has worked, but I think I am just missing syntax: using a non-void method that returns a string and then displaying the returned value, or using a void method and writing the output directly (example below). I am new at this, and after so many iterations everything is blending together and I have forgotten what I have already tried. Any help on the syntax for either approach would be great. Basically I need it to iterate over the numbers 1, 2, 3, 4 and, when the current iteration matches an expression in one of the if statements, display a word. Example:

        if (3 == i) {
            Console.WriteLine("Word");
        }

    Code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Threading.Tasks;

        namespace Proj5
        {
            class Program
            {
                int i = 0;

                static void Main(int i)
                {
                    for (i = 0; i < 101; i++)
                    {
                        Console.WriteLine("test");
                    }
                }

                string TheMethod(int i)
                {
                    string f = "Word1";
                    string b = "Word2";
                    if (i == 3) { return f; }
                    if (i == 5) { return b; }
                    if (0 == (i % 3)) { return f; }
                    if (0 == i % 5) { return b; }
                    else { return b; }
                }
            }
        }
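    A sketch of the wiring described above: Main owns the loop, calls the helper for each number, and prints whatever non-null value comes back. The conditions and word choices are placeholders for the real rules.

        using System;

        class Program
        {
            static void Main()
            {
                for (int i = 1; i <= 4; i++)
                {
                    string word = TheMethod(i);
                    if (word != null)
                    {
                        Console.WriteLine(word);   // only printed when a rule matched
                    }
                }
            }

            static string TheMethod(int i)
            {
                if (i % 3 == 0) return "Word1";
                if (i % 5 == 0) return "Word2";
                return null;                       // no match, nothing to display
            }
        }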

  • How to get a related collection from another entity class (Java Persistence API) and show it in a JSP through a servlet?

    - by user1787209
    I have two entities, Meeting and MeetingAgenda. I generated the entity classes (EJB) from the database like this:

        public class Meeting implements Serializable {
            ......
            @XmlTransient
            public Collection<MeetingAgenda> getMeetingAgendaCollection() {
                return meetingAgendaCollection;
            }

            public void setMeetingAgendaCollection(Collection<MeetingAgenda> meetingAgendaCollection) {
                this.meetingAgendaCollection = meetingAgendaCollection;
            }
            .......
        }

    and the MeetingAgenda entity class like this:

        .....
        public class MeetingAgenda implements Serializable {
            ....
            public String getAgenda() {
                return agenda;
            }

            public void setAgenda(String agenda) {
                this.agenda = agenda;
            }
            ....
        }

    The getMeetingAgendaCollection method is the relation from the Meeting entity. Then, in my controller servlet, I call the EJB like this:

        public class ControllerServlet extends HttpServlet {
            @EJB
            private RapatFacadeLocal rapatFacade;

            public void init() throws ServletException {
                // store category list in servlet context
                getServletContext().setAttribute("meetings", rapatFacade.findAll());
            }
            ......

    I want to show data from both the Meeting and MeetingAgenda tables, but I can't. Please help. My JSP page looks like this:

        <c:forEach var="meeting" items="${meetings}">
            <td> MeetingCode : ${meeting.meetingCode} </td>
            <td> Meeting : ${meeting.meeting} </td>
            <td> Agenda : ${meeting.getMeetingAgendaCollection} </td>
        </c:forEach>

    How do I display the agenda data using getMeetingAgendaCollection? Thanks for your help.
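    A sketch of how the related collection is usually reached from EL: refer to the property name (meetingAgendaCollection) rather than the getter, and loop over it with a nested forEach. The collection still has to be loaded (fetched eagerly or touched inside the EJB call) before the JSP renders.

        <c:forEach var="meeting" items="${meetings}">
            <td> MeetingCode : ${meeting.meetingCode} </td>
            <td> Meeting : ${meeting.meeting} </td>
            <td>
                Agenda :
                <c:forEach var="agenda" items="${meeting.meetingAgendaCollection}">
                    ${agenda.agenda}<br/>
                </c:forEach>
            </td>
        </c:forEach>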

  • Compile error: C program built over telnet (Linux)

    - by lilyrose07
        #include <stdio.h>
        #include <unistd.h>
        #include <stdlib.h>
        #include <pthread.h>

        int count = 0;

        void *thread_function(void *arg)
        {
            while (count < 10) {
                if (count % 2 == 1) {
                    count++;
                } else {
                    sleep(1);
                }
            }
        }

        int main(int argc, int *argv)
        {
            int res;
            pthread_t a_thread[2];
            void *thread_result;
            int n;

            while (count < 10) {
                if (count % 2 == 0) {
                    printf("%d", count);
                    count++;
                } else {
                    sleep(1);
                }
            }

            for (n = 0; n < 2; n++) {
                pthread_create(&(a_thread[n]), NULL, thread_function, NULL);
            }

            while (count == 9) {
                pthread_join(a_thread[0], &thread_result);
            }
            while (count == 10) {
                pthread_join(a_thread[1], &thread_result);
            }

            printf("%d", count);
            return 0;
        }

    Over telnet on Linux I compile with gcc za.c and get this error list: undefined reference to 'pthread_create' and 'pthread_join' in function 'main'. Why?
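    Those undefined references come from the linker, not the compiler: pthread_create and pthread_join live in the pthread library, which has to be named on the command line.

        gcc za.c -o za -lpthread
        # or, equivalently with gcc:
        gcc -pthread za.c -o za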

  • How can I communicate with an Object created in another JFrame?

    - by user3093422
    My program basically consists of two frames. When I click a button on Frame1, Frame2 pops up, and when I click a button on Frame2, an Object is created and the window closes. Now I need to be able to use the methods of that Object in Frame1; how can this be achieved? I am kind of new to object-oriented programming, sorry, so it's hard for me to explain the situation. Thanks! I will put some rough code below purely as an example. JFrame 1:

        public class JFrame1 extends JFrame {
            variables..

            public JFrame1() {
                GUIcomponents....
            }

            public static void main(String[] args) {
                JFrame1 aplicacion = new JFrame1();
                aplicacion.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            }

            private class ActList implements ActionListener {
                public void actionPerformed(ActionEvent event) {
                    new JFrame2();
                }
            }
        }

    JFrame 2:

        public class JFrame2 extends JFrame {
            variables..

            public JFrame2() {
                GUIcomponents....
            }

            private class ActList implements ActionListener {
                public void actionPerformed(ActionEvent event) {
                    Object object = new Object();
                    setVisible(false);
                }
            }
        }

    Sorry if it's messy; I wrote it on the spot. Basically I want JFrame1 to be able to use the getters and setters of the Object that was created in JFrame2. What should I do? Once again, thanks!
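    One common pattern, sketched with made-up method names: give the second frame a reference to the first, and have it hand the created object back through a setter (or a small callback interface) before it closes.

        // In JFrame2: remember who opened this frame and report back to it.
        public class JFrame2 extends JFrame {
            private final JFrame1 owner;

            public JFrame2(JFrame1 owner) {
                this.owner = owner;
                // ... build GUI components ...
            }

            private class ActList implements ActionListener {
                public void actionPerformed(ActionEvent event) {
                    owner.setResult(new Object());   // JFrame1 now holds the object
                    setVisible(false);
                    dispose();
                }
            }
        }

        // In JFrame1: keep the object and open JFrame2 with a reference to 'this'.
        private Object result;                       // use the real type instead of Object

        public void setResult(Object result) {
            this.result = result;                    // afterwards its getters/setters are usable here
        }

        private class ActList implements ActionListener {
            public void actionPerformed(ActionEvent event) {
                new JFrame2(JFrame1.this).setVisible(true);
            }
        }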

  • Camera for 2.5D Game

    - by me--
    I'm hoping someone can explain this to me like I'm 5, because I've been struggling with this for hours and simply cannot understand what I'm doing wrong. I've written a Camera class for my 2.5D game. The intention is to support world and screen spaces like this: The camera is the black thing on the right. The +Z axis is upwards in that image, with -Z heading downwards. As you can see, both world space and screen space have (0, 0) at their top-left. I started writing some unit tests to prove that my camera was working as expected, and that's where things started getting...strange. My tests plot coordinates in world, view, and screen spaces. Eventually I will use image comparison to assert that they are correct, but for now my test just displays the result. The render logic uses Camera.ViewMatrix to transform world space to view space, and Camera.WorldPointToScreen to transform world space to screen space. Here is an example test: [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render(camera, out worldRender, out viewRender, out screenRender, new Vector3(30, 0, 0), new Vector3(30, 40, 0)); this.ShowRenders(camera, worldRender, viewRender, screenRender); } And here's what pops up when I run this test: World space looks OK, although I suspect the z axis is going into the screen instead of towards the viewer. View space has me completely baffled. I was expecting the camera to be sitting above (0, 0) and looking towards the center of the scene. Instead, the z axis seems to be the wrong way around, and the camera is positioned in the opposite corner to what I expect! I suspect screen space will be another thing altogether, but can anyone explain what I'm doing wrong in my Camera class? UPDATE I made some progress in terms of getting things to look visually as I expect, but only through intuition: not an actual understanding of what I'm doing. Any enlightenment would be greatly appreciated. I realized that my view space was flipped both vertically and horizontally compared to what I expected, so I changed my view matrix to scale accordingly: this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom, this.zoom, 1) * Matrix.CreateScale(-1, -1, 1); I could combine the two CreateScale calls, but have left them separate for clarity. Again, I have no idea why this is necessary, but it fixed my view space: But now my screen space needs to be flipped vertically, so I modified my projection matrix accordingly: this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); And this results in what I was expecting from my first attempt: I have also just tried using Camera to render sprites via a SpriteBatch to make sure everything works there too, and it does. But the question remains: why do I need to do all this flipping of axes to get the space coordinates the way I expect? UPDATE 2 I've since improved my rendering logic in my test suite so that it supports geometries and so that lines get lighter the further away they are from the camera. I wanted to do this to avoid optical illusions and to further prove to myself that I'm looking at what I think I am. Here is an example: In this case, I have 3 geometries: a cube, a sphere, and a polyline on the top face of the cube. 
Notice how the darkening and lightening of the lines correctly identifies those portions of the geometries closer to the camera. If I remove the negative scaling I had to put in, I see: So you can see I'm still in the same boat - I still need those vertical and horizontal flips in my matrices to get things to appear correctly. In the interests of giving people a repro to play with, here is the complete code needed to generate the above. If you want to run via the test harness, just install the xunit package: Camera.cs: using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using System.Diagnostics; public sealed class Camera { private readonly Viewport viewport; private readonly Matrix projectionMatrix; private Matrix? viewMatrix; private Vector3 location; private Vector3 target; private Vector3 up; private float zoom; public Camera(Viewport viewport) { this.viewport = viewport; // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); // defaults this.location = new Vector3(this.viewport.Width / 2, this.viewport.Height, 100); this.target = new Vector3(this.viewport.Width / 2, this.viewport.Height / 2, 0); this.up = new Vector3(0, 0, 1); this.zoom = 1; } public Viewport Viewport { get { return this.viewport; } } public Vector3 Location { get { return this.location; } set { this.location = value; this.viewMatrix = null; } } public Vector3 Target { get { return this.target; } set { this.target = value; this.viewMatrix = null; } } public Vector3 Up { get { return this.up; } set { this.up = value; this.viewMatrix = null; } } public float Zoom { get { return this.zoom; } set { this.zoom = value; this.viewMatrix = null; } } public Matrix ProjectionMatrix { get { return this.projectionMatrix; } } public Matrix ViewMatrix { get { if (this.viewMatrix == null) { // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom) * Matrix.CreateScale(-1, -1, 1); } return this.viewMatrix.Value; } } public Vector2 WorldPointToScreen(Vector3 point) { var result = viewport.Project(point, this.ProjectionMatrix, this.ViewMatrix, Matrix.Identity); return new Vector2(result.X, result.Y); } public void WorldPointsToScreen(Vector3[] points, Vector2[] destination) { Debug.Assert(points != null); Debug.Assert(destination != null); Debug.Assert(points.Length == destination.Length); for (var i = 0; i < points.Length; ++i) { destination[i] = this.WorldPointToScreen(points[i]); } } } CameraFixture.cs: using Microsoft.Xna.Framework.Graphics; using System; using System.Collections.Generic; using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Media; using Xunit; using XNA = Microsoft.Xna.Framework; public sealed class CameraFixture { [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render( camera, out worldRender, out viewRender, out screenRender, new Sphere(30, 15) { WorldMatrix = XNA.Matrix.CreateTranslation(155, 50, 0) }, new Cube(30) { WorldMatrix = XNA.Matrix.CreateTranslation(75, 60, 15) }, new PolyLine(new XNA.Vector3(0, 0, 0), new XNA.Vector3(10, 10, 0), new XNA.Vector3(20, 0, 0), new XNA.Vector3(0, 0, 0)) { WorldMatrix = 
XNA.Matrix.CreateTranslation(65, 55, 30) }); this.ShowRenders(worldRender, viewRender, screenRender); } #region Supporting Fields private static readonly Pen xAxisPen = new Pen(Brushes.Red, 2); private static readonly Pen yAxisPen = new Pen(Brushes.Green, 2); private static readonly Pen zAxisPen = new Pen(Brushes.Blue, 2); private static readonly Pen viewportPen = new Pen(Brushes.Gray, 1); private static readonly Pen nonScreenSpacePen = new Pen(Brushes.Black, 0.5); private static readonly Color geometryBaseColor = Colors.Black; #endregion #region Supporting Methods private void Render(Camera camera, out DrawingVisual worldRender, out DrawingVisual viewRender, out DrawingVisual screenRender, params Geometry[] geometries) { var worldDrawingVisual = new DrawingVisual(); var viewDrawingVisual = new DrawingVisual(); var screenDrawingVisual = new DrawingVisual(); const int axisLength = 15; using (var worldDrawingContext = worldDrawingVisual.RenderOpen()) using (var viewDrawingContext = viewDrawingVisual.RenderOpen()) using (var screenDrawingContext = screenDrawingVisual.RenderOpen()) { // draw lines around the camera's viewport var viewportBounds = camera.Viewport.Bounds; var viewportLines = new Tuple<int, int, int, int>[] { Tuple.Create(viewportBounds.Left, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Top), Tuple.Create(viewportBounds.Left, viewportBounds.Top, viewportBounds.Right, viewportBounds.Top), Tuple.Create(viewportBounds.Right, viewportBounds.Top, viewportBounds.Right, viewportBounds.Bottom), Tuple.Create(viewportBounds.Right, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Bottom) }; foreach (var viewportLine in viewportLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0)); worldDrawingContext.DrawLine(viewportPen, new Point(viewportLine.Item1, viewportLine.Item2), new Point(viewportLine.Item3, viewportLine.Item4)); viewDrawingContext.DrawLine(viewportPen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(viewportPen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // draw axes var axisLines = new Tuple<int, int, int, int, int, int, Pen>[] { Tuple.Create(0, 0, 0, axisLength, 0, 0, xAxisPen), Tuple.Create(0, 0, 0, 0, axisLength, 0, yAxisPen), Tuple.Create(0, 0, 0, 0, 0, axisLength, zAxisPen) }; foreach (var axisLine in axisLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6)); worldDrawingContext.DrawLine(axisLine.Item7, new Point(axisLine.Item1, axisLine.Item2), new Point(axisLine.Item4, axisLine.Item5)); viewDrawingContext.DrawLine(axisLine.Item7, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(axisLine.Item7, new 
Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // for all points in all geometries to be rendered, find the closest and furthest away from the camera so we can lighten lines that are further away var distancesToAllGeometrySections = from geometry in geometries let geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix from section in geometry.Sections from point in new XNA.Vector3[] { section.Item1, section.Item2 } let viewPoint = XNA.Vector3.Transform(point, geometryViewMatrix) select viewPoint.Length(); var furthestDistance = distancesToAllGeometrySections.Max(); var closestDistance = distancesToAllGeometrySections.Min(); var deltaDistance = Math.Max(0.000001f, furthestDistance - closestDistance); // draw each geometry for (var i = 0; i < geometries.Length; ++i) { var geometry = geometries[i]; // there's probably a more correct name for this, but basically this gets the geometry relative to the camera so we can check how far away each point is from the camera var geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix; // we order roughly by those sections furthest from the camera to those closest, so that the closer ones "overwrite" the ones further away var orderedSections = from section in geometry.Sections let startPointRelativeToCamera = XNA.Vector3.Transform(section.Item1, geometryViewMatrix) let endPointRelativeToCamera = XNA.Vector3.Transform(section.Item2, geometryViewMatrix) let startPointDistance = startPointRelativeToCamera.Length() let endPointDistance = endPointRelativeToCamera.Length() orderby (startPointDistance + endPointDistance) descending select new { Section = section, DistanceToStart = startPointDistance, DistanceToEnd = endPointDistance }; foreach (var orderedSection in orderedSections) { var start = XNA.Vector3.Transform(orderedSection.Section.Item1, geometry.WorldMatrix); var end = XNA.Vector3.Transform(orderedSection.Section.Item2, geometry.WorldMatrix); var viewStart = XNA.Vector3.Transform(start, camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(end, camera.ViewMatrix); worldDrawingContext.DrawLine(nonScreenSpacePen, new Point(start.X, start.Y), new Point(end.X, end.Y)); viewDrawingContext.DrawLine(nonScreenSpacePen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); // screen rendering is more complicated purely because I wanted geometry to fade the further away it is from the camera // otherwise, it's very hard to tell whether the rendering is actually correct or not var startDistanceRatio = (orderedSection.DistanceToStart - closestDistance) / deltaDistance; var endDistanceRatio = (orderedSection.DistanceToEnd - closestDistance) / deltaDistance; // lerp towards white based on distance from camera, but only to a maximum of 90% var startColor = Lerp(geometryBaseColor, Colors.White, startDistanceRatio * 0.9f); var endColor = Lerp(geometryBaseColor, Colors.White, endDistanceRatio * 0.9f); var screenStart = camera.WorldPointToScreen(start); var screenEnd = camera.WorldPointToScreen(end); var brush = new LinearGradientBrush { StartPoint = new Point(screenStart.X, screenStart.Y), EndPoint = new Point(screenEnd.X, screenEnd.Y), MappingMode = BrushMappingMode.Absolute }; brush.GradientStops.Add(new GradientStop(startColor, 0)); brush.GradientStops.Add(new GradientStop(endColor, 1)); var pen = new Pen(brush, 1); brush.Freeze(); pen.Freeze(); screenDrawingContext.DrawLine(pen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } } } worldRender = worldDrawingVisual; 
viewRender = viewDrawingVisual; screenRender = screenDrawingVisual; } private static float Lerp(float start, float end, float amount) { var difference = end - start; var adjusted = difference * amount; return start + adjusted; } private static Color Lerp(Color color, Color to, float amount) { var sr = color.R; var sg = color.G; var sb = color.B; var er = to.R; var eg = to.G; var eb = to.B; var r = (byte)Lerp(sr, er, amount); var g = (byte)Lerp(sg, eg, amount); var b = (byte)Lerp(sb, eb, amount); return Color.FromArgb(255, r, g, b); } private void ShowRenders(DrawingVisual worldRender, DrawingVisual viewRender, DrawingVisual screenRender) { var itemsControl = new ItemsControl(); itemsControl.Items.Add(new HeaderedContentControl { Header = "World", Content = new DrawingVisualHost(worldRender)}); itemsControl.Items.Add(new HeaderedContentControl { Header = "View", Content = new DrawingVisualHost(viewRender) }); itemsControl.Items.Add(new HeaderedContentControl { Header = "Screen", Content = new DrawingVisualHost(screenRender) }); var window = new Window { Title = "Renders", Content = itemsControl, ShowInTaskbar = true, SizeToContent = SizeToContent.WidthAndHeight }; window.ShowDialog(); } #endregion #region Supporting Types // stupidly simple 3D geometry class, consisting of a series of sections that will be connected by lines private abstract class Geometry { public abstract IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get; } public XNA.Matrix WorldMatrix { get; set; } } private sealed class Line : Geometry { private readonly XNA.Vector3 magnitude; public Line(XNA.Vector3 magnitude) { this.magnitude = magnitude; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { yield return Tuple.Create(XNA.Vector3.Zero, this.magnitude); } } } private sealed class PolyLine : Geometry { private readonly XNA.Vector3[] points; public PolyLine(params XNA.Vector3[] points) { this.points = points; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { if (this.points.Length < 2) { yield break; } var end = this.points[0]; for (var i = 1; i < this.points.Length; ++i) { var start = end; end = this.points[i]; yield return Tuple.Create(start, end); } } } } private sealed class Cube : Geometry { private readonly float size; public Cube(float size) { this.size = size; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var halfSize = this.size / 2; var frontBottomLeft = new XNA.Vector3(-halfSize, halfSize, -halfSize); var frontBottomRight = new XNA.Vector3(halfSize, halfSize, -halfSize); var frontTopLeft = new XNA.Vector3(-halfSize, halfSize, halfSize); var frontTopRight = new XNA.Vector3(halfSize, halfSize, halfSize); var backBottomLeft = new XNA.Vector3(-halfSize, -halfSize, -halfSize); var backBottomRight = new XNA.Vector3(halfSize, -halfSize, -halfSize); var backTopLeft = new XNA.Vector3(-halfSize, -halfSize, halfSize); var backTopRight = new XNA.Vector3(halfSize, -halfSize, halfSize); // front face yield return Tuple.Create(frontBottomLeft, frontBottomRight); yield return Tuple.Create(frontBottomLeft, frontTopLeft); yield return Tuple.Create(frontTopLeft, frontTopRight); yield return Tuple.Create(frontTopRight, frontBottomRight); // left face yield return Tuple.Create(frontTopLeft, backTopLeft); yield return Tuple.Create(backTopLeft, backBottomLeft); yield return Tuple.Create(backBottomLeft, frontBottomLeft); // right face yield return Tuple.Create(frontTopRight, backTopRight); yield return Tuple.Create(backTopRight, 
backBottomRight); yield return Tuple.Create(backBottomRight, frontBottomRight); // back face yield return Tuple.Create(backBottomLeft, backBottomRight); yield return Tuple.Create(backTopLeft, backTopRight); } } } private sealed class Sphere : Geometry { private readonly float radius; private readonly int subsections; public Sphere(float radius, int subsections) { this.radius = radius; this.subsections = subsections; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var latitudeLines = this.subsections; var longitudeLines = this.subsections; // see http://stackoverflow.com/a/4082020/5380 var results = from latitudeLine in Enumerable.Range(0, latitudeLines) from longitudeLine in Enumerable.Range(0, longitudeLines) let latitudeRatio = latitudeLine / (float)latitudeLines let longitudeRatio = longitudeLine / (float)longitudeLines let nextLatitudeRatio = (latitudeLine + 1) / (float)latitudeLines let nextLongitudeRatio = (longitudeLine + 1) / (float)longitudeLines let z1 = Math.Cos(Math.PI * latitudeRatio) let z2 = Math.Cos(Math.PI * nextLatitudeRatio) let x1 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y1 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x3 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * nextLongitudeRatio) let y3 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * nextLongitudeRatio) let start = new XNA.Vector3((float)x1 * radius, (float)y1 * radius, (float)z1 * radius) let firstEnd = new XNA.Vector3((float)x2 * radius, (float)y2 * radius, (float)z2 * radius) let secondEnd = new XNA.Vector3((float)x3 * radius, (float)y3 * radius, (float)z1 * radius) select new { First = Tuple.Create(start, firstEnd), Second = Tuple.Create(start, secondEnd) }; foreach (var result in results) { yield return result.First; yield return result.Second; } } } } #endregion }

  • Entity Framework Code-First, OData & Windows Phone Client

    - by Jon Galloway
    Entity Framework Code-First is the coolest thing since sliced bread, Windows  Phone is the hottest thing since Tickle-Me-Elmo and OData is just too great to ignore. As part of the Full Stack project, we wanted to put them together, which turns out to be pretty easy… once you know how.   EF Code-First CTP5 is available now and there should be very few breaking changes in the release edition, which is due early in 2011.  Note: EF Code-First evolved rapidly and many of the existing documents and blog posts which were written with earlier versions, may now be obsolete or at least misleading.   Code-First? With traditional Entity Framework you start with a database and from that you generate “entities” – classes that bridge between the relational database and your object oriented program. With Code-First (Magic-Unicorn) (see Hanselman’s write up and this later write up by Scott Guthrie) the Entity Framework looks at classes you created and says “if I had created these classes, the database would have to have looked like this…” and creates the database for you! By deriving your entity collections from DbSet and exposing them via a class that derives from DbContext, you "turn on" database backing for your POCO with a minimum of code and no hidden designer or configuration files. POCO == Plain Old CLR Objects Your entity objects can be used throughout your applications - in web applications, console applications, Silverlight and Windows Phone applications, etc. In our case, we'll want to read and update data from a Windows Phone client application, so we'll expose the entities through a DataService and hook the Windows Phone client application to that data via proxies.  Piece of Pie.  Easy as cake. The Demo Architecture To see this at work, we’ll create an ASP.NET/MVC application which will act as the host for our Data Service.  We’ll create an incredibly simple data layer using EF Code-First on top of SQLCE4 and we’ll expose the data in a WCF Data Service using the oData protocol.  Our Windows Phone 7 client will instantiate  the data context via a URI and load the data asynchronously. Setting up the Server project with MVC 3, EF Code First, and SQL CE 4 Create a new application of type ASP.NET MVC 3 and name it DeadSimpleServer.  We need to add the latest SQLCE4 and Entity Framework Code First CTP's to our project. Fortunately, NuGet makes that really easy. Open the Package Manager Console (View / Other Windows / Package Manager Console) and type in "Install-Package EFCodeFirst.SqlServerCompact" at the PM> command prompt. Since NuGet handles dependencies for you, you'll see that it installs everything you need to use Entity Framework Code First in your project. PM> install-package EFCodeFirst.SqlServerCompact 'SQLCE (= 4.0.8435.1)' not installed. Attempting to retrieve dependency from source... Done 'EFCodeFirst (= 0.8)' not installed. Attempting to retrieve dependency from source... Done 'WebActivator (= 1.0.0.0)' not installed. Attempting to retrieve dependency from source... Done You are downloading SQLCE from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. 
Successfully installed 'SQLCE 4.0.8435.1' You are downloading EFCodeFirst from Microsoft, the license agreement to which is available at http://go.microsoft.com/fwlink/?LinkID=206497. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst 0.8' Successfully installed 'WebActivator 1.0.0.0' You are downloading EFCodeFirst.SqlServerCompact from Microsoft, the license agreement to which is available at http://173.203.67.148/licenses/SQLCE/EULA_ENU.rtf. Check the package for additional dependencies, which may come with their own license agreement(s). Your use of the package and dependencies constitutes your acceptance of their license agreements. If you do not accept the license agreement(s), then delete the relevant components from your device. Successfully installed 'EFCodeFirst.SqlServerCompact 0.8' Successfully added 'SQLCE 4.0.8435.1' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst 0.8' to EfCodeFirst-CTP5 Successfully added 'WebActivator 1.0.0.0' to EfCodeFirst-CTP5 Successfully added 'EFCodeFirst.SqlServerCompact 0.8' to EfCodeFirst-CTP5 Note: We're using SQLCE 4 with Entity Framework here because they work really well together from a development scenario, but you can of course use Entity Framework Code First with other databases supported by Entity framework. Creating The Model using EF Code First Now we can create our model class. Right-click the Models folder and select Add/Class. Name the Class Person.cs and add the following code: using System.Data.Entity; namespace DeadSimpleServer.Models { public class Person { public int ID { get; set; } public string Name { get; set; } } public class PersonContext : DbContext { public DbSet<Person> People { get; set; } } } Notice that the entity class Person has no special interfaces or base class. There's nothing special needed to make it work - it's just a POCO. The context we'll use to access the entities in the application is called PersonContext, but you could name it anything you wanted. The important thing is that it inherits DbContext and contains one or more DbSet which holds our entity collections. Adding Seed Data We need some testing data to expose from our service. The simplest way to get that into our database is to modify the CreateCeDatabaseIfNotExists class in AppStart_SQLCEEntityFramework.cs by adding some seed data to the Seed method: protected virtual void Seed( TContext context ) { var personContext = context as PersonContext; personContext.People.Add( new Person { ID = 1, Name = "George Washington" } ); personContext.People.Add( new Person { ID = 2, Name = "John Adams" } ); personContext.People.Add( new Person { ID = 3, Name = "Thomas Jefferson" } ); personContext.SaveChanges(); } The CreateCeDatabaseIfNotExists class name is pretty self-explanatory - when our DbContext is accessed and the database isn't found, a new one will be created and populated with the data in the Seed method. 
There's one more step to make that work - we need to uncomment a line in the Start method at the top of of the AppStart_SQLCEEntityFramework class and set the context name, as shown here, public static class AppStart_SQLCEEntityFramework { public static void Start() { DbDatabase.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0"); // Sets the default database initialization code for working with Sql Server Compact databases // Uncomment this line and replace CONTEXT_NAME with the name of your DbContext if you are // using your DbContext to create and manage your database DbDatabase.SetInitializer(new CreateCeDatabaseIfNotExists<PersonContext>()); } } Now our database and entity framework are set up, so we can expose data via WCF Data Services. Note: This is a bare-bones implementation with no administration screens. If you'd like to see how those are added, check out The Full Stack screencast series. Creating the oData Service using WCF Data Services Add a new WCF Data Service to the project (right-click the project / Add New Item / Web / WCF Data Service). We’ll be exposing all the data as read/write.  Remember to reconfigure to control and minimize access as appropriate for your own application. Open the code behind for your service. In our case, the service was called PersonTestDataService.svc so the code behind class file is PersonTestDataService.svc.cs. using System.Data.Services; using System.Data.Services.Common; using System.ServiceModel; using DeadSimpleServer.Models; namespace DeadSimpleServer { [ServiceBehavior( IncludeExceptionDetailInFaults = true )] public class PersonTestDataService : DataService<PersonContext> { // This method is called only once to initialize service-wide policies. public static void InitializeService( DataServiceConfiguration config ) { config.SetEntitySetAccessRule( "*", EntitySetRights.All ); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; config.UseVerboseErrors = true; } } } We're enabling a few additional settings to make it easier to debug if you run into trouble. The ServiceBehavior attribute is set to include exception details in faults, and we're using verbose errors. You can remove both of these when your service is working, as your public production service shouldn't be revealing exception information. 
You can view the output of the service by running the application and browsing to http://localhost:[portnumber]/PersonTestDataService.svc/: <service xml:base="http://localhost:49786/PersonTestDataService.svc/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:app="http://www.w3.org/2007/app" xmlns="http://www.w3.org/2007/app"> <workspace> <atom:title>Default</atom:title> <collection href="People"> <atom:title>People</atom:title> </collection> </workspace> </service> This indicates that the service exposes one collection, which is accessible by browsing to http://localhost:[portnumber]/PersonTestDataService.svc/People <?xml version="1.0" encoding="iso-8859-1" standalone="yes"?> <feed xml:base=http://localhost:49786/PersonTestDataService.svc/ xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> <title type="text">People</title> <id>http://localhost:49786/PersonTestDataService.svc/People</id> <updated>2010-12-29T01:01:50Z</updated> <link rel="self" title="People" href="People" /> <entry> <id>http://localhost:49786/PersonTestDataService.svc/People(1)</id> <title type="text"></title> <updated>2010-12-29T01:01:50Z</updated> <author> <name /> </author> <link rel="edit" title="Person" href="People(1)" /> <category term="DeadSimpleServer.Models.Person" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> <content type="application/xml"> <m:properties> <d:ID m:type="Edm.Int32">1</d:ID> <d:Name>George Washington</d:Name> </m:properties> </content> </entry> <entry> ... </entry> </feed> Let's recap what we've done so far. But enough with services and XML - let's get this into our Windows Phone client application. Creating the DataServiceContext for the Client Use the latest DataSvcUtil.exe from http://odata.codeplex.com. As of today, that's in this download: http://odata.codeplex.com/releases/view/54698 You need to run it with a few options: /uri - This will point to the service URI. In this case, it's http://localhost:59342/PersonTestDataService.svc  Pick up the port number from your running server (e.g., the server formerly known as Cassini). /out - This is the DataServiceContext class that will be generated. You can name it whatever you'd like. /Version - should be set to 2.0 /DataServiceCollection - Include this flag to generate collections derived from the DataServiceCollection base, which brings in all the ObservableCollection goodness that handles your INotifyPropertyChanged events for you. Here's the console session from when we ran it: <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> Next, to keep things simple, change the Binding on the two TextBlocks within the DataTemplate to Name and ID, <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}" SelectionChanged="MainListBox_SelectionChanged"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel Margin="0,0,0,17" Width="432"> <TextBlock Text="{Binding Name}" TextWrapping="Wrap" Style="{StaticResource PhoneTextExtraLargeStyle}" /> <TextBlock Text="{Binding ID}" TextWrapping="Wrap" Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}" /> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> Getting The Context In the code-behind you’ll first declare a member variable to hold the context from the Entity Framework. This is named using convention over configuration. 
The db type is Person and the context is of type PersonContext, You initialize it by providing the URI, in this case using the URL obtained from the Cassini web server, PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); Create a second member variable of type DataServiceCollection<Person> but do not initialize it, DataServiceCollection<Person> people; In the constructor you’ll initialize the DataServiceCollection using the PersonContext, public MainPage() { InitializeComponent(); people = new DataServiceCollection<Person>( context ); Finally, you’ll load the people collection using the LoadAsync method, passing in the fully specified URI for the People collection in the web service, people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); Note that this method runs asynchronously and when it is finished the people  collection is already populated. Thus, since we didn’t need or want to override any of the behavior we don’t implement the LoadCompleted. You can use the LoadCompleted event if you need to do any other UI updates, but you don't need to. The final code is as shown below: using System; using System.Data.Services.Client; using System.Windows; using System.Windows.Controls; using DeadSimpleServer.Models; using Microsoft.Phone.Controls; namespace WindowsPhoneODataTest { public partial class MainPage : PhoneApplicationPage { PersonContext context = new PersonContext( new Uri( "http://localhost:49786/PersonTestDataService.svc/" ) ); DataServiceCollection<Person> people; // Constructor public MainPage() { InitializeComponent(); // Set the data context of the listbox control to the sample data // DataContext = App.ViewModel; people = new DataServiceCollection<Person>( context ); people.LoadAsync( new Uri( "http://localhost:49786/PersonTestDataService.svc/People" ) ); DataContext = people; this.Loaded += new RoutedEventHandler( MainPage_Loaded ); } // Handle selection changed on ListBox private void MainListBox_SelectionChanged( object sender, SelectionChangedEventArgs e ) { // If selected index is -1 (no selection) do nothing if ( MainListBox.SelectedIndex == -1 ) return; // Navigate to the new page NavigationService.Navigate( new Uri( "/DetailsPage.xaml?selectedItem=" + MainListBox.SelectedIndex, UriKind.Relative ) ); // Reset selected index to -1 (no selection) MainListBox.SelectedIndex = -1; } // Load data for the ViewModel Items private void MainPage_Loaded( object sender, RoutedEventArgs e ) { if ( !App.ViewModel.IsDataLoaded ) { App.ViewModel.LoadData(); } } } } With people populated we can set it as the DataContext and run the application; you’ll find that the Name and ID are displayed in the list on the Mainpage. Here's how the pieces in the client fit together: Complete source code available here

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities a query like this is easy to construct:[TestMethod] public void QueryLocationsTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 kilometers ordered by distance var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) < 5000) .OrderBy( loc=> loc.Location.Distance(sourcePoint) ) .Select( loc=> new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance); } } This example produces: 301 15th Street, Hood River (0 meters)The Hatchery, Bingen (809 meters)Kaze Sushi, Hood River (1,074 meters)   The first point in the database is the same as my source point I'm comparing against so the distance is 0. The other two are within the 5 mile radius, while the Arlington location which is 65 miles or so out is not returned. The result is ordered by distance from closest to furthest away. In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5km of the source point using the Location.Distance() function, which takes a source point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another database DbGeography property (say when you have to points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by the distance to provide a result that shows locations from closest to furthest away which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this. Meters vs. Miles As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (can be found in GeoUtils.cs of the sample project):/// <summary> /// Convert meters to miles /// </summary> /// <param name="meters"></param> /// <returns></returns> public static double MetersToMiles(double? meters) { if (meters == null) return 0F; return meters.Value * 0.000621371192; } /// <summary> /// Convert miles to meters /// </summary> /// <param name="miles"></param> /// <returns></returns> public static double MilesToMeters(double? miles) { if (miles == null) return 0; return miles.Value * 1609.344; } Using these two helpers you can query on miles like this:[TestMethod] public void QueryLocationsMilesTest() { var sourcePoint = CreatePoint(45.712113, -121.527200); var context = new GeoLocationContext(); // find any locations within 5 miles ordered by distance var fiveMiles = GeoUtils.MilesToMeters(5); var matches = context.Locations .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles) .OrderBy(loc => loc.Location.Distance(sourcePoint)) .Select(loc => new { Address = loc.Address, Distance = loc.Location.Distance(sourcePoint) }); Assert.IsTrue(matches.Count() > 0); foreach (var location in matches) { Console.WriteLine("{0} ({1:n1} miles)", location.Address, GeoUtils.MetersToMiles(location.Distance)); } } which produces: 301 15th Street, Hood River (0.0 miles)The Hatchery, Bingen (0.5 miles)Kaze Sushi, Hood River (0.7 miles) Nice 'n simple. 
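One more querying note before moving on: as mentioned above, you aren't limited to comparing against a pre-defined source point - two DbGeography properties stored in the database can be compared directly. The following is only a sketch using a hypothetical Route entity and a Routes DbSet (neither is part of the sample project), showing how the distance between two stored points could be computed and then converted with the GeoUtils helper:

// Hypothetical entity - not in the sample project - with two stored points.
public class Route
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DbGeography Start { get; set; }
    public DbGeography End { get; set; }
}

// Distance between the two stored points, evaluated by SQL Server (meters for SRID 4326).
// Assumes the context exposes a DbSet<Route> named Routes.
var routeLengths = context.Routes
    .Select(r => new { r.Name, Meters = r.Start.Distance(r.End) })
    .OrderBy(x => x.Meters)
    .ToList();

foreach (var r in routeLengths)
    Console.WriteLine("{0}: {1:n1} miles", r.Name, GeoUtils.MetersToMiles(r.Meters));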
.NET 4.5 Only Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4 which ships in the same NuGet package or installer) and requires .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity which is a CLR assembly and is only updated by major versions of .NET. Why this decision was made to add these types to System.Data.Entity rather than to the frequently updated EntityFramework assembly that would have possibly made this work in .NET 4.0 is beyond me, especially given that there are no native .NET framework spatial types to begin with. I find it also odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live on those assemblies. They will also work for general purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services which in turn don't work with Entity framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as a I may be, for EF specific code the Entity framework specific types are easy to use and work well. Working with pre-.NET 4.5 Entity Framework and Spatial Data If you can't go to .NET 4.5 just yet you can also still use spatial features in Entity Framework, but it's a lot more work as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same TSQL syntax I showed earlier using Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:[TestMethod] public void RawSqlEfAddTest() { string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT({0} {1})', 4326),@p0 )"; var sql = string.Format(sqlFormat,-121.527200, 45.712113); Console.WriteLine(sql); var context = new GeoLocationContext(); Assert.IsTrue(context.Database.ExecuteSqlCommand(sql,"301 N. 15th Street") > 0); } Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format here, which usually would be a bad practice but is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax as the longitude and latitude parameters are embedded into a string. Rest assured it's required as the following does not work:string sqlFormat = @"insert into GeoLocations( Location, Address) values ( geography::STGeomFromText('POINT(@p0 @p1)', 4326),@p2 )";context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street") Explicitly assigning the point value with string.format works however. There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string (which can then be parsed to get Latitude and Longitude) and you can return calculated values like distance. 
Here's an example of how to retrieve some geo data into a resultset using EF's SqlQuery method:[TestMethod] public void RawSqlEfQueryTest() { var sqlFormat = @" DECLARE @s geography SET @s = geography::STGeomFromText('POINT({0} {1})', 4326); SELECT Address, Location.ToString() as GeoString, @s.STDistance(Location) as Distance FROM GeoLocations ORDER BY Distance"; var sql = string.Format(sqlFormat, -121.527200, 45.712113); var context = new GeoLocationContext(); var locations = context.Database.SqlQuery<ResultData>(sql); Assert.IsTrue(locations.Count() > 0); foreach (var location in locations) { Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance); } } public class ResultData { public string GeoString { get; set; } public double Distance { get; set; } public string Address { get; set; } } Hopefully you don't have to resort to this approach as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before I typically ended up retrieving only the PKs and then running another query with just the PKs to retrieve the actual underlying DbContext entities. This was very inefficient and tedious but it did work.
Summary For the current project I'm working on we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries and it was worth taking a chance with pre-release code to get this ease of integration as opposed to manually falling back to stored procedures or raw SQL string queries to return spatial-specific results. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)
Resources Download Sample Project © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET

    Read the article

  • NoSQL with RavenDB and ASP.NET MVC - Part 2

    - by shiju
    In my previous post, we have discussed on how to work with RavenDB document database in an ASP.NET MVC application. We have setup RavenDB for our ASP.NET MVC application and did basic CRUD operations against a simple domain entity. In this post, let’s discuss on domain entity with deep object graph and how to query against RavenDB documents using Indexes.Let's create two domain entities for our demo ASP.NET MVC appplication  public class Category {       public string Id { get; set; }     [Required(ErrorMessage = "Name Required")]     [StringLength(25, ErrorMessage = "Must be less than 25 characters")]     public string Name { get; set;}     public string Description { get; set; }     public List<Expense> Expenses { get; set; }       public Category()     {         Expenses = new List<Expense>();     } }    public class Expense {       public string Id { get; set; }     public Category Category { get; set; }     public string  Transaction { get; set; }     public DateTime Date { get; set; }     public double Amount { get; set; }   }  We have two domain entities - Category and Expense. A single category contains a list of expense transactions and every expense transaction should have a Category.Let's create  ASP.NET MVC view model  for Expense transaction public class ExpenseViewModel {     public string Id { get; set; }       public string CategoryId { get; set; }       [Required(ErrorMessage = "Transaction Required")]            public string Transaction { get; set; }       [Required(ErrorMessage = "Date Required")]            public DateTime Date { get; set; }       [Required(ErrorMessage = "Amount Required")]     public double Amount { get; set; }       public IEnumerable<SelectListItem> Category { get; set; } } Let's create a contract type for Expense Repository  public interface IExpenseRepository {     Expense Load(string id);     IEnumerable<Expense> GetExpenseTransactions(DateTime startDate,DateTime endDate);     void Save(Expense expense,string categoryId);     void Delete(string id);  } Let's create a concrete type for Expense Repository for handling CRUD operations. 
public class ExpenseRepository : IExpenseRepository {   private IDocumentSession session; public ExpenseRepository() {         session = MvcApplication.CurrentSession; } public Expense Load(string id) {     return session.Load<Expense>(id); } public IEnumerable<Expense> GetExpenseTransactions(DateTime startDate, DateTime endDate) {             //Querying using the Index name "ExpenseTransactions"     //filtering with dates     var expenses = session.LuceneQuery<Expense>("ExpenseTransactions")         .WaitForNonStaleResults()         .Where(exp => exp.Date >= startDate && exp.Date <= endDate)         .ToArray();     return expenses; } public void Save(Expense expense,string categoryId) {     var category = session.Load<Category>(categoryId);     if (string.IsNullOrEmpty(expense.Id))     {         //new expense transaction         expense.Category = category;         session.Store(expense);     }     else     {         //modifying an existing expense transaction         var expenseToEdit = Load(expense.Id);         //Copy values to  expenseToEdit         ModelCopier.CopyModel(expense, expenseToEdit);         //set category object         expenseToEdit.Category = category;       }     //save changes     session.SaveChanges(); } public void Delete(string id) {     var expense = Load(id);     session.Delete<Expense>(expense);     session.SaveChanges(); }   }  Insert/Update Expense Transaction The Save method is used for both insert a new expense record and modifying an existing expense transaction. For a new expense transaction, we store the expense object with associated category into document session object and load the existing expense object and assign values to it for editing a existing record.  public void Save(Expense expense,string categoryId) {     var category = session.Load<Category>(categoryId);     if (string.IsNullOrEmpty(expense.Id))     {         //new expense transaction         expense.Category = category;         session.Store(expense);     }     else     {         //modifying an existing expense transaction         var expenseToEdit = Load(expense.Id);         //Copy values to  expenseToEdit         ModelCopier.CopyModel(expense, expenseToEdit);         //set category object         expenseToEdit.Category = category;       }     //save changes     session.SaveChanges(); } Querying Expense transactions   public IEnumerable<Expense> GetExpenseTransactions(DateTime startDate, DateTime endDate) {             //Querying using the Index name "ExpenseTransactions"     //filtering with dates     var expenses = session.LuceneQuery<Expense>("ExpenseTransactions")         .WaitForNonStaleResults()         .Where(exp => exp.Date >= startDate && exp.Date <= endDate)         .ToArray();     return expenses; }  The GetExpenseTransactions method returns expense transactions using a LINQ query expression with a Date comparison filter. The Lucene Query is using a index named "ExpenseTransactions" for getting the result set. In RavenDB, Indexes are LINQ queries stored in the RavenDB server and would be  executed on the background and will perform query against the JSON documents. Indexes will be working with a lucene query expression or a set operation. Indexes are composed using a Map and Reduce function. Check out Ayende's blog post on Map/Reduce We can create index using RavenDB web admin tool as well as programmitically using its Client API. The below shows the screen shot of creating index using web admin tool. 
We can also create Indexes using Raven Cleint API as shown in the following code documentStore.DatabaseCommands.PutIndex("ExpenseTransactions",     new IndexDefinition<Expense,Expense>() {     Map = Expenses => from exp in Expenses                     select new { exp.Date } });  In the Map function, we used a Linq expression as shown in the following from exp in docs.Expensesselect new { exp.Date };We have not used a Reduce function for the above index. A Reduce function is useful while performing aggregate functions based on the results from the Map function. Indexes can be use with set operations of RavenDB.SET OperationsUnlike other document databases, RavenDB supports set based operations that lets you to perform updates, deletes and inserts to the bulk_docs endpoint of RavenDB. For doing this, you just pass a query to a Index as shown in the following commandDELETE http://localhost:8080/bulk_docs/ExpenseTransactions?query=Date:20100531The above command using the Index named "ExpenseTransactions" for querying the documents with Date filter and  will delete all the documents that match the query criteria. The above command is equivalent of the following queryDELETE FROM ExpensesWHERE Date='2010-05-31' Controller & ActionsWe have created Expense Repository class for performing CRUD operations for the Expense transactions. Let's create a controller class for handling expense transactions.   public class ExpenseController : Controller { private ICategoryRepository categoyRepository; private IExpenseRepository expenseRepository; public ExpenseController(ICategoryRepository categoyRepository, IExpenseRepository expenseRepository) {     this.categoyRepository = categoyRepository;     this.expenseRepository = expenseRepository; } //Get Expense transactions based on dates public ActionResult Index(DateTime? StartDate, DateTime? 
EndDate) {     //If date is not passed, take current month's first and last dte     DateTime dtNow;     dtNow = DateTime.Today;     if (!StartDate.HasValue)     {         StartDate = new DateTime(dtNow.Year, dtNow.Month, 1);         EndDate = StartDate.Value.AddMonths(1).AddDays(-1);     }     //take last date of startdate's month, if endate is not passed     if (StartDate.HasValue && !EndDate.HasValue)     {         EndDate = (new DateTime(StartDate.Value.Year, StartDate.Value.Month, 1)).AddMonths(1).AddDays(-1);     }       var expenses = expenseRepository.GetExpenseTransactions(StartDate.Value, EndDate.Value);     if (Request.IsAjaxRequest())     {           return PartialView("ExpenseList", expenses);     }     ViewData.Add("StartDate", StartDate.Value.ToShortDateString());     ViewData.Add("EndDate", EndDate.Value.ToShortDateString());             return View(expenses);            }   // GET: /Expense/Edit public ActionResult Edit(string id) {       var expenseModel = new ExpenseViewModel();     var expense = expenseRepository.Load(id);     ModelCopier.CopyModel(expense, expenseModel);     var categories = categoyRepository.GetCategories();     expenseModel.Category = categories.ToSelectListItems(expense.Category.Id.ToString());                    return View("Save", expenseModel);          }   // // GET: /Expense/Create   public ActionResult Create() {     var expenseModel = new ExpenseViewModel();               var categories = categoyRepository.GetCategories();     expenseModel.Category = categories.ToSelectListItems("-1");     expenseModel.Date = DateTime.Today;     return View("Save", expenseModel); }   // // POST: /Expense/Save // Insert/Update Expense Tansaction [HttpPost] public ActionResult Save(ExpenseViewModel expenseViewModel) {     try     {         if (!ModelState.IsValid)         {               var categories = categoyRepository.GetCategories();                 expenseViewModel.Category = categories.ToSelectListItems(expenseViewModel.CategoryId);                               return View("Save", expenseViewModel);         }           var expense=new Expense();         ModelCopier.CopyModel(expenseViewModel, expense);          expenseRepository.Save(expense, expenseViewModel.CategoryId);                       return RedirectToAction("Index");     }     catch     {         return View();     } } //Delete a Expense Transaction public ActionResult Delete(string id) {     expenseRepository.Delete(id);     return RedirectToAction("Index");     }     }     Download the Source - You can download the source code from http://ravenmvc.codeplex.com
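As a follow-up to the index discussion above: the ExpenseTransactions index only defines a Map function. Purely as an illustration (this is not part of the sample download, the index name and reduce-result class are invented here, and the exact client syntax can vary between RavenDB builds), a Map/Reduce index that totals expense amounts per category might be sketched like this:

// Sketch only - the reduce result type is invented for the example.
public class CategoryTotal
{
    public string Category { get; set; }
    public double Amount { get; set; }
}

documentStore.DatabaseCommands.PutIndex("ExpenseTotalsByCategory",
    new IndexDefinition<Expense, CategoryTotal>()
    {
        // Map: project each expense to its category name and amount
        Map = expenses => from exp in expenses
                          select new { Category = exp.Category.Name, Amount = exp.Amount },
        // Reduce: group the mapped results and sum the amounts per category
        Reduce = results => from result in results
                            group result by result.Category into g
                            select new { Category = g.Key, Amount = g.Sum(x => x.Amount) }
    });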

    Read the article

  • An Honest look at SharePoint Web Services

    - by juanlarios
    INTRODUCTION If you are a SharePoint developer you know that there are two basic ways to develop against SharePoint: 1) the object model, and 2) web services. The SharePoint object model has the advantage of being quite rich. Anything you can do through the SharePoint UI as an administrator or end user, you can do through the object model. In fact everything that is done through the UI is done through the object model behind the scenes. The major disadvantage to getting at SharePoint this way is that the code needs to run on the server. This means that all web parts, event receivers, features, etc… all of this is code that is deployed to the server. The second way to get to SharePoint is through the built-in web services. There are many articles on how to call these web services, how to authenticate to them and how to interact with them. The basic idea is that a remote application or process can contact SharePoint through a web service. Lots has been written about how great these web services are. This article is written to document the limitations and some of the issues and frustrations of working with SharePoint's built-in web services. Ultimately, for the tasks I was given, the built-in web services did not suffice. My evaluation of the built-in services was compared against creating my own WCF service to do what I needed. The current project I'm working on involved several "integration points". A remote application, installed on a separate server, was to contact SharePoint and perform a task or operation. So I decided to start up Visual Studio and build a DLL with basically two layers of logic: an integration layer and a data layer. A good friend of mine pointed me to the SOLID principles and referred me to some videos and tutorials about them. I decided to implement the methodology (although a lot of the principles are common sense and I had already incorporated them in my coding practices). I was to deliver this DLL to the application team and they would simply call the methods it exposed and voila! it would do some task or operation in SharePoint.
SOLUTION My integration layer implemented an interface that defined some of the basic integration tasks that I was to put together. My data layer was about the same: it implemented an interface with some of the tasks that I was going to develop. This gave me the opportunity to develop different data layers, ultimately different ways to get at SharePoint if I needed to - a classic SOLID principle. In this case it proved to be quite helpful because I wrote one data layer completely implementing the SharePoint built-in web services and another implementing my own WCF service. I should mention there is another layer underneath the data layer. In referencing SharePoint or WCF services in my Visual Studio project I created a class for every web service. So for example, if I used List.asmx, I created a class called "DocumentRetrieval"; this class would do the grunt work to connect to the correct URL, perform the basic operation of contacting the service and so on. If I used View.asmx, I implemented a class called "ViewRetrieval" with the same idea as the last class, but it would interact with all the operations in View.asmx. This gave my data layer the ability to perform multiple calls without really worrying about the grunt work each class performs. This again is a classic SOLID principle. So, in order to compare them side by side, we can look at both data layers and what is involved in each.
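To make that layering a little more concrete, here is a rough sketch of the kind of contracts described above. The names are invented for illustration and are not the article's actual types; the point is simply that the integration layer only depends on an interface, so the web-service-based and WCF-based data layers can be swapped:

// Illustrative only - invented names, not the article's real classes.
public interface IProjectDataLayer
{
    bool DocLibExists(string listName, ref ExceptionContract exContract);
    void CreateLibrary(string listName, int templateID, ref ExceptionContract exContract);
}

// e.g. class WebServiceDataLayer : IProjectDataLayer   (wraps List.asmx and friends)
//      class WcfServiceDataLayer : IProjectDataLayer   (wraps the custom WCF service)

public class ProjectIntegration
{
    private readonly IProjectDataLayer _dataLayer;

    public ProjectIntegration(IProjectDataLayer dataLayer)
    {
        _dataLayer = dataLayer;   // either implementation can be injected here
    }

    public void EnsureProject(string listName, int templateID)
    {
        ExceptionContract error = null;
        if (!_dataLayer.DocLibExists(listName, ref error))
            _dataLayer.CreateLibrary(listName, templateID, ref error);
    }
}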
Lets take a look at the "Create Project" task or operation. The integration point is described as , "dll is to provide a way to create a project in SharePoint". Projects , in this case are basically document libraries. I am to implement a way in which a remote application can create a document library in SharePoint. Easy enough right? Use the list.asmx Web service in SharePoint. So here we go! Lets take a look at the code. I added the List.asmx web service reference to my project and this is the class that contacts it:  class DocumentRetrieval     {         private ListsSoapClient _service;      d   private bool _impersonation;         public DocumentRetrieval(bool impersonation, string endpt)         {             _service = new ListsSoapClient();             this.SetEndPoint(string.Format("{0}/{1}", endpt, ConfigurationManager.AppSettings["List"]));             _impersonation = impersonation;             if (_impersonation)             {                 _service.ClientCredentials.Windows.ClientCredential.Password = ConfigurationManager.AppSettings["password"];                 _service.ClientCredentials.Windows.ClientCredential.UserName = ConfigurationManager.AppSettings["username"];                 _service.ClientCredentials.Windows.AllowedImpersonationLevel =                     System.Security.Principal.TokenImpersonationLevel.Impersonation;             }     private void SetEndPoint(string p)          {             _service.Endpoint.Address = new EndpointAddress(p);          }          /// <summary>         /// Creates a document library with specific name and templateID         /// </summary>         /// <param name="listName">New list name</param>         /// <param name="templateID">Template ID</param>         /// <returns></returns>         public XmlElement CreateLibrary(string listName, int templateID, ref ExceptionContract exContract)         {             XmlDocument sample = new XmlDocument();             XmlElement viewCol = sample.CreateElement("Empty");             try             {                 _service.Open();                 viewCol = _service.AddList(listName, "", templateID);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/CreateLibrary", ex.GetType(), "Connection Error", ex.StackTrace, ExceptionContract.ExceptionCode.error);                             }finally             {                 _service.Close();             }                                      return viewCol;         } } There was a lot more in this class (that I am not including) because i was reusing the grunt work and making other operations with LIst.asmx, For example, updating content types, changing or configuring lists or document libraries. One of the first things I noticed about working with the built in services is that you are really at the mercy of what is available to you. Before creating a document library (Project) I wanted to expose a IsProjectExisting method. This way the integration or data layer could recognize if a library already exists. Well there is no service call or method available to do that check. 
So this is what I wrote:   public bool DocLibExists(string listName, ref ExceptionContract exContract)         {             try             {                 var allLists = _service.GetListCollection();                                return allLists.ChildNodes.OfType<XmlElement>().ToList().Exists(x => x.Attributes["Title"].Value ==listName);             }             catch (Exception ex)             {                 exContract = new ExceptionContract("DocumentRetrieval/GetList/GetListWSCall", ex.GetType(), "Unable to Retrieve List Collection", ex.StackTrace, ExceptionContract.ExceptionCode.error);             }             return false;         } This really just gets an XMLElement with all the lists. It was then up to me to sift through the clutter and noise and see if Document library already existed. This took a little bit of getting used to. Now instead of working with code, you are working with XMLElement response format from web service. I wrote a LINQ query to go through and find if the attribute "Title" existed and had a value of the listname then it would return True, if not False. I didn't particularly like working this way. Dealing with XMLElement responses and then having to manipulate it to get at the exact data I was looking for. Once the check for the DocLibExists, was done, I would either create the document library or send back an error indicating the document library already existed. Now lets examine the code that actually creates the document library. It does what you are really after, it creates a document library. Notice how the template ID is really an integer. Every document library template in SharePoint has an ID associated with it. Document libraries, Image Library, Custom List, Project Tasks, etc… they all he a unique integer associated with it. Well, that's great but the client came back to me and gave me some specifics that each "project" or document library, should have. They specified they had 3 types of projects. Each project would have unique views, about 10 views for each project. Each Project specified unique configurations (auditing, versioning, content types, etc…) So what turned out to be a simple implementation of creating a document library as a repository for a project, turned out to be quite involved.  The first thing I thought of was to create a template for document library. There are other ways you can do this too. Using the web Service call, you could configure views, versioning, even content types, etc… the only catch is, you have to be working quite extensively with CAML. I am not fond of CAML. I can do it and work with it, I just don't like doing it. It is quite touchy and at times it is quite tough to understand where errors were made with CAML statements. Working with Web Services and CAML proved to be quite annoying. The service call would return a generic error message that did not particularly point me to a CAML statement syntax error, or even a CAML error. I was not sure if it was a security , performance or code based issue. It was quite tough to work with. At times it was difficult to work with because of the way SharePoint handles metadata. There are "Names", "Display Name", and "StaticName" fields. It was quite tough to understand at times, which one to use. So it took a lot of trial and error. There are tools that can help with CAML generation. There is also now intellisense for CAML statements in Visual Studio that might help but ultimately I'm not fond of CAML with Web Services.   So I decided on the template. 
So my plan was to create create a document library, configure it accordingly and then use The Template Builder that comes with the SharePoint SDK. This tool allows you to create site templates, list template etc… It is quite interesting because it does not generate an STP file, it actually generates an xml definition and a feature you can activate and make that template available on a site or site collection. The first issue I experienced with this is that one of the specifications to this template was that the "All Documents" view was to have 2 web parts on it. Well, it turns out that using the template builder , it did not include the web parts as part of the list template definition it generated. It backed up the settings, the views, the content types but not the custom web parts. I still decided to try this even without the web parts on the page. This new template defined a new Document library definition with a unique ID. The problem was that the service call accepts an int but it only has access to the built in library int definitions. Any new ones added or created will not be available to create. So this made it impossible for me to approach the problem this way.     I should also mention that one of the nice features about SharePoint is the ability to create list templates, back them up and then create lists based on that template. It can all be done by end user administrators. These templates are quite unique because they are saved as an STP file and not an xml definition. I also went this route and tried to see if there was another service call where I could create a document library based no given template name. Nope! none.      After some thinking I decide to implement a WCF service to do this creation for me. I was quite certain that the object model would allow me to create document libraries base on a template in which an ID was required and also templates saved as STP files. Now I don't want to bother with posting the code to contact WCF service because it's self explanatory, but I will post the code that I used to create a list with custom template. public ServiceResult CreateProject(string name, string templateName, string projectId)         {             string siteurl = SPContext.Current.Site.Url;             Guid webguid = SPContext.Current.Web.ID;                        using (SPSite site = new SPSite(siteurl))             {                 using (SPWeb rootweb = site.RootWeb)                 {                     SPListTemplateCollection temps = site.GetCustomListTemplates(rootweb);                     ProcessWeb(siteurl, webguid, web => Act_CreateProject(web, name, templateName, projectId, temps));                 }//SpWeb             }//SPSite              return _globalResult;                   }         private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) {                         var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));             if (temp != null)             {                             try                 {                                         Guid listGuid = targetsite.Lists.Add(name, "", temp);                     SPList newList = targetsite.Lists[listGuid];                     _globalResult = new ServiceResult(true, "Success", "Success");                 }                 catch (Exception ex)                 {                     _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? 
"None" : ex.Message + " " + templateName), ex.StackTrace.ToString());                 }                                       }        private void ProcessWeb(string siteurl, Guid webguid, Action<SPWeb> action) {                        using (SPSite sitecollection = new SPSite(siteurl)) {                 using (SPWeb web = sitecollection.AllWebs[webguid]) {                     action(web);                 }                     }                  } This code is actually some of the code I implemented for the service. there was a lot more I did on Project Creation which I will cover in my next blog post. I implemented an ACTION method to process the web. This allowed me to properly dispose the SPWEb and SPSite objects and not rewrite this code over and over again. So I implemented a WCF service to create projects for me, this allowed me to do a lot more than just create a document library with a template, it now gave me the flexibility to do just about anything the client wanted at project creation. Once this was implemented , the client came back to me and said, "we reference all our projects with ID's in our application. we want SharePoint to do the same". This has been something I have been doing for a little while now but I do hope that SharePoint 2010 can have more of an answer to this and address it properly. I have been adding metadata to SPWebs through property bag. I believe I have blogged about it before. This time it required metadata added to a document library. No problem!!! I also mentioned these web parts that were to go on the "All Documents" View. I took the opportunity to configure them to the appropriate settings. There were two settings that needed to be set on these web parts. One of them was a Project ID configured in the webpart properties. 
The following code enhances and replaces the "Act_CreateProject " method above:  private void Act_CreateProject(SPWeb targetsite, string name, string templateName, string projectId, SPListTemplateCollection temps) {                         var temp = temps.Cast<SPListTemplate>().FirstOrDefault(x => x.Name.Equals(templateName));             if (temp != null)             {                 SPLimitedWebPartManager wpmgr = null;                               try                 {                                         Guid listGuid = targetsite.Lists.Add(name, "", temp);                     SPList newList = targetsite.Lists[listGuid];                     SPFolder rootFolder = newList.RootFolder;                     rootFolder.Properties.Add(KEY, projectId);                     rootFolder.Update();                     if (rootFolder.ParentWeb != targetsite)                         rootFolder.ParentWeb.Dispose();                     if (!templateName.Contains("Natural"))                     {                         SPView alldocumentsview = newList.Views.Cast<SPView>().FirstOrDefault(x => x.Title.Equals(ALLDOCUMENTS));                         SPFile alldocfile = targetsite.GetFile(alldocumentsview.ServerRelativeUrl);                         wpmgr = alldocfile.GetLimitedWebPartManager(PersonalizationScope.Shared);                         ConfigureWebPart(wpmgr, projectId, CUSTOMWPNAME);                                              alldocfile.Update();                     }                                        if (newList.ParentWeb != targetsite)                         newList.ParentWeb.Dispose();                     _globalResult = new ServiceResult(true, "Success", "Success");                 }                 catch (Exception ex)                 {                     _globalResult = new ServiceResult(false, (string.IsNullOrEmpty(ex.Message) ? "None" : ex.Message + " " + templateName), ex.StackTrace.ToString());                 }                 finally                 {                     if (wpmgr != null)                     {                         wpmgr.Web.Dispose();                         wpmgr.Dispose();                     }                 }             }                         }       private void ConfigureWebPart(SPLimitedWebPartManager mgr, string prjId, string webpartname)         {             var wp = mgr.WebParts.Cast<System.Web.UI.WebControls.WebParts.WebPart>().FirstOrDefault(x => x.DisplayTitle.Equals(webpartname));             if (wp != null)             {                           (wp as ListRelationshipWebPart.ListRelationshipWebPart).ProjectID = prjId;                 mgr.SaveChanges(wp);             }         }   This Shows you how I was able to set metadata on the document library. It has to be added to the RootFolder of the document library, Unfortunately, the SPList does not have a Property bag that I can add a key\value pair to. It has to be done on the root folder. Now everything in the integration will reference projects by ID's and will not care about names. My, "DocLibExists" will now need to be changed because a web service is not set up to look at property bags.  I had to write another method on the Service to do the equivalent but with ID's instead of names.  The second thing you will notice about the code is the use of the Webpartmanager. I have seen several examples online, and also read a lot about memory leaks, The above code does not produce memory leaks. The web part manager creates an SPWeb, so just dispose it like I did. 
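For reference, the ID-based existence check mentioned above isn't shown in the article; a minimal sketch of how it might look on the WCF service side (an assumption, reusing the same KEY property bag constant written in Act_CreateProject) could be:

// Sketch only - walks the lists in the web and checks each root folder's
// property bag for the project ID written during Act_CreateProject.
public bool ProjectExistsById(string projectId)
{
    bool exists = false;
    string siteurl = SPContext.Current.Site.Url;
    Guid webguid = SPContext.Current.Web.ID;

    using (SPSite site = new SPSite(siteurl))
    {
        using (SPWeb web = site.AllWebs[webguid])
        {
            foreach (SPList list in web.Lists)
            {
                SPFolder root = list.RootFolder;
                if (root.Properties.ContainsKey(KEY) &&
                    projectId.Equals(root.Properties[KEY] as string))
                {
                    exists = true;
                    break;
                }
            }
        }
    }
    return exists;
}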
CONCLUSION This is a long post, so I will stop here for now; I will continue with more comparisons and limitations in my next post. My conclusion for this example is that the built-in web services will do the trick if you can suffer through CAML and you are only doing some simple operations. For everything else, there's WCF! I apologize for the disorganization of this post - I wrote it on a bus during a 12 hour trip to Iowa, half asleep and half awake, so hopefully it makes enough sense to someone.

    Read the article

  • WLS MBeans

    - by Jani Rautiainen
    WLS provides a set of Managed Beans (MBeans) to configure, monitor and manage WLS resources. We can use the WLS MBeans to automate some of the tasks related to the configuration and maintenance of the WLS instance. The MBeans can be accessed a number of ways; using various UIs and programmatically using Java or WLST Python scripts.For customization development we can use the features to e.g. manage the deployed customization in MDS, control logging levels, automate deployment of dependent libraries etc. This article is an introduction on how to access and use the WLS MBeans. The goal is to illustrate the various access methods in a single article; the details of the features are left to the linked documentation.This article covers Windows based environment, steps for Linux would be similar however there would be some differences e.g. on how the file paths are defined. MBeansThe WLS MBeans can be categorized to runtime and configuration MBeans.The Runtime MBeans can be used to access the runtime information about the server and its resources. The data from runtime beans is only available while the server is running. The runtime beans can be used to e.g. check the state of the server or deployment.The Configuration MBeans contain information about the configuration of servers and resources. The configuration of the domain is stored in the config.xml file and the configuration MBeans can be used to access and modify the configuration data. For more information on the WLS MBeans refer to: Understanding WebLogic Server MBeans WLS MBean reference Java Management Extensions (JMX)We can use JMX APIs to access the WLS MBeans. This allows us to create Java programs to configure, monitor, and manage WLS resources. In order to use the WLS MBeans we need to add the following library into the class-path: WL_HOME\lib\wljmxclient.jar Connecting to a WLS MBean server The WLS MBeans are contained in a Mbean server, depending on the requirement we can connect to (MBean Server / JNDI Name): Domain Runtime MBean Server weblogic.management.mbeanservers.domainruntime Runtime MBean Server weblogic.management.mbeanservers.runtime Edit MBean Server weblogic.management.mbeanservers.edit To connect to the WLS MBean server first we need to create a map containing the credentials; Hashtable<String, String> param = new Hashtable<String, String>(); param.put(Context.SECURITY_PRINCIPAL, "weblogic");        param.put(Context.SECURITY_CREDENTIALS, "weblogic1");        param.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote"); These define the user, password and package containing the protocol. Next we create the connection: JMXServiceURL serviceURL =     new JMXServiceURL("t3","127.0.0.1",7101,     "/jndi/weblogic.management.mbeanservers.domainruntime"); JMXConnector connector = JMXConnectorFactory.connect(serviceURL, param); MBeanServerConnection connection = connector.getMBeanServerConnection(); With the connection we can now access the MBeans for the WLS instance. For a complete example see Appendix A of this post. For more details refer to Accessing WebLogic Server MBeans with JMX Accessing WLS MBeans The WLS MBeans are structured hierarchically; in order to access content we need to know the path to the MBean we are interested in. The MBean is accessed using “MBeanServerConnection. getAttribute” API.  
WLS provides entry points to the hierarchy allowing us to navigate all the WLS MBeans in the hierarchy (MBean Server / JMX object name): Domain Runtime MBean Server com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean Runtime MBean Servers com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean Edit MBean Server com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean For example we can access the Domain Runtime MBean using: ObjectName service = new ObjectName( "com.bea:Name=DomainRuntimeService," + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean"); Same syntax works for any “child” WLS MBeans e.g. to find out all application deployments we can: ObjectName domainConfig = (ObjectName)connection.getAttribute(service,"DomainConfiguration"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); Alternatively we could access the same MBean using the full syntax: ObjectName domainConfig = new ObjectName("com.bea:Location=DefaultDomain,Name=DefaultDomain,Type=Domain"); ObjectName[] appDeployments = (ObjectName[])connection.getAttribute(domainConfig,"AppDeployments"); For more details refer to Accessing WebLogic Server MBeans with JMX Invoking operations on WLS MBeans The WLS MBean operations can be invoked with MBeanServerConnection. invoke API; in the following example we query the state of “AppsLoggerService” application: ObjectName appRuntimeStateRuntime = new ObjectName("com.bea:Name=AppRuntimeStateRuntime,Type=AppRuntimeStateRuntime"); Object[] parameters = { "AppsLoggerService", "DefaultServer" }; String[] signature = { "java.lang.String", "java.lang.String" }; String result = (String)connection.invoke(appRuntimeStateRuntime,"getCurrentState",parameters, signature); The result returned should be "STATE_ACTIVE" assuming the "AppsLoggerService" application is up and running. WebLogic Scripting Tool (WLST) The WebLogic Scripting Tool (WLST) is a command-line scripting environment that we can access the same WLS MBeans. The tool is located under: $MW_HOME\oracle_common\common\bin\wlst.bat Do note that there are several instances of the wlst script under the $MW_HOME, each of them works, however the commands available vary, so we want to use the one under “oracle_common”. The tool is started in offline mode. In offline mode we can access and manipulate the domain configuration. In online mode we can access the runtime information. We connect to the Administration Server : connect("weblogic","weblogic1", "t3://127.0.0.1:7101") In both online and offline modes we can navigate the WLS MBean using commands like "ls" to print content and "cd" to navigate between objects, for example: All the commands available can be obtained with: help('all') For details of the tool refer to WebLogic Scripting Tool and for the commands available WLST Command and Variable Reference. Also do note that the WLST tool can be invoked from Java code in Embedded Mode. Running Scripts The WLST tool allows us to automate tasks using Python scripts in Script Mode. The script can be manually created or recorded by the WLST tool. 
Example commands of recording a script: startRecording("c:/temp/recording.py") <commands that we want to record> stopRecording() We can run the script from WLST: execfile("c:/temp/recording.py") We can also run the script from the command line: C:\apps\Oracle\Middleware\oracle_common\common\bin\wlst.cmd c:/temp/recording.py There are various sample scripts are provided with the WLS instance. UI to Access the WLS MBeans There are various UIs through which we can access the WLS MBeans. Oracle Enterprise Manager Fusion Middleware Control Oracle WebLogic Server Administration Console Fusion Middleware Control MBean Browser In the integrated JDeveloper environment only the Oracle WebLogic Server Administration Console is available to us. For more information refer to the documentation, one noteworthy feature in the console is the ability to record WLST scripts based on the navigation. In addition to the UIs above the JConsole included in the JDK can be used to access the WLS MBeans. The JConsole needs to be started with specific parameter to force WLS objects to be used and jar files in the classpath: "C:\apps\Oracle\Middleware\jdk160_24\bin\jconsole" -J-Djava.class.path=C:\apps\Oracle\Middleware\jdk160_24\lib\jconsole.jar;C:\apps\Oracle\Middleware\jdk160_24\lib\tools.jar;C:\apps\Oracle\Middleware\wlserver_10.3\server\lib\wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote For more details refer to the Accessing Custom MBeans from JConsole. Summary In this article we have covered various ways we can access and use the WLS MBeans in context of integrated WLS in JDeveloper to be used for Fusion Application customization development. References Developing Custom Management Utilities With JMX for Oracle WebLogic Server Accessing WebLogic Server MBeans with JMX WebLogic Server MBean Reference WebLogic Scripting Tool WLST Command and Variable Reference Appendix A package oracle.apps.test; import java.io.IOException;import java.net.MalformedURLException;import java.util.Hashtable;import javax.management.MBeanServerConnection;import javax.management.MalformedObjectNameException;import javax.management.ObjectName;import javax.management.remote.JMXConnector;import javax.management.remote.JMXConnectorFactory;import javax.management.remote.JMXServiceURL;import javax.naming.Context;/** * This class contains simple examples on how to access WLS MBeans using JMX. */public class BlogExample {    /**     * Connection to the WLS MBeans     */    private MBeanServerConnection connection;    /**     * Constructor that takes in the connection information for the      * domain and obtains the resources from WLS MBeans using JMX.     * @param hostName host name to connect to for the WLS server     * @param port port to connect to for the WLS server     * @param userName user name to connect to for the WLS server     * @param password password to connect to for the WLS server     */    public BlogExample(String hostName, String port, String userName,                       String password) {        super();        try {            initConnection(hostName, port, userName, password);        } catch (Exception e) {            throw new RuntimeException("Unable to connect to the domain " +                                       hostName + ":" + port);        }    }    /**     * Default constructor.     * Tries to create connection with default values. Runtime exception will be     * thrown if the default values are not used in the local instance.     
     */
    public BlogExample() {
        this("127.0.0.1", "7101", "weblogic", "weblogic1");
    }

    /**
     * Initializes the JMX connection to the WLS MBeans
     * @param hostName host name to connect to for the WLS server
     * @param port port to connect to for the WLS server
     * @param userName user name to connect to for the WLS server
     * @param password password to connect to for the WLS server
     * @throws IOException error connecting to the WLS MBeans
     * @throws MalformedURLException error connecting to the WLS MBeans
     * @throws MalformedObjectNameException error connecting to the WLS MBeans
     */
    private void initConnection(String hostName, String port, String userName,
                                String password)
        throws IOException, MalformedURLException, MalformedObjectNameException {
        String protocol = "t3";
        String jndiroot = "/jndi/";
        String mserver = "weblogic.management.mbeanservers.domainruntime";
        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostName, Integer.valueOf(port),
                              jndiroot + mserver);
        Hashtable<String, String> h = new Hashtable<String, String>();
        h.put(Context.SECURITY_PRINCIPAL, userName);
        h.put(Context.SECURITY_CREDENTIALS, password);
        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
              "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
        connection = connector.getMBeanServerConnection();
    }

    /**
     * Main method used to invoke the logic for testing
     * @param args arguments passed to the program
     */
    public static void main(String[] args) {
        BlogExample blogExample = new BlogExample();
        blogExample.testEntryPoint();
        blogExample.testDirectAccess();
        blogExample.testInvokeOperation();
    }

    /**
     * Example of using an entry point to navigate the WLS MBean hierarchy.
     */
    public void testEntryPoint() {
        try {
            System.out.println("testEntryPoint");
            ObjectName service =
                new ObjectName("com.bea:Name=DomainRuntimeService,Type=" +
                    "weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
            ObjectName domainConfig =
                (ObjectName)connection.getAttribute(service,
                                                    "DomainConfiguration");
            ObjectName[] appDeployments =
                (ObjectName[])connection.getAttribute(domainConfig,
                                                      "AppDeployments");
            for (ObjectName appDeployment : appDeployments) {
                String resourceIdentifier =
                    (String)connection.getAttribute(appDeployment,
                                                    "SourcePath");
                System.out.println(resourceIdentifier);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Example of accessing WLS MBean directly with a full reference.
     * This does the same thing as testEntryPoint in a slightly different way.
     */
    public void testDirectAccess() {
        try {
            System.out.println("testDirectAccess");
            ObjectName appDeployment =
                new ObjectName("com.bea:Location=DefaultDomain," +
                               "Name=AppsLoggerService,Type=AppDeployment");
            String resourceIdentifier =
                (String)connection.getAttribute(appDeployment, "SourcePath");
            System.out.println(resourceIdentifier);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * Example of invoking operation on a WLS MBean.
     */
    public void testInvokeOperation() {
        try {
            System.out.println("testInvokeOperation");
            ObjectName appRuntimeStateRuntime =
                new ObjectName("com.bea:Name=AppRuntimeStateRuntime," +
                               "Type=AppRuntimeStateRuntime");
            String identifier = "AppsLoggerService";
            String serverName = "DefaultServer";
            Object[] parameters = { identifier, serverName };
            String[] signature = { "java.lang.String", "java.lang.String" };
            String result =
                (String)connection.invoke(appRuntimeStateRuntime, "getCurrentState",
                                          parameters, signature);
            System.out.println("State of " + identifier + " = " + result);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

    Read the article

  • ASP.NET Frameworks and Raw Throughput Performance

    - by Rick Strahl
    A few days ago I had a curious thought: With all these different technologies that the ASP.NET stack has to offer, what's the most efficient technology overall to return data for a server request? When I started this it was mere curiosity rather than a real practical need or result. Different tools are used for different problems and so performance differences are to be expected. But still I was curious to see how the various technologies performed relative to each just for raw throughput of the request getting to the endpoint and back out to the client with as little processing in the actual endpoint logic as possible (aka Hello World!). I want to clarify that this is merely an informal test for my own curiosity and I'm sharing the results and process here because I thought it was interesting. It's been a long while since I've done any sort of perf testing on ASP.NET, mainly because I've not had extremely heavy load requirements and because overall ASP.NET performs very well even for fairly high loads so that often it's not that critical to test load performance. This post is not meant to make a point  or even come to a conclusion which tech is better, but just to act as a reference to help understand some of the differences in perf and give a starting point to play around with this yourself. I've included the code for this simple project, so you can play with it and maybe add a few additional tests for different things if you like. Source Code on GitHub I looked at this data for these technologies: ASP.NET Web API ASP.NET MVC WebForms ASP.NET WebPages ASMX AJAX Services  (couldn't get AJAX/JSON to run on IIS8 ) WCF Rest Raw ASP.NET HttpHandlers It's quite a mixed bag, of course and the technologies target different types of development. What started out as mere curiosity turned into a bit of a head scratcher as the results were sometimes surprising. What I describe here is more to satisfy my curiosity more than anything and I thought it interesting enough to discuss on the blog :-) First test: Raw Throughput The first thing I did is test raw throughput for the various technologies. This is the least practical test of course since you're unlikely to ever create the equivalent of a 'Hello World' request in a real life application. The idea here is to measure how much time a 'NOP' request takes to return data to the client. So for this request I create the simplest Hello World request that I could come up for each tech. Http Handler The first is the lowest level approach which is an HTTP handler. public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public bool IsReusable { get { return true; } } } WebForms Next I added a couple of ASPX pages - one using CodeBehind and one using only a markup page. The CodeBehind page simple does this in CodeBehind without any markup in the ASPX page: public partial class HelloWorld_CodeBehind : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { Response.Write("Hello World. Time is: " + DateTime.Now.ToString() ); Response.End(); } } while the Markup page only contains some static output via an expression:<%@ Page Language="C#" AutoEventWireup="false" CodeBehind="HelloWorld_Markup.aspx.cs" Inherits="AspNetFrameworksPerformance.HelloWorld_Markup" %> Hello World. Time is <%= DateTime.Now %> ASP.NET WebPages WebPages is the freestanding Razor implementation of ASP.NET. 
Here's the simple HelloWorld.cshtml page:Hello World @DateTime.Now WCF REST WCF REST was the token REST implementation for ASP.NET before WebAPI and the inbetween step from ASP.NET AJAX. I'd like to forget that this technology was ever considered for production use, but I'll include it here. Here's an OperationContract class: [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World" + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } } WCF REST can return arbitrary results by returning a Stream object and a content type. The code above turns the string result into a stream and returns that back to the client. ASP.NET AJAX (ASMX Services) I also wanted to test ASP.NET AJAX services because prior to WebAPI this is probably still the most widely used AJAX technology for the ASP.NET stack today. Unfortunately I was completely unable to get this running on my Windows 8 machine. Visual Studio 2012  removed adding of ASP.NET AJAX services, and when I tried to manually add the service and configure the script handler references it simply did not work - I always got a SOAP response for GET and POST operations. No matter what I tried I always ended up getting XML results even when explicitly adding the ScriptHandler. So, I didn't test this (but the code is there - you might be able to test this on a Windows 7 box). ASP.NET MVC Next up is probably the most popular ASP.NET technology at the moment: MVC. Here's the small controller: public class MvcPerformanceController : Controller { public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } } ASP.NET WebAPI Next up is WebAPI which looks kind of similar to MVC. Except here I have to use a StringContent result to return the response: public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } } Testing Take a minute to think about each of the technologies… and take a guess which you think is most efficient in raw throughput. The fastest should be pretty obvious, but the others - maybe not so much. The testing I did is pretty informal since it was mainly to satisfy my curiosity - here's how I did this: I used Apache Bench (ab.exe) from a full Apache HTTP installation to run and log the test results of hitting the server. ab.exe is a small executable that lets you hit a URL repeatedly and provides counter information about the number of requests, requests per second etc. ab.exe and the batch file are located in the \LoadTests folder of the project. An ab.exe command line  looks like this: ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld which hits the specified URL 100,000 times with a load factor of 20 concurrent requests. This results in output like this:   It's a great way to get a quick and dirty performance summary. Run it a few times to make sure there's not a large amount of varience. You might also want to do an IISRESET to clear the Web Server. 
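If you would rather do a quick sanity check from managed code instead of ab.exe, something along these lines works as a rough approximation. This is purely illustrative and not part of the actual test project; the URL, request count and concurrency level are placeholders to adjust:

// Minimal, illustrative load harness (not part of the original test project).
// Approximates ab.exe -n/-c: N total GET requests spread over C concurrent workers.
using System;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class MiniLoadTest
{
    static void Main()
    {
        // Assumed local endpoint - point this at whichever handler you want to hit.
        const string url = "http://localhost/aspnetperf/api/HelloWorld";
        const int totalRequests = 10000;
        const int concurrency = 20;

        // Without this, console apps default to 2 outgoing connections per host.
        ServicePointManager.DefaultConnectionLimit = concurrency;

        var client = new HttpClient();
        var watch = Stopwatch.StartNew();

        // Each worker issues its share of the requests sequentially;
        // the workers themselves run in parallel.
        var workers = Enumerable.Range(0, concurrency).Select(async _ =>
        {
            for (int i = 0; i < totalRequests / concurrency; i++)
            {
                var response = await client.GetAsync(url);
                response.EnsureSuccessStatusCode();
            }
        });

        Task.WhenAll(workers).Wait();
        watch.Stop();

        Console.WriteLine("Requests/sec: {0:n0}",
                          totalRequests / watch.Elapsed.TotalSeconds);
    }
}

It won't match ab.exe's accuracy or reporting, but it's handy for spotting gross differences, and the same warm-up advice below applies to it just as much.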
Just make sure you do a short test run to warm up the server first - otherwise your first run is likely to be skewed downwards. ab.exe also allows you to specify headers and provide POST data and many other things if you want to get a little more fancy. Here all tests are GET requests to keep it simple. I ran each test: 100,000 iterations Load factor of 20 concurrent connections IISReset before starting A short warm up run for API and MVC to make sure startup cost is mitigated Here is the batch file I used for the test: IISRESET REM make sure you add REM C:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin REM to your path so ab.exe can be found REM Warm up ab.exe -n100 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJsonab.exe -n100 -c20 http://localhost/aspnetperf/api/HelloWorldJson ab.exe -n100 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld ab.exe -n100000 -c20 http://localhost/aspnetperf/handler.ashx > handler.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_CodeBehind.aspx > AspxCodeBehind.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/HelloWorld_Markup.aspx > AspxMarkup.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorld > Wcf.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldCode > Mvc.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorld > WebApi.txt I ran each of these tests 3 times and took the average score for Requests/second, with the machine otherwise idle. I did see a bit of variance when running many tests but the values used here are the medians. Part of this has to do with the fact I ran the tests on my local machine - result would probably more consistent running the load test on a separate machine hitting across the network. I ran these tests locally on my laptop which is a Dell XPS with quad core Sandibridge I7-2720QM @ 2.20ghz and a fast SSD drive on Windows 8. CPU load during tests ran to about 70% max across all 4 cores (IOW, it wasn't overloading the machine). Ideally you can try running these tests on a separate machine hitting the local machine. If I remember correctly IIS 7 and 8 on client OSs don't throttle so the performance here should be Results Ok, let's cut straight to the chase. Below are the results from the tests… It's not surprising that the handler was fastest. But it was a bit surprising to me that the next fastest was WebForms and especially Web Forms with markup over a CodeBehind page. WebPages also fared fairly well. MVC and WebAPI are a little slower and the slowest by far is WCF REST (which again I find surprising). As mentioned at the start the raw throughput tests are not overly practical as they don't test scripting performance for the HTML generation engines or serialization performances of the data engines. All it really does is give you an idea of the raw throughput for the technology from time of request to reaching the endpoint and returning minimal text data back to the client which indicates full round trip performance. But it's still interesting to see that Web Forms performs better in throughput than either MVC, WebAPI or WebPages. It'd be interesting to try this with a few pages that actually have some parsing logic on it, but that's beyond the scope of this throughput test. But what's also amazing about this test is the sheer amount of traffic that a laptop computer is handling. 
Even the slowest tech managed 5700 requests a second, which is one hell of a lot of requests if you extrapolate that out over a 24 hour period. Remember these are not static pages, but dynamic requests that are being served. Another test - JSON Data Service Results The second test I used a JSON result from several of the technologies. I didn't bother running WebForms and WebPages through this test since that doesn't make a ton of sense to return data from the them (OTOH, returning text from the APIs didn't make a ton of sense either :-) In these tests I have a small Person class that gets serialized and then returned to the client. The Person class looks like this: public class Person { public Person() { Id = 10; Name = "Rick"; Entered = DateTime.Now; } public int Id { get; set; } public string Name { get; set; } public DateTime Entered { get; set; } } Here are the updated handler classes that use Person: Handler public class Handler : IHttpHandler { public void ProcessRequest(HttpContext context) { var action = context.Request.QueryString["action"]; if (action == "json") JsonRequest(context); else TextRequest(context); } public void TextRequest(HttpContext context) { context.Response.ContentType = "text/plain"; context.Response.Write("Hello World. Time is: " + DateTime.Now.ToString()); } public void JsonRequest(HttpContext context) { var json = JsonConvert.SerializeObject(new Person(), Formatting.None); context.Response.ContentType = "application/json"; context.Response.Write(json); } public bool IsReusable { get { return true; } } } This code adds a little logic to check for a action query string and route the request to an optional JSON result method. To generate JSON, I'm using the same JSON.NET serializer (JsonConvert.SerializeObject) used in Web API to create the JSON response. WCF REST   [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class WcfService { [OperationContract] [WebGet] public Stream HelloWorld() { var data = Encoding.Unicode.GetBytes("Hello World " + DateTime.Now.ToString()); var ms = new MemoryStream(data); // Add your operation implementation here return ms; } [OperationContract] [WebGet(ResponseFormat=WebMessageFormat.Json,BodyStyle=WebMessageBodyStyle.WrappedRequest)] public Person HelloWorldJson() { // Add your operation implementation here return new Person(); } } For WCF REST all I have to do is add a method with the Person result type.   ASP.NET MVC public class MvcPerformanceController : Controller { // // GET: /MvcPerformance/ public ActionResult Index() { return View(); } public ActionResult HelloWorldCode() { return new ContentResult() { Content = "Hello World. Time is: " + DateTime.Now.ToString() }; } public JsonResult HelloWorldJson() { return Json(new Person(), JsonRequestBehavior.AllowGet); } } For MVC all I have to do for a JSON response is return a JSON result. ASP.NET internally uses JavaScriptSerializer. ASP.NET WebAPI public class WebApiPerformanceController : ApiController { [HttpGet] public HttpResponseMessage HelloWorldCode() { return new HttpResponseMessage() { Content = new StringContent("Hello World. 
Time is: " + DateTime.Now.ToString(), Encoding.UTF8, "text/plain") }; } [HttpGet] public Person HelloWorldJson() { return new Person(); } [HttpGet] public HttpResponseMessage HelloWorldJson2() { var response = new HttpResponseMessage(HttpStatusCode.OK); response.Content = new ObjectContent<Person>(new Person(), GlobalConfiguration.Configuration.Formatters.JsonFormatter); return response; } } Testing and Results To run these data requests I used the following ab.exe commands:REM JSON RESPONSES ab.exe -n100000 -c20 http://localhost/aspnetperf/Handler.ashx?action=json > HandlerJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/MvcPerformance/HelloWorldJson > MvcJson.txt ab.exe -n100000 -c20 http://localhost/aspnetperf/api/HelloWorldJson > WebApiJson.txt ab.exe -n100000 -c20 http://localhost/AspNetPerf/WcfService.svc/HelloWorldJson > WcfJson.txt The results from this test run are a bit interesting in that the WebAPI test improved performance significantly over returning plain string content. Here are the results:   The performance for each technology drops a little bit except for WebAPI which is up quite a bit! From this test it appears that WebAPI is actually significantly better performing returning a JSON response, rather than a plain string response. Snag with Apache Benchmark and 'Length Failures' I ran into a little snag with Apache Benchmark, which was reporting failures for my Web API requests when serializing. As the graph shows performance improved significantly from with JSON results from 5580 to 6530 or so which is a 15% improvement (while all others slowed down by 3-8%). However, I was skeptical at first because the WebAPI test reports showed a bunch of errors on about 10% of the requests. Check out this report: Notice the Failed Request count. What the hey? Is WebAPI failing on roughly 10% of requests when sending JSON? Turns out: No it's not! But it took some sleuthing to figure out why it reports these failures. At first I thought that Web API was failing, and so to make sure I re-ran the test with Fiddler attached and runiisning the ab.exe test by using the -X switch: ab.exe -n100 -c10 -X localhost:8888 http://localhost/aspnetperf/api/HelloWorldJson which showed that indeed all requests where returning proper HTTP 200 results with full content. However ab.exe was reporting the errors. After some closer inspection it turned out that the dates varying in size altered the response length in dynamic output. For example: these two results: {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.841926-10:00"} {"Id":10,"Name":"Rick","Entered":"2012-09-04T10:57:24.8519262-10:00"} are different in length for the number which results in 68 and 69 bytes respectively. The same URL produces different result lengths which is what ab.exe reports. I didn't notice at first bit the same is happening when running the ASHX handler with JSON.NET result since it uses the same serializer that varies the milliseconds. Moral: You can typically ignore Length failures in Apache Benchmark and when in doubt check the actual output with Fiddler. Note that the other failure values are accurate though. Another interesting Side Note: Perf drops over Time As I was running these tests repeatedly I was finding that performance steadily dropped from a startup peak to a 10-15% lower stable level. IOW, with Web API I'd start out with around 6500 req/sec and in subsequent runs it keeps dropping until it would stabalize somewhere around 5900 req/sec occasionally jumping lower. 
For these tests this is why I did the IISRESET and warm up before the individual runs. This is a little puzzling. Looking at Process Monitor while the tests are running, memory very quickly levels out, as do handles and threads, on the first test run. On subsequent runs everything stays stable, but performance starts going downwards. This applies to all the technologies - Handlers, Web Forms, MVC, Web API - and I'm curious to see if others test this and see similar results. Doing an IISRESET then resets everything and performance starts off at peak again…

Summary

As I stated at the outset, these tests were informal, to satiate my curiosity, not to prove that any technology is better or even faster than another. While there clearly are differences in performance, the differences (other than WCF REST, which was by far the slowest, and the raw handler, which was by far the fastest) are relatively minor, so there is no need to feel that any one technology is a runaway standout in raw performance. Choosing a technology is about more than pure performance; it is also about adequacy for the job and ease of implementation. The strengths of each technology will more than make up for the minor performance differences we see in these tests. However, to me it's important to get an occasional reality check and compare where new technologies are heading. Oftentimes old stuff that's been optimized and designed for a time of less horsepower can utterly blow the doors off newer tech, and simple checks like this let you compare.  Luckily we're seeing that much of the new stuff performs well even in V1.0, which is great.

To me it was very interesting to see Web API perform relatively badly with plain string content, which originally led me to think that Web API might not be properly optimized just yet. For those that caught my tweets late last week regarding WebAPI's slow responses: that was with string content, which is in fact considerably slower. Luckily, where it counts, with serialized JSON and XML, WebAPI actually performs better. But I do wonder what would make generic string content slower than serialized output. This stresses another point: don't take a single test as the final gospel and don't extrapolate out from a single set of tests. Certainly Twitter can make you feel like a fool when you post something immediate that hasn't been fleshed out a little more <blush>. Egg on my face. As a result I ended up screwing around with this for a few hours today to compare different scenarios. Well worth the time… I hope you found this useful, if not for the results, maybe for the process of quickly testing a few requests for performance and charting out a comparison. Now onwards with more serious stuff…

Resources

Source Code on GitHub
Apache HTTP Server Project (ab.exe is part of the binary distribution)

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in ASP.NET  Web Api
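A quick footnote to the 'Length' failures discussed above: if you want the JSON payload to come back at a constant length, one option is to pin Json.NET's date format so the fractional seconds always have the same number of digits. This is only a sketch, not something the test project does, and the format string is an assumption:

// Illustrative only - not part of the original test project.
// Pinning the date format keeps the serialized Person a constant length,
// so ab.exe no longer flags the harmless 'Length' failures.
using System;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;

public class Person
{
    // Mirrors the Person class used in the post.
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime Entered { get; set; }
}

public static class FixedLengthJson
{
    public static string Serialize(Person person)
    {
        // "fff" always emits exactly three fractional-second digits, unlike the
        // default ISO output where the number of trailing digits can vary.
        var dateConverter = new IsoDateTimeConverter
        {
            DateTimeFormat = "yyyy-MM-dd'T'HH:mm:ss.fffzzz"
        };
        return JsonConvert.SerializeObject(person, Formatting.None, dateConverter);
    }
}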

    Read the article

  • Skinny controller in ASP.NET MVC 4

    - by thangchung
    Rails community are always inspire a lot of best ideas. I really love this community by the time. One of these is "Fat models and skinny controllers". I have spent a lot of time on ASP.NET MVC, and really I did some miss-takes, because I made the controller so fat. That make controller is really dirty and very hard to maintain in the future. It is violate seriously SRP principle and KISS as well. But how can we achieve that in ASP.NET MVC? That question is really clear after I read "Manning ASP.NET MVC 4 in Action". It is simple that we can separate it into ActionResult, and try to implementing logic and persistence data inside this. In last 2 years, I have read this from Jimmy Bogard blog, but in that time I never had a consideration about it. That's enough for talking now. I just published a sample on ASP.NET MVC 4, implemented on Visual Studio 2012 RC at here. I used EF framework at here for implementing persistence layer, and also use 2 free templates from internet to make the UI for this sample. In this sample, I try to implementing the simple magazine website that managing all articles, categories and news. It is not finished at all in this time, but no problems, because I just show you about how can we make the controller skinny. And I wanna hear more about your ideas. The first thing, I am abstract the base ActionResult class like this:    public abstract class MyActionResult : ActionResult, IEnsureNotNull     {         public abstract void EnsureAllInjectInstanceNotNull();     }     public abstract class ActionResultBase<TController> : MyActionResult where TController : Controller     {         protected readonly Expression<Func<TController, ActionResult>> ViewNameExpression;         protected readonly IExConfigurationManager ConfigurationManager;         protected ActionResultBase (Expression<Func<TController, ActionResult>> expr)             : this(DependencyResolver.Current.GetService<IExConfigurationManager>(), expr)         {         }         protected ActionResultBase(             IExConfigurationManager configurationManager,             Expression<Func<TController, ActionResult>> expr)         {             Guard.ArgumentNotNull(expr, "ViewNameExpression");             Guard.ArgumentNotNull(configurationManager, "ConfigurationManager");             ViewNameExpression = expr;             ConfigurationManager = configurationManager;         }         protected ViewResult GetViewResult<TViewModel>(TViewModel viewModel)         {             var m = (MethodCallExpression)ViewNameExpression.Body;             if (m.Method.ReturnType != typeof(ActionResult))             {                 throw new ArgumentException("ControllerAction method '" + m.Method.Name + "' does not return type ActionResult");             }             var result = new ViewResult             {                 ViewName = m.Method.Name             };             result.ViewData.Model = viewModel;             return result;         }         public override void ExecuteResult(ControllerContext context)         {             EnsureAllInjectInstanceNotNull();         }     } I also have an interface for validation all inject objects. This interface make sure all inject objects that I inject using Autofac container are not null. 
The implementation of this as below public interface IEnsureNotNull     {         void EnsureAllInjectInstanceNotNull();     } Afterwards, I am just simple implementing the HomePageViewModelActionResult class like this public class HomePageViewModelActionResult<TController> : ActionResultBase<TController> where TController : Controller     {         #region variables & ctors         private readonly ICategoryRepository _categoryRepository;         private readonly IItemRepository _itemRepository;         private readonly int _numOfPage;         public HomePageViewModelActionResult(Expression<Func<TController, ActionResult>> viewNameExpression)             : this(viewNameExpression,                    DependencyResolver.Current.GetService<ICategoryRepository>(),                    DependencyResolver.Current.GetService<IItemRepository>())         {         }         public HomePageViewModelActionResult(             Expression<Func<TController, ActionResult>> viewNameExpression,             ICategoryRepository categoryRepository,             IItemRepository itemRepository)             : base(viewNameExpression)         {             _categoryRepository = categoryRepository;             _itemRepository = itemRepository;             _numOfPage = ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger();         }         #endregion         #region implementation         public override void ExecuteResult(ControllerContext context)         {             base.ExecuteResult(context);             var cats = _categoryRepository.GetCategories();             var mainViewModel = new HomePageViewModel();             var headerViewModel = new HeaderViewModel();             var footerViewModel = new FooterViewModel();             var mainPageViewModel = new MainPageViewModel();             headerViewModel.SiteTitle = "Magazine Website";             if (cats != null && cats.Any())             {                 headerViewModel.Categories = cats.ToList();                 footerViewModel.Categories = cats.ToList();             }             mainPageViewModel.LeftColumn = BindingDataForMainPageLeftColumnViewModel();             mainPageViewModel.RightColumn = BindingDataForMainPageRightColumnViewModel();             mainViewModel.Header = headerViewModel;             mainViewModel.DashBoard = new DashboardViewModel();             mainViewModel.Footer = footerViewModel;             mainViewModel.MainPage = mainPageViewModel;             GetViewResult(mainViewModel).ExecuteResult(context);         }         public override void EnsureAllInjectInstanceNotNull()         {             Guard.ArgumentNotNull(_categoryRepository, "CategoryRepository");             Guard.ArgumentNotNull(_itemRepository, "ItemRepository");             Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage");         }         #endregion         #region private functions         private MainPageRightColumnViewModel BindingDataForMainPageRightColumnViewModel()         {             var mainPageRightCol = new MainPageRightColumnViewModel();             mainPageRightCol.LatestNews = _itemRepository.GetNewestItem(_numOfPage).ToList();             mainPageRightCol.MostViews = _itemRepository.GetMostViews(_numOfPage).ToList();             return mainPageRightCol;         }         private MainPageLeftColumnViewModel BindingDataForMainPageLeftColumnViewModel()         {             var mainPageLeftCol = new MainPageLeftColumnViewModel();             var items = _itemRepository.GetNewestItem(_numOfPage);             if (items != null && 
items.Any())             {                 var firstItem = items.First();                 if (firstItem == null)                     throw new NoNullAllowedException("First Item".ToNotNullErrorMessage());                 if (firstItem.ItemContent == null)                     throw new NoNullAllowedException("First ItemContent".ToNotNullErrorMessage());                 mainPageLeftCol.FirstItem = firstItem;                 if (items.Count() > 1)                 {                     mainPageLeftCol.RemainItems = items.Where(x => x.ItemContent != null && x.Id != mainPageLeftCol.FirstItem.Id).ToList();                 }             }             return mainPageLeftCol;         }         #endregion     }  Final step, I get into HomeController and add some line of codes like this [Authorize]     public class HomeController : BaseController     {         [AllowAnonymous]         public ActionResult Index()         {             return new HomePageViewModelActionResult<HomeController>(x=>x.Index());         }         [AllowAnonymous]         public ActionResult Details(int id)         {             return new DetailsViewModelActionResult<HomeController>(x => x.Details(id), id);         }         [AllowAnonymous]         public ActionResult Category(int id)         {             return new CategoryViewModelActionResult<HomeController>(x => x.Category(id), id);         }     } As you see, the code in controller is really skinny, and all the logic I move to the custom ActionResult class. Some people said, it just move the code out of controller and put it to another class, so it is still hard to maintain. Look like it just move the complicate codes from one place to another place. But if you have a look and think it in details, you have to find out if you have code for processing all logic that related to HttpContext or something like this. You can do it on Controller, and try to delegating another logic  (such as processing business requirement, persistence data,...) to custom ActionResult class. Tell me more your thinking, I am really willing to hear all of its from you guys. All source codes can be find out at here. Tweet !function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0];if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src="http://weblogs.asp.net//platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");
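The sample relies on a small Guard class and a couple of string extensions (ToInteger, ToNotNullErrorMessage) that the post doesn't show. A minimal sketch of what they might look like is below; these are assumptions for illustration, and the actual implementations in the linked source code may differ:

// Hypothetical sketch of the helpers referenced by the sample above.
using System;

public static class Guard
{
    // Used as Guard.ArgumentNotNull(expr, "ViewNameExpression") in the sample.
    public static void ArgumentNotNull(object argument, string argumentName)
    {
        if (argument == null)
            throw new ArgumentNullException(argumentName);
    }

    // Used as Guard.ArgumentMustMoreThanZero(_numOfPage, "NumOfPage") in the sample.
    public static void ArgumentMustMoreThanZero(int argument, string argumentName)
    {
        if (argument <= 0)
            throw new ArgumentOutOfRangeException(argumentName,
                argumentName + " must be greater than zero.");
    }
}

public static class StringExtensions
{
    // Used as ConfigurationManager.GetAppConfigBy("NumOfPage").ToInteger().
    public static int ToInteger(this string value)
    {
        int result;
        return int.TryParse(value, out result) ? result : 0;
    }

    // Used to build messages like "First Item must not be null." for NoNullAllowedException.
    public static string ToNotNullErrorMessage(this string name)
    {
        return name + " must not be null.";
    }
}

Keeping these checks in tiny static helpers is part of what keeps the custom ActionResult classes, and therefore the controllers, skinny.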

    Read the article

< Previous Page | 220 221 222 223 224 225 226 227 228 229 230 231  | Next Page >