Search Results

Search found 16077 results on 644 pages for 'meeting request'.

  • eSTEP TechCast - December 2012

    - by Cinzia Mascanzoni
    Our next eSTEP TechCast will be on Thursday, 06 December 2012, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST).

    Title: Innovations with Oracle Solaris Cluster 4

    In this webcast we will focus on the integration of the cluster software with the IPS packaging system of Solaris 11, which makes installing and updating the software much easier and much more reliable, especially when virtualization technologies are involved. Our webcast will also cover new versions of Oracle Solaris Cluster if they are announced in the meantime.

    Call Info:
    Call-in-toll-free number: 08006948154 (United Kingdom)
    Call-in-toll-free number: +44-2081181001 (United Kingdom)
    Conference Code: 803 594 3
    Security Passcode: 9876

    Webex Info (Oracle Web Conference):
    Meeting Number: 255 760 510
    Meeting Password: tech2011

    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN eSTEP_2011 to get access.

    Read the article

  • Speaking in Omaha: December 7, 2011

    - by Bill Graziano
    I’m presenting in Omaha on Writing Faster SQL at 6PM on December 7th.  You can find meeting details on the Omaha SQL Server User Group page. The meeting location requires an RSVP so building security has a list of attendees. The presentation is a series of suggestions on improving performance.  It ranges from simple things like comparing indexed columns to scalar values up to tips for reducing query compiles and asynchronous processing patterns.  Nearly all of these come from specific issues I’ve encountered working on poorly performing SQL Servers.

    Read the article

  • How to build the SQL community

    - by simonsabin
    I’ve been running SQLBits for 5 years and have always had a desire to make the SQL community better. I’ve often thought about running for the board but have never stood. Just over a year ago I was at a meeting with some SQL leaders about growing PASS globally. At that meeting a friend of mine offered to help the board from an international perspective. I thought he was mad. James runs his own business, has been managing the sponsors for SQLBits and has 3 kids to look after; no way would he have the...(read more)

    Read the article

  • Unlock the Value of Big Data

    - by Mike.Hallett(at)Oracle-BI&EPM
    Partners should read this comprehensive new e-book to get advice from Oracle and industry leaders on how you can use big data to generate new business insights and make better decisions for your customers. “Big data represents an opportunity averaging 14% of current revenue.” —From the Oracle big data e-book, Meeting the Challenge of Big Data

    You’ll gain instant access to:
    • Straightforward approaches for acquiring, organizing, and analyzing data
    • Architectures and tools needed to integrate new data with your existing investments
    • Survey data revealing how leading companies are using big data, so you can benchmark your progress
    • Expert resources such as white papers, analyst videos, 3-D demos, and more

    If you want to be ready for the data deluge, Meeting the Challenge of Big Data is a must-read. Register today for the e-book and read it on your computer or Apple iPad.

    Read the article

  • Ubuntu Australia LoCo Team

    - by benonsoftware
    First, I would like to mention that at the moment I have had 1,174 page views. That does not count people who just browse via the homepage, so add around another 200 visitors! Back on topic: I recently joined the Ubuntu-AU (Australia) LoCo team and it is great! Basically, it is an Ubuntu LoCo community group. Just before my first meeting I was emailing the admin, and he said that there was a spot for a team reporter; I put my hand up and he said yes. If he had asked at the team meeting I would have said no way! But because I saw it beforehand and asked him myself, I wanted to do it. If you live in Australia, consider joining Ubuntu-AU; they also have a Launchpad team.

    Read the article

  • Have you considered working for Microsoft in the IT organization?

    - by CatherineRussell
    Microsoft’s IT organization is hosting a Webinar on Thursday, June 24th at 11:30 AM PDT for female technology professionals interested in meeting some of the dynamic women who work in IT at Microsoft. Four different women from across the organization will share their stories and highlight why they have chosen Microsoft’s IT organization as a great place to grow and nurture their careers. The event can be experienced via Live Meeting and audio conferencing. An RSVP is required to attend. To reserve your spot, register here: http://microsoftit.eventbrite.com Upon registration, a confirmation will be sent including additional event details and a FAQ. Find out more about career opportunities with Microsoft IT: http://www.microsoft-careers.com/content/information-technology/?utm_source=LinkedIn&utm_campaign=Event_Women_in_IT_Webinar_MSIT_US_maryb_6152010

    Read the article

  • Uploading images from Flex to Rails using Paperclip

    - by 23tux
    Hi everyone, I'm looking for a way to upload images that were created in my Flex app to Rails. I've tried to use Paperclip, but it doesn't seem to work. I followed this tutorial: http://blog.alexonrails.net/?p=218 The problem is that the tutorial uses a FileReference to browse for files on the client's computer. It calls the .upload(...) function and sends the data to the upload controller. But I'm using a URLLoader to upload an image that is modified in my Flex app. First, here is the code from the tutorial:

        private function selectHandler(event:Event):void {
            var vars:URLVariables = new URLVariables();
            var request:URLRequest = new URLRequest(uri);
            request.method = URLRequestMethod.POST;
            vars.description = "My Description";
            request.data = vars;
            var uploadDataFieldName:String = 'filemanager[file]';
            fileReference.upload(request, uploadDataFieldName);
        }

    I don't know how to set var uploadDataFieldName:String = 'filemanager[file]'; in a URLLoader. I've got the image data compressed as a JPEG in a ByteArray. It looks like this:

        public function savePicture():void {
            var filename:String = "blubblub.jpg";
            var vars:URLVariables = new URLVariables();
            vars.position = layoutView.currentPicPosition;
            vars.url = filename;
            vars.user_id = 1;
            vars.comic_id = 1;
            vars.file_content_type = "image/jpeg";
            vars.file_file_name = filename;
            var rawBytes:ByteArray = new JPGEncoder(75).encode(bitmapdata);
            vars.picture = rawBytes;
            var request:URLRequest = new URLRequest(Data.SERVER_ADDR + "pictures/upload");
            request.method = URLRequestMethod.POST;
            request.data = vars;
            var loader:URLLoader = new URLLoader(request);
            loader.addEventListener(Event.COMPLETE, savePictureHandler);
            loader.addEventListener(IOErrorEvent.IO_ERROR, errorHandlerUpload);
            loader.load(request);
        }

    If I set the vars.picture URLVariable to the ByteArray, I get the error that the upload is nil. Here is the Rails part. The Picture model:

        require 'paperclip'
        class Picture < ActiveRecord::Base
          # relations from picture
          belongs_to :comic
          belongs_to :user
          has_many :picture_bubbles
          has_many :bubbles, :through => :picture_bubbles
          # attached file for picture upload -> with paperclip plugin
          has_attached_file :file, :path => "public/system/pictures/:basename.:extension"
        end

    And the pictures controller with the upload function:

        class PicturesController < ApplicationController
          protect_from_forgery :except => :upload
          def upload
            @picture = Picture.new(params[:picture])
            @picture.position = params[:position]
            @picture.comic_id = params[:comic_id]
            @picture.url = params[:url]
            @picture.user_id = params[:user_id]
            if @picture.save
              render(:nothing => true, :status => 200)
            else
              render(:nothing => true, :status => 500)
            end
          end
        end

    Does anyone know how to solve this problem? thx, tux

    Read the article

  • C# ASP.NET MVC + EF + PostgreSQL error 23505: Duplicate key violates unique constraint

    - by user2721755
    EDIT: It was an issue with the database table: dropping and recreating the table's id column did the trick. Problem solved.

    I'm trying to build a web application that is connected to a PostgreSQL database. Results are displayed in a view with Kendo UI. When I try to add a new row (with the Kendo UI 'Add new record' button), I get error 23505: 'Duplicate key violates unique constraint'. My guess is that EF takes the id to insert from the beginning of the sequence, not after the last existing one, because after 35 tries and errors (35 is the number of rows in the table), adding works perfectly. Can someone help me to understand what's wrong?

    Model:

        using System.ComponentModel.DataAnnotations;
        using System.ComponentModel.DataAnnotations.Schema;

        namespace MainConfigTest.Models
        {
            [Table("mainconfig", Schema = "public")]
            public class Mainconfig
            {
                [Column("id")]
                [Key]
                [Editable(false)]
                public int Id { get; set; }

                [Column("descr")]
                [Editable(true)]
                public string Descr { get; set; }

                [Column("hibversion")]
                [Required]
                [Editable(true)]
                public long Hibversion { get; set; }

                [Column("mckey")]
                [Required]
                [Editable(true)]
                public string Mckey { get; set; }

                [Column("valuexml")]
                [Editable(true)]
                public string Valuexml { get; set; }

                [Column("mcvalue")]
                [Editable(true)]
                public string Mcvalue { get; set; }
            }
        }

    Context:

        using System.Data.Entity;

        namespace MainConfigTest.Models
        {
            public class MainConfigContext : DbContext
            {
                public DbSet<Mainconfig> Mainconfig { get; set; }
            }
        }

    Controller:

        namespace MainConfigTest.Controllers
        {
            public class MainConfigController : Controller
            {
                #region Properties
                private Models.MainConfigContext db = new Models.MainConfigContext();
                private string mainTitle = "Mainconfig (Kendo UI)";
                #endregion

                #region Initialization
                public MainConfigController()
                {
                    ViewBag.MainTitle = mainTitle;
                }
                #endregion

                #region Ajax
                [HttpGet]
                public JsonResult GetMainconfig()
                {
                    int take = HttpContext.Request["take"] == null ? 5 : Convert.ToInt32(HttpContext.Request["take"]);
                    int skip = HttpContext.Request["skip"] == null ? 0 : Convert.ToInt32(HttpContext.Request["skip"]);
                    Array data = (from Models.Mainconfig c in db.Mainconfig select c)
                        .OrderBy(c => c.Id).ToArray().Skip(skip).Take(take).ToArray();
                    return Json(new Models.MainconfigResponse(data, db.Mainconfig.Count()), JsonRequestBehavior.AllowGet);
                }

                [HttpPost]
                public JsonResult Create()
                {
                    try
                    {
                        Mainconfig itemToAdd = new Mainconfig()
                        {
                            Descr = Convert.ToString(HttpContext.Request["Descr"]),
                            Hibversion = Convert.ToInt64(HttpContext.Request["Hibversion"]),
                            Mckey = Convert.ToString(HttpContext.Request["Mckey"]),
                            Valuexml = Convert.ToString(HttpContext.Request["Valuexml"]),
                            Mcvalue = Convert.ToString(HttpContext.Request["Mcvalue"])
                        };
                        db.Mainconfig.Add(itemToAdd);
                        db.SaveChanges();
                        return Json(new { Success = true });
                    }
                    catch (InvalidOperationException ex)
                    {
                        return Json(new { Success = false, msg = ex });
                    }
                }

                // other methods
                #endregion
            }
        }

    Kendo UI script in view:

        <script type="text/javascript">
            $(document).ready(function () {
                $("#config-grid").kendoGrid({
                    sortable: true,
                    pageable: true,
                    scrollable: false,
                    toolbar: ["create"],
                    editable: { mode: "popup" },
                    dataSource: {
                        pageSize: 5,
                        serverPaging: true,
                        transport: {
                            read: {
                                url: '@Url.Action("GetMainconfig")',
                                dataType: "json"
                            },
                            update: {
                                url: '@Url.Action("Update")',
                                type: "Post",
                                dataType: "json",
                                complete: function (e) {
                                    $("#config-grid").data("kendoGrid").dataSource.read();
                                }
                            },
                            destroy: {
                                url: '@Url.Action("Delete")',
                                type: "Post",
                                dataType: "json"
                            },
                            create: {
                                url: '@Url.Action("Create")',
                                type: "Post",
                                dataType: "json",
                                complete: function (e) {
                                    $("#config-grid").data("kendoGrid").dataSource.read();
                                }
                            },
                        },
                        error: function (e) {
                            if (e.Success == false) {
                                this.cancelChanges();
                            }
                        },
                        schema: {
                            data: "Data",
                            total: "Total",
                            model: {
                                id: "Id",
                                fields: {
                                    Id: { editable: false, nullable: true },
                                    Descr: { type: "string" },
                                    Hibversion: { type: "number", validation: { required: true } },
                                    Mckey: { type: "string", validation: { required: true } },
                                    Valuexml: { type: "string" },
                                    Mcvalue: { type: "string" }
                                }
                            }
                        }
                    }, // end dataSource
                    // generate columns etc.

    Mainconfig table structure:

        id serial NOT NULL,
        descr character varying(200),
        hibversion bigint NOT NULL,
        mckey character varying(100) NOT NULL,
        valuexml character varying(8000),
        mcvalue character varying(200),
        CONSTRAINT mainconfig_pkey PRIMARY KEY (id),
        CONSTRAINT mainconfig_mckey_key UNIQUE (mckey)

    Any help will be appreciated.
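    A side note on the failure mode described above: it matches a PostgreSQL serial sequence that has fallen out of sync with the table, for example after rows were inserted with explicit ids. Rather than dropping and recreating the id column, the sequence can usually be realigned once. A minimal sketch, assuming an EF6-style DbContext like the one above and PostgreSQL's default serial sequence naming:

        using (var db = new MainConfigTest.Models.MainConfigContext())
        {
            // Point the id sequence at the current maximum so the next
            // INSERT gets MAX(id) + 1 instead of colliding with existing rows.
            db.Database.ExecuteSqlCommand(
                "SELECT setval(pg_get_serial_sequence('mainconfig', 'id'), " +
                "(SELECT COALESCE(MAX(id), 1) FROM mainconfig))");
        }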

    Read the article

  • Problem with socket communication between C# and Flex

    - by Chris Lee
    Hi all, I am implementing a simulated b/s stock data system. I am using Flex and C# for the client and server sides. I found that Flash has a security policy, and I handled the policy-file-request in my server code. But it doesn't seem to work, because the code jumps out at "socket.Receive(b)" after the connection. I've tried sending a message from the client in the connection handler; in that case the server can receive the correct message. But the auto-generated "policy-file-request" is never received, and the client gets no data sent from the server. Here I put my code snippets. My ActionScript code:

        public class StockClient extends Sprite {
            private var hostName:String = "192.168.84.103";
            private var port:uint = 55555;
            private var socket:XMLSocket;

            public function StockClient() {
                socket = new XMLSocket();
                configureListeners(socket);
                socket.connect(hostName, port);
            }

            public function send(data:Object):void {
                socket.send(data);
            }

            private function configureListeners(dispatcher:IEventDispatcher):void {
                dispatcher.addEventListener(Event.CLOSE, closeHandler);
                dispatcher.addEventListener(Event.CONNECT, connectHandler);
                dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
                dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler);
                dispatcher.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
                dispatcher.addEventListener(ProgressEvent.SOCKET_DATA, dataHandler);
            }

            private function closeHandler(event:Event):void {
                trace("closeHandler: " + event);
            }

            private function connectHandler(event:Event):void {
                trace("connectHandler: " + event);
                // following testing message can be received, but client can't invoke data handler
                //send("<policy-file-request/>");
            }

            private function dataHandler(event:ProgressEvent):void {
                // never fired
                trace("dataHandler: " + event);
            }

            private function ioErrorHandler(event:IOErrorEvent):void {
                trace("ioErrorHandler: " + event);
            }

            private function progressHandler(event:ProgressEvent):void {
                trace("progressHandler loaded:" + event.bytesLoaded + " total: " + event.bytesTotal);
            }

            private function securityErrorHandler(event:SecurityErrorEvent):void {
                trace("securityErrorHandler: " + event);
            }
        }

    My C# code:

        const int PORT_NUMBER = 55555;
        const String BEGIN_REQUEST = "begin";
        const String END_REQUEST = "end";
        const String POLICY_REQUEST = "<policy-file-request/>\u0000";
        const String POLICY_FILE = "<?xml version=\"1.0\"?>\n" +
            "<!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\">\n" +
            "<cross-domain-policy> \n" +
            " <allow-access-from domain=\"*\" to-ports=\"55555\"/> \n" +
            "</cross-domain-policy>\u0000";

        ................

        private void startListening()
        {
            provider = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            provider.Bind(new IPEndPoint(IPAddress.Parse("192.168.84.103"), PORT_NUMBER));
            provider.Listen(10);
            isListened = true;
            while (isListened)
            {
                Socket socket = provider.Accept();
                Console.WriteLine("connect!");
                byte[] b = new byte[1024];
                int receiveLength = 0;
                try
                {
                    // code jumps out at this statement
                    receiveLength = socket.Receive(b);
                }
                catch (Exception e)
                {
                    Debug.WriteLine(e.ToString());
                }
                String request = System.Text.Encoding.UTF8.GetString(b, 0, receiveLength);
                Console.WriteLine("request:" + request);
                if (request == POLICY_REQUEST)
                {
                    socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE));
                    Console.WriteLine("response:" + POLICY_FILE);
                }
                else if (request == END_REQUEST)
                {
                    Dispose(socket);
                }
                else
                {
                    StartSocket(socket);
                    break;
                }
            }
        }

    Sorry for the long code; please help with it, thanks a million.
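    Two details are worth checking in setups like this. Depending on the Flash Player version, the socket policy request may be sent to port 843 rather than to the data port, in which case the Receive call on the data port never sees it. And when the request does arrive, it is terminated with a null byte, so an exact string comparison can silently fail depending on how the buffer is decoded. A minimal, hypothetical variant of the server-side check that tolerates the terminator (requires using System.Net.Sockets; and using System.Text;):

        // Sketch under assumptions: reads one message from an accepted socket and
        // answers a Flash policy request if that is what arrived. policyFile is
        // the null-terminated POLICY_FILE string from the post above.
        private static bool TryServePolicyFile(Socket socket, string policyFile)
        {
            byte[] buffer = new byte[1024];
            int received = socket.Receive(buffer);
            // Flash terminates "<policy-file-request/>" with a null byte, so trim it
            string request = Encoding.UTF8.GetString(buffer, 0, received).TrimEnd('\0');
            if (request != "<policy-file-request/>")
                return false;
            socket.Send(Encoding.UTF8.GetBytes(policyFile));
            socket.Shutdown(SocketShutdown.Both); // the player closes once it has read the policy
            socket.Close();
            return true;
        }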

    Read the article

  • How can I pass extra parameters to the routeMatch object?

    - by Marcos Garcia
    I'm trying to unit test a controller, but can't figure out how to pass some extra parameters to the routeMatch object. I followed the posts from tomoram at http://devblog.x2k.co.uk/unit-testing-a-zend-framework-2-controller/ and http://devblog.x2k.co.uk/getting-the-servicemanager-into-the-test-environment-and-dependency-injection/, but when I try to dispatch a request to /album/edit/1, for instance, it throws the following exception:

        Zend\Mvc\Exception\DomainException: Url plugin requires that controller event compose a router; none found

    Here is my PHPUnit bootstrap:

        class Bootstrap
        {
            static $serviceManager;
            static $di;

            static public function go()
            {
                include 'init_autoloader.php';
                $config = include 'config/application.config.php';
                // append some testing configuration
                $config['module_listener_options']['config_static_paths'] = array(getcwd() . '/config/test.config.php');
                // append some module-specific testing configuration
                if (file_exists(__DIR__ . '/config/test.config.php')) {
                    $moduleConfig = include __DIR__ . '/config/test.config.php';
                    array_unshift($config['module_listener_options']['config_static_paths'], $moduleConfig);
                }
                $serviceManager = Application::init($config)->getServiceManager();
                self::$serviceManager = $serviceManager;
                // Setup Di
                $di = new Di();
                $di->instanceManager()->addTypePreference('Zend\ServiceManager\ServiceLocatorInterface', 'Zend\ServiceManager\ServiceManager');
                $di->instanceManager()->addTypePreference('Zend\EventManager\EventManagerInterface', 'Zend\EventManager\EventManager');
                $di->instanceManager()->addTypePreference('Zend\EventManager\SharedEventManagerInterface', 'Zend\EventManager\SharedEventManager');
                self::$di = $di;
            }

            static public function getServiceManager()
            {
                return self::$serviceManager;
            }

            static public function getDi()
            {
                return self::$di;
            }
        }
        Bootstrap::go();

    Basically, we are creating a Zend\Mvc\Application environment. My PHPUnit_Framework_TestCase is enclosed in a custom class, which goes like this:

        abstract class ControllerTestCase extends TestCase
        {
            /**
             * The ActionController we are testing
             *
             * @var Zend\Mvc\Controller\AbstractActionController
             */
            protected $controller;

            /**
             * A request object
             *
             * @var Zend\Http\Request
             */
            protected $request;

            /**
             * A response object
             *
             * @var Zend\Http\Response
             */
            protected $response;

            /**
             * The matched route for the controller
             *
             * @var Zend\Mvc\Router\RouteMatch
             */
            protected $routeMatch;

            /**
             * An MVC event to be assigned to the controller
             *
             * @var Zend\Mvc\MvcEvent
             */
            protected $event;

            /**
             * The Controller fully qualified domain name, so each ControllerTestCase can create an instance
             * of the tested controller
             *
             * @var string
             */
            protected $controllerFQDN;

            public function setup()
            {
                parent::setup();
                $di = \Bootstrap::getDi();
                // Create a Controller and set some properties
                $this->controller = $di->newInstance($this->controllerFQDN);
                $this->request = new Request();
                $this->routeMatch = new RouteMatch(array('controller' => $this->controllerRoute));
                $this->event = new MvcEvent();
                $this->event->setRouteMatch($this->routeMatch);
                $this->controller->setEvent($this->event);
                $this->controller->setServiceLocator(\Bootstrap::getServiceManager());
            }

            public function tearDown()
            {
                parent::tearDown();
                unset($this->controller);
                unset($this->request);
                unset($this->routeMatch);
                unset($this->event);
            }
        }

    And we create a Controller instance and a Request with a RouteMatch. The code for the test:

        public function testEditActionWithGetRequest()
        {
            // Dispatch the edit action
            $this->routeMatch->setParam('action', 'edit');
            $this->routeMatch->setParam('id', $album->id);
            $result = $this->controller->dispatch($this->request, $this->response);
            // rest of the code isn't executed
        }

    I'm not sure what I'm missing here. Could it be some configuration for the testing bootstrap? Or should I pass the parameters in some other way? Or am I forgetting to instantiate something?

    Read the article

  • JSP error: why is my data not inserted into the database?

    - by lekhni
    <%@ page language="java" import="java.util.*" pageEncoding="ISO-8859-1"%> <%@ page import="java.sql.*" %> <%@ page import="java.io.*"%> <%@ page import="java.io.File"%> <%@ page import="java.util.*"%> <%@ page import="java.sql.*"%> <%@ page import="java.sql.Blob"%> <%@ page import="java.sql.PreparedStatement"%> <%@ page import="java.sql.BatchUpdateException"%> <%@ page import="javax.servlet.*"%> <%@ page import="javax.servlet.http.*"%> <% String path = request.getContextPath(); String basePath = request.getScheme()+"://"+request.getServerName()+":"+request.getServerPort()+path+"/"; %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <base href="<%=basePath%>"> <title>My JSP 'p.jsp' starting page</title> <meta http-equiv="pragma" content="no-cache"> <meta http-equiv="cache-control" content="no-cache"> <meta http-equiv="expires" content="0"> <meta http-equiv="keywords" content="keyword1,keyword2,keyword3"> <meta http-equiv="description" content="This is my page"> <!-- <link rel="stylesheet" type="text/css" href="styles.css"> --> </head> <body> <% int activityId = Integer.parseInt(request.getParameter("activityId")); String name = request.getParameter("name"); String activityType = request.getParameter("activityType"); int parentId = Integer.parseInt(request.getParameter("parentId")); String description = request.getParameter("description"); %> <% Connection conn=null; PreparedStatement pstatement = null; try{ Class.forName("com.mysql.jdbc.Driver"); conn=DriverManager.getConnection("jdbc:mysql://localhost:3306/pol","root","1234"); Statement st=conn.createStatement(); String queryString = "INSERT INTO activity(activityId,name,activityType,parentId,description)" + " VALUES (?, ?, ?, ?, ?)"; pstatement = conn.prepareStatement(queryString); pstatement.setInt(1, activityId); pstatement.setString(2, name); pstatement.setString(3, activityType); pstatement.setInt(4, parentId); pstatement.setString(5, description); } catch (Exception ex) { out.println("Unable to connect to batabase."); } finally { pstatement.close(); conn.close(); } %> </body> </html>

    Read the article

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that a lot of the time this will, by default, be done inside a different process. I am using StructureMap as my DI of choice, and one of the tools which I am using in line with RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that ObjectFactory will only apply to the process it belongs to. This is where I started thinking about two things:

    • Running a WCF service in process
    • Being able to share mocked instances across processes

    A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects which I use when I think about WCF services is AGATHA, and it is the one I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy; if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view:

        <system.serviceModel>
          <services>
            <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
              <endpoint address="net.pipe://localhost/MyPipe"
                        binding="netNamedPipeBinding"
                        contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
            </service>
          </services>
          <client>
            <endpoint name="MyEndpoint"
                      address="net.pipe://localhost/MyPipe"
                      binding="netNamedPipeBinding"
                      contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
          </client>
        </system.serviceModel>

    You can see here that I am referencing the Agatha object and contract, but also that my binding and address use something called named pipes. This is sort of the "magic" which makes it happen in the same process. Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific code, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer should be aware of. So we need to add to the known types of the proxy programmatically. I came across the following blog post which showed me how easy that is: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx

    First Pass

    So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made, making use of the Agatha IWcfRequestProcessor contract:

        public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor
        {
            public InProcProxy() { }

            public InProcProxy(string configurationName) : base(configurationName) { }

            public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
            {
                return Channel.Process(requests);
            }

            public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
            {
                Channel.ProcessOneWayRequests(requests);
            }
        }

    So with the proxy in place I could use it after opening the service. Here is the code I use inside the console app to make the request:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");
            using (var proxy = new InProcProxy())
            {
                foreach (var operation in proxy.Endpoint.Contract.Operations)
                {
                    foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                    {
                        operation.KnownTypes.Add(t);
                    }
                }
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }
            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    So what I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returns 2 products:

        public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
        {
            public override Agatha.Common.Response Handle(GetProductsRequest request)
            {
                return new GetProductsResponse
                {
                    Products = new List<ProductDto>
                    {
                        new ProductDto {},
                        new ProductDto {}
                    }
                };
            }
        }

    Second Pass

    After I did this I started reading up on some more resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, due to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use it, I am now left with the following:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");
            using (var proxy = new RequestProcessorProxy())
            {
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }
            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    Cheers for now,
    Andy

    References
    • Agatha WCF
    • InProcess Without WCF
    • StructureMap.AutoMocking
    • Cross Process Mocking
    • Agatha
    • Programming WCF Services by Juval Lowy
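    As a footnote tying this back to the integration-testing motivation in the opening paragraph: with the host and proxy in the same process, a test fixture can configure the same StructureMap container the service layer resolves from. The following is only a sketch of that shape; the NUnit attributes and the stubbed repository type are illustrative assumptions, not part of Agatha:

        [TestFixture]
        public class InProcessServiceTests
        {
            private ServiceHost _host;

            [SetUp]
            public void StartHost()
            {
                ComponentRegistration.Register();
                _host = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
                _host.Open();
            }

            [TearDown]
            public void StopHost()
            {
                _host.Close();
            }

            [Test]
            public void Returns_the_stubbed_products()
            {
                // Because service and test share a process, fakes injected here are
                // visible to the service layer, e.g. (hypothetical repository type):
                // ObjectFactory.Inject<IProductRepository>(stubbedRepository);
                using (var proxy = new RequestProcessorProxy())
                {
                    var response = (GetProductsResponse)proxy.Process(new[] { new GetProductsRequest() })[0];
                    Assert.AreEqual(2, response.Products.Count);
                }
            }
        }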

    Read the article

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

        /// <summary>
        /// Current instance of this class which should always be used to
        /// access this object. There are no public constructors to
        /// ensure the reference is used as a Singleton to further
        /// ensure that all scripts are written to the same clientscript
        /// manager.
        /// </summary>
        public static ClientScriptProxy Current
        {
            get
            {
                if (HttpContext.Current == null)
                    return new ClientScriptProxy();

                ClientScriptProxy proxy = null;
                if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                    proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
                else
                {
                    proxy = new ClientScriptProxy();
                    HttpContext.Current.Items[STR_CONTEXTID] = proxy;
                }
                return proxy;
            }
        }

    The proxy is attached to a Context.Items[] item, which makes the instance request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the transferred request and the original request share the same Context.Items collection. For the ClientScriptProxy this causes a problem because script references are tracked on a per-request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and the script is considered 'rendered':

        // No dupes - ref script include only once
        if (HttpContext.Current.Items.Contains(STR_SCRIPTITEM_IDENTITIFIER + fileId))
            return;
        HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    where the fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context, and so the script fails to render because the code thinks the script has already been rendered, based on the Context item. Bummer. The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine if a script is in a Transfer or Execute call:

        if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler, and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler, and chain through the whole list of handlers applied if Transfer calls are nested (God help the person debugging that). For the ClientScriptProxy the full logic to check for a transfer and remove the keys looks like this:

        /// <summary>
        /// Clears all the request specific context items which are script references
        /// and the script placement index.
        /// </summary>
        public void ClearContextItemsOnTransfer()
        {
            if (HttpContext.Current != null)
            {
                // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
                if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
                {
                    List<string> Keys = HttpContext.Current.Items.Keys
                        .Cast<string>()
                        .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) || s == STR_ScriptResourceIndex)
                        .ToList();
                    foreach (string key in Keys)
                    {
                        HttpContext.Current.Items.Remove(key);
                    }
                }
            }
        }

    along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

        if (!proxy.IsTransferred && HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
        {
            proxy.ClearContextItemsOnTransfer();
            proxy.IsTransferred = true;
        }
        return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works but required changes throughout the code to make it work. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods on this component that are not Page specific, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there. So for the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

        /// <summary>
        /// Returns a current instance of this control if an instance
        /// is already loaded on the page. Otherwise a new instance is
        /// created, added to the Form and returned.
        ///
        /// It's important this function is not called too early in the
        /// page cycle - it should not be called before Page.OnInit().
        ///
        /// This property is the preferred way to get a reference to a
        /// ScriptContainer control that is either already on a page
        /// or needs to be created. Controls in particular should always
        /// use this property.
        /// </summary>
        public static ScriptContainer Current
        {
            get
            {
                // We need a context for this to work!
                if (HttpContext.Current == null)
                    return null;

                Page page = HttpContext.Current.CurrentHandler as Page;
                if (page == null)
                    throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

                ScriptContainer ctl = null;

                // Retrieve the current instance
                ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
                if (ctl != null)
                    return ctl;

                ctl = new ScriptContainer();
                page.Form.Controls.Add(ctl);
                return ctl;
            }
        }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.
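    To make that takeaway concrete, here is a small sketch, in the spirit of the ScriptContainer code above, of a generic per-request singleton keyed on Page.Items; the name PageSingleton is invented for illustration and is not part of the toolkit:

        using System.Web;
        using System.Web.UI;

        // Illustrative sketch only: because each Server.Transfer target page
        // gets a fresh Items collection, no stale state leaks across transfers.
        public static class PageSingleton<T> where T : class, new()
        {
            private static readonly object _key = new object();

            public static T Current
            {
                get
                {
                    Page page = HttpContext.Current != null
                        ? HttpContext.Current.CurrentHandler as Page
                        : null;

                    // No page context: hand back a throwaway instance
                    if (page == null)
                        return new T();

                    T instance = page.Items[_key] as T;
                    if (instance == null)
                    {
                        instance = new T();
                        page.Items[_key] = instance;
                    }
                    return instance;
                }
            }
        }

    Usage inside page-bound code would then be var proxy = PageSingleton<ClientScriptProxy>.Current; with no special handling needed for transfers.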
    Server.Transfer and Server.Execute are Evil

    All that said, this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute :-}. There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands…

    Related Resources
    • Full source of ClientScriptProxy.cs (repository)
    • Part of the West Wind Web Toolkit
    • Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET

    Read the article

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I needed to construct a complex type from a source of a NameValueCollection. A little while back I submitted a patch for the Agatha project to include REST (JSON and XML) support for the service contract. I realized today that, as useful as it is, it does not actually achieve true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly, meaning you can query cross-domain services. My original implementation for POX and JSON used the POST method, and this immediately rules out JSONP because, from what I have read, JSONP only works with GET requests. This then raised another issue. The current operation contract of Agatha, and one of its main benefits, is that you can supply an array of Request objects in a single request, limiting the number of server requests you need to make. At the present time I am thinking that this will not be the case for the REST implementation, but it will yield these benefits:

    • The same Request objects can be used for SOAP and REST (POX, JSON)
    • The construct of the JavaScript functions will be simpler and more readable
    • It will enable the use of JSONP for cross-domain REST services

    The current contract for the Agatha WcfRequestProcessor is, at time of writing, the following:

        [ServiceContract]
        public interface IWcfRequestProcessor
        {
            [OperationContract(Name = "ProcessRequests")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            Response[] Process(params Request[] requests);

            [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            void ProcessOneWayRequests(params OneWayRequest[] requests);
        }

    My current proposed solution, at the very early stages of my concept, is as follows:

        [ServiceContract]
        public interface IWcfRestJsonRequestProcessor
        {
            [OperationContract(Name = "process")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            [WebGet(UriTemplate = "process/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            Response[] Process(string name, NameValueCollection parameters);

            [OperationContract(Name = "processoneway", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            void ProcessOneWayRequests(string name, NameValueCollection parameters);
        }

    Now this part I have not yet implemented; it is the preliminary step which I have developed, which allows me to take the name of the request type and the NameValueCollection and construct the complex type which is that of the request, which I can then supply to a nested instance of the original IWcfRequestProcessor so it works as it normally would. To give an example, some of the URLs which I envisage with this method are:

    http://www.url.com/service.svc/json/process/getweather/?location=london
    http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1
    http://www.url.com/service.svc/json/process/sayhello/?name=andy

    Another reason why my direction has gone to a single request for the REST implementation is the restrictions which browsers impose on the length of the URL. From what I have read this is on average 2000 characters. I think that this is a very acceptable usage limit in the context of a single request, but I do not think it is acceptable for accommodating multiple requests chained together. I would love to be corrected on that one, I really would, but unfortunately from what I have read I have come to the conclusion that this is not the case.

    The mapping function

    So, as I say, this is just the first pass I have made at this, and I am not overly happy with the try/catch for detecting types without default constructors. I know there is a better way but for the minute it escapes me. I would also like to know the correct way of adding mapping functions, rather than the anonymous way I have used. To achieve this I have used recursion, which I am sure is what other mapping functions use, as you have to go as deep as the complex type is.

        public static object RecurseType(NameValueCollection collection, Type type, string prefix)
        {
            try
            {
                var returnObject = Activator.CreateInstance(type);
                foreach (var property in type.GetProperties())
                {
                    foreach (var key in collection.AllKeys)
                    {
                        if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length)
                        {
                            var propertyNameToMatch = String.IsNullOrEmpty(prefix)
                                ? key
                                : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1);
                            if (property.Name == propertyNameToMatch)
                            {
                                property.SetValue(returnObject, Convert.ChangeType(collection.Get(key), property.PropertyType), null);
                            }
                            else if (property.GetValue(returnObject, null) == null)
                            {
                                property.SetValue(returnObject, RecurseType(collection, property.PropertyType, String.Concat(prefix, property.PropertyType.Name)), null);
                            }
                        }
                    }
                }
                return returnObject;
            }
            catch (MissingMethodException)
            {
                // Quite a blunt way of dealing with types without a default constructor
                return null;
            }
        }

    Another thing is performance. I have not measured this in any way; it is, as I say, a first pass, so I hope this can be the start of a more perfected implementation. I tested this out with a complex type of three levels; there is no intended logical meaning to the properties, they are simply for the purposes of example. You could call this a spiking session, as from here on in, now that I know what I am building, I would take a more TDD approach. (OK, purists, why did I not do this from the start? Well, I didn't; this was a brain dump, and now that I know what I am building, I can.) The console test, and how I used this with AutoMapper, is as follows:

        static void Main(string[] args)
        {
            var collection = new NameValueCollection();
            collection.Add("Name", "Andrew Rea");
            collection.Add("Number", "1");
            collection.Add("AddressLine1", "123 Street");
            collection.Add("AddressNumber", "2");
            collection.Add("AddressPostCodeCountry", "United Kingdom");
            collection.Add("AddressPostCodeNumber", "3");

            AutoMapper.Mapper.CreateMap<NameValueCollection, Person>()
                .ConvertUsing(x =>
                {
                    return (Person)RecurseType(x, typeof(Person), null);
                });

            var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection);

            Console.WriteLine(person.Name);
            Console.WriteLine(person.Number);
            Console.WriteLine(person.Address.Line1);
            Console.WriteLine(person.Address.Number);
            Console.WriteLine(person.Address.PostCode.Country);
            Console.WriteLine(person.Address.PostCode.Number);
            Console.ReadLine();
        }

    Notice the convention that I am using, and that this method requires you to use: each property is prefixed with the combined names of its parents. This is the convention used by AutoMapper and it makes sense.
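    On the open question above about registering the mapping function less anonymously: AutoMapper also accepts a dedicated converter type via ConvertUsing<T>(). A minimal sketch, assuming the ITypeConverter signature from the AutoMapper versions contemporary with the static Mapper API used here:

        // Hypothetical named converter wrapping RecurseType, so the mapping
        // logic lives in a reusable, testable class instead of a lambda.
        public class NameValueCollectionToPersonConverter : ITypeConverter<NameValueCollection, Person>
        {
            public Person Convert(ResolutionContext context)
            {
                var collection = (NameValueCollection)context.SourceValue;
                return (Person)RecurseType(collection, typeof(Person), null);
            }
        }

        // Registration then becomes:
        // Mapper.CreateMap<NameValueCollection, Person>()
        //     .ConvertUsing<NameValueCollectionToPersonConverter>();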
    I can also think of other uses for this, including use with ASP.NET MVC model binders for creating a complex type from the QueryString, which is itself a NameValueCollection. Hope this is of some help to people, and I would welcome any code reviews you could give me.

    References:
    Agatha: http://code.google.com/p/agatha-rrsl/
    AutoMapper: http://automapper.codeplex.com/

    Cheers for now,
    Andrew

    P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon.

    Read the article

  • How can UnrealScript halt event handler execution after an arbitrary number of lines with no return or error?

    - by Dan Cowell
    I have created a class that extends TcpLink and is instantiated in a custom Kismet Sequence Action. It is being instantiated correctly and is making the GET HTTP request that I need it to (I have checked my access log in Apache), and Apache is responding to the request with the appropriate content. The problem I have is that I'm using the event receive mode, and it appears that somehow the handler for the Opened event is halted after a specific number of lines of code have executed. Here is my code for the Opened event:

        event Opened()
        {
            // A connection was established
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened");
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query");

            // The HTTP GET request
            // chr(13) and chr(10) are carriage returns and new lines
            requesttext = "userId="$userId$"&apartmentId="$apartmentId;
            SendText("GET /"$path$"?"$requesttext$" HTTP/1.0");
            SendText(chr(13)$chr(10));
            SendText("Host: "$TargetHost);
            SendText(chr(13)$chr(10));
            SendText("Connection: Close");
            SendText(chr(13)$chr(10)$chr(13)$chr(10));

            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query");
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError()));
        }

    As you can see, a number of the Broadcast calls have been commented out. Initially, none of the Broadcasts were commented out, and only the lines up to the Broadcast containing "[DNomad_TcpLinkClient] Sent request: " were being executed. After commenting out that line, the next Broadcast was successful, and so on and so forth. As a test, I commented out the very first Broadcast to see if it had any effect:

        // A connection was established
        //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened");
        WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query");

    Upon doing that, an additional Broadcast at the end of the function executed; thus the inference that there is an upper limit to the number of lines executed. Additionally, my ReceivedText handler is never called, despite Apache returning the correct HTTP 200 response with a body. My working hypothesis is that somehow, after the Sequence Action finishes executing, the garbage collector cleans up the TcpLinkClient instance. My biggest source of confusion with that is how on earth it does so during the execution of an event handler. Has anyone ever seen anything like this before? My full TcpLinkClient class is below:

        /*
         * TcpLinkClient based on an example usage of the TcpLink class by Michiel 'elmuerte' Hendriks for Epic Games, Inc.
         */
        class DNomad_TcpLinkClient extends TcpLink;

        var PlayerController PC;
        var string TargetHost;
        var int TargetPort;
        var string path;
        var string requesttext;
        var string userId;
        var string apartmentId;
        var string statusCode;
        var string responseData;

        event PostBeginPlay()
        {
            super.PostBeginPlay();
        }

        function DoTcpLinkRequest(string uid, string id) // removes having to send a host
        {
            userId = uid;
            apartmentId = id;
            Resolve(targethost);
        }

        function string GetStatus()
        {
            return statusCode;
        }

        event Resolved(IpAddr Addr)
        {
            // The hostname was resolved successfully
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] "$TargetHost$" resolved to "$IpAddrToString(Addr));
            // Make sure the correct remote port is set; resolving doesn't set
            // the port value of the IpAddr structure
            Addr.Port = TargetPort;
            // don't comment out this log because it runs the function BindPort
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Bound to port: "$BindPort());
            if (!Open(Addr))
            {
                WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Open failed");
            }
        }

        event ResolveFailed()
        {
            WorldInfo.Game.Broadcast(self, "[TcpLinkClient] Unable to resolve "$TargetHost);
            // You could retry resolving here if you have an alternative remote host.
            // send failed message to scaleform UI
            //JunHud(JunPlayerController(PC).myHUD).JunMovie.CallSetHTML("Failed");
        }

        event Opened()
        {
            // A connection was established
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened");
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query");

            // The HTTP GET request
            // chr(13) and chr(10) are carriage returns and new lines
            requesttext = "userId="$userId$"&apartmentId="$apartmentId;
            SendText("GET /"$path$"?"$requesttext$" HTTP/1.0");
            SendText(chr(13)$chr(10));
            SendText("Host: "$TargetHost);
            SendText(chr(13)$chr(10));
            SendText("Connection: Close");
            SendText(chr(13)$chr(10)$chr(13)$chr(10));

            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query");
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError()));
        }

        event Closed()
        {
            // In this case the remote client should have automatically closed
            // the connection, because we requested it in the HTTP request.
            WorldInfo.Game.Broadcast(self, "Connection closed.");
            // After the connection was closed we could establish a new
            // connection using the same TcpLink instance.
        }

        event ReceivedText(string Text)
        {
            WorldInfo.Game.Broadcast(self, "Received Text: "$Text);
            // we don't want the header info, so we split the string after two new lines
            Text = Split(Text, chr(13)$chr(10)$chr(13)$chr(10), true);
            WorldInfo.Game.Broadcast(self, "Split Text: "$Text);
            statusCode = Text;
        }

        event ReceivedLine(string Line)
        {
            WorldInfo.Game.Broadcast(self, "Received Line: "$Line);
        }

        event ReceivedBinary(int Count, byte B[255])
        {
            WorldInfo.Game.Broadcast(self, "Received Binary of length: "$Count);
        }

        defaultproperties
        {
            TargetHost="127.0.0.1"
            TargetPort=80 // default for HTTP
            LinkMode=MODE_Text
            ReceiveMode=RMODE_Event
            path="dnomad/datafeed.php"
            userId="0"
            apartmentId="0"
            statusCode=""
            send=false
        }

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components.

    Three-Tier Client/Server Model Components
    • Client Component
    • Server Component
    • Database Component

    The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web enabled device connected to a network. Network clients request resources on the network and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients; an example of this can be seen in a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process.

    The Server Components of the network return data, based on specific client requests, back to the requesting client. Server Components also inherit the attributes of a Client Component in that they are devices on the network and can also request information from other Server Components. What differentiates a Client Component from a Server Component, however, is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets the request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user to interpret.

    The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other Server Components and Database Components, and it also listens for new requests so that it can return data when needed.

    The three-tier client/server model is very similar to the three-tier application architecture model; in fact, the layers of one can be mapped to the components of the other.

    Three-Tier Application Architecture Model
    • Presentation Layer/Logic
    • Business Layer/Logic
    • Data Layer/Logic

    The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that data can be provided to clients with higher availability. A good example of this is a web site that uses load balancing. When a web site takes on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balanced device, they are redirected to a specific server based on a few factors.

    Common Load Balancing Factors
    • Current Server Availability
    • Current Server Response Time
    • Current Server Priority

    The Business Layer and its corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address: the Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer (a sketch of such a rule follows at the end of this entry). The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before the result is returned to the end user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer, so that additional business analysis can be derived from the data that has already been collected.

    The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in most modern applications, data is stored in a database management system; however, data can also take the form of files stored on a file server. In addition, a database can take on one of several forms.

    Common Database Formats
    • XML File
    • Pipe Delimited File
    • Tab Delimited File
    • Comma Delimited File (CSV)
    • Plain Text File
    • Microsoft Access
    • Microsoft SQL Server
    • MySql
    • Oracle
    • Sybase

    The Database Component of the networking model can be directly tied to the Data Layer, because this is where the Data Layer obtains the data to return to the Business Layer. The Database Component basically provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed, so that the application does not have to retain all data in memory and incur the overhead that would create.

    As you can see, the Three-Tier Client/Server Networking Model and the Three-Tier Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network.

    Network Components and Application Layers Interaction
    • Database Components store all data needed for the Data Access Layer to manipulate and return to the Business Layer
    • The Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer
    • The Client Component hosts the Presentation Layer that interprets the data and presents it to the user
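    To make the unique-email example above concrete, here is a minimal sketch of the three layers in C#; all class and method names are invented for illustration:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Data Layer: storage and retrieval only, no business rules.
        public class AccountRepository
        {
            private readonly List<string> _emails = new List<string>();

            public bool EmailExists(string email)
            {
                return _emails.Contains(email, StringComparer.OrdinalIgnoreCase);
            }

            public void Save(string email)
            {
                _emails.Add(email);
            }
        }

        // Business Layer: enforces the unique-email rule before anything is stored.
        public class AccountService
        {
            private readonly AccountRepository _repository;

            public AccountService(AccountRepository repository)
            {
                _repository = repository;
            }

            public bool Register(string email)
            {
                if (_repository.EmailExists(email))
                    return false; // rule violated: duplicate account rejected
                _repository.Save(email);
                return true;
            }
        }

        // Presentation Layer: only renders the outcome for the user.
        public static class Program
        {
            public static void Main()
            {
                var service = new AccountService(new AccountRepository());
                Console.WriteLine(service.Register("user@example.com")); // True
                Console.WriteLine(service.Register("user@example.com")); // False (duplicate)
            }
        }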

    Read the article

  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
    Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS based application. I build several tools for application configuration and installation, and depending on which version of IIS is available, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7, but this didn't work in prior versions, and so on. Along the same lines I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application prior to when the Request object is available. So it's useful to know which version of IIS you can possibly expect. This should be easy, right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number - you can detect installation, but not which version is installed.

The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]

The easiest way to determine the IIS version number is if you are already running inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like Microsoft-IIS/7.5 returned to you. It's a cinch to parse this to retrieve the version number. This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not the most likely use case: most of the time you need to know the IIS version when you are configuring or installing your application, before any request is running.

The messy way: Match Windows OS Versions to IIS Versions

Since version 5.x, IIS versions have always been tied very closely to the operating system, meaning the only way to get a specific version of IIS was through the OS - you couldn't install another version of IIS on a given OS. Microsoft has a page that describes the OS version to IIS version relationship here: http://support.microsoft.com/kb/224609

In .NET you can then sniff the OS version and, based on that, return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

/// <summary>
/// Returns the IIS version for the given Operating System.
/// Note this routine doesn't check to see if IIS is installed,
/// it just returns the version of IIS that should run on the OS.
///
/// Returns the value from Request.ServerVariables["SERVER_SOFTWARE"]
/// if available. Otherwise uses OS sniffing to determine the OS version
/// and returns the matching IIS version instead.
/// </summary>
/// <returns>version number or -1</returns>
public static decimal GetIisVersion()
{
    // if running inside of IIS parse the SERVER_SOFTWARE key
    // This would be most reliable
    if (HttpContext.Current != null && HttpContext.Current.Request != null)
    {
        string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
        if (!string.IsNullOrEmpty(os))
        {
            // Microsoft-IIS/7.5
            int dash = os.LastIndexOf("/");
            if (dash > 0)
            {
                decimal iisVer = 0M;
                if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                    return iisVer;
            }
        }
    }

    // Build the OS version as Major.Minor (e.g. 6.1 for Windows 7).
    // Note: Version.Minor is used here; Version.MajorRevision returns 0
    // on RTM builds of these OS versions and would misreport the version.
    decimal osVer = (decimal)Environment.OSVersion.Version.Major +
                    ((decimal)Environment.OSVersion.Version.Minor / 10);

    // Windows 7 and Win2008 R2
    if (osVer == 6.1M)
        return 7.5M;
    // Windows Vista and Windows 2008
    else if (osVer == 6.0M)
        return 7.0M;
    // Windows 2003 and XP 64 bit
    else if (osVer == 5.2M)
        return 6.0M;
    // Windows XP
    else if (osVer == 5.1M)
        return 5.1M;
    // Windows 2000
    else if (osVer == 5.0M)
        return 5.0M;

    // error result
    return -1M;
}

Talk about a brute force approach, but it works. This code goes only back to IIS 5 - anything before that is not something you possibly would want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, IIS
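For context, here is a hedged sketch of how an installer might branch on the result. The two configuration helpers are hypothetical placeholders, not part of any real API:

decimal iisVersion = GetIisVersion();

if (iisVersion >= 7.0M)
{
    // IIS 7 and later: application pools and 32/64 bit settings are configurable
    ConfigureApplicationPool();        // hypothetical helper
}
else if (iisVersion == 5.1M)
{
    // IIS 5.1 on Windows XP: no application pool support
    ConfigureVirtualDirectoryOnly();   // hypothetical helper
}
else if (iisVersion < 0)
{
    throw new InvalidOperationException("Unable to determine the IIS version.");
}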

    Read the article

  • Get the currently saved object in a view in Django

    - by mridang
    Hi, I have a Django view that is accessed through an AJAX call. It's a pretty simple one — all it does is pass the request to a form object and save the data. Here's a snippet from my view:

form = AddSiteForm(request.user, request.POST)
if form.is_valid():
    obj = form.save(commit=False)
    obj.user = request.user
    obj.save()
    data['status'] = 'success'
    data['html'] = render_to_string('site.html', locals(),
                                    context_instance=RequestContext(request))
return HttpResponse(simplejson.dumps(data), mimetype='application/json')

How do I get the currently saved object (including the internally generated id column) and pass it to the template? Any help guys? Mridang

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that serialization is handled. These are probably not super common, but they may throw you for a loop. Here's what I found.

Simple Parameters from XML or JSON Content

Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

public string ReturnAlbumInfo(Album album)
{
    return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
}

However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from the XML or JSON body by default, and you may end up with failures. Take the following two very simple methods:

public string ReturnString(string message)
{
    return message;
}

public HttpResponseMessage ReturnDateTime(DateTime time)
{
    return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
}

The first one accepts a string and can be called with a JSON string from the client like this:

var client = new HttpClient();
var result = client.PostAsJsonAsync<string>(
    "http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
    "Hello World").Result;

which results in a trace like this:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: rasxps
Content-Length: 13
Expect: 100-continue
Connection: Keep-Alive

"Hello World"

and produces… wait for it: null.

Sending a date in the same fashion:

var client = new HttpClient();
var result = client.PostAsJsonAsync<DateTime>(
    "http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
    new DateTime(2012, 1, 1)).Result;

results in this trace:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: rasxps
Content-Length: 30
Expect: 100-continue
Connection: Keep-Alive

"\/Date(1325412000000-1000)\/"

(yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM, with Json.NET used for client serialization) and produces an error response:

The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

Basically, simple parameters are not parsed from the body, resulting in null being passed to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution: use an explicit [FromBody] attribute:

public string ReturnString([FromBody] string message)

and

public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

which explicitly instructs Web API to read the value from the body.

UrlEncoded Form Variable Parsing

Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the query string and route values, but it doesn't explicitly map parameters from POSTed form values.

Taking our same ReturnString function from earlier and posting a message POST variable like this:

var formVars = new Dictionary<string, string>();
formVars.Add("message", "Some Value");
var content = new FormUrlEncodedContent(formVars);

var client = new HttpClient();
var result = client.PostAsync(
    "http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
    content).Result;

produces this trace:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: rasxps
Content-Length: 18
Expect: 100-continue

message=Some+Value

When calling ReturnString:

public string ReturnString(string message)
{
    return message;
}

unfortunately it does not map the message value to the message parameter. This sort of mapping is simply not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

public string ReturnMessageModel(MessageModel model)
{
    return model.Message;
}

public class MessageModel
{
    public string Message { get; set; }
}

Note that the model is bound and the message form variable is mapped to the Message property, as other form variables would be mapped to matching properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values, for that matter) from Web API's Request object as far as I can discern. Well, only if you consider this easy:

public string ReturnString()
{
    var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
    return formData.Get("message");
}

Oddly, FormDataCollection does not support indexers, so you have to use the .Get() method. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

public string ReturnString()
{
    return HttpContext.Current.Request.Form["message"];
}

which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data; given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API
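As an aside, one way to make that form-variable access a little less painful is to wrap it in an extension method. This is only a sketch built on the FormDataCollection approach shown above; the GetFormValue name is my own invention, not part of the Web API framework:

// requires: using System.Net.Http; using System.Net.Http.Formatting;
public static class HttpRequestMessageExtensions
{
    // Reads a single URL-encoded form variable from the request body,
    // or returns null if the body is empty or the variable isn't present.
    public static string GetFormValue(this HttpRequestMessage request, string key)
    {
        var formData = request.Content.ReadAsAsync<FormDataCollection>().Result;
        return formData == null ? null : formData.Get(key);
    }
}

A controller action could then read the value in one line:

public string ReturnString()
{
    return Request.GetFormValue("message");
}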

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically:

- Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere
- Replaceability — the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively
- Reusability — we'd like to be able to combine the different pieces of the system in different ways for different sites
- Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily, whether on production, staging servers, test servers or your own local machine
- Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development

In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.

    Read the article

  • client ip address in ASP.NET (.asmx) webservices

    - by Zain Shaikh
    I am using ASP.NET (.asmx) web services with Silverlight. Since there is no way to find the client IP address in Silverlight, I had to log it on the service end. These are some methods I have tried:

Request.ServerVariables["REMOTE_HOST"]
HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]
HttpContext.Current.Request.ServerVariables["HTTP_X_FORWARDED_FOR"]
Request.UserHostAddress()
Request.UserHostName()

string strHostName = System.Net.Dns.GetHostName();
string clientIPAddress = System.Net.Dns.GetHostAddresses(strHostName).GetValue(0).ToString();

All of the above methods work fine on my local system, but when I publish my service to the production server they start giving errors.

Error: Object reference not set to an instance of an object.

StackTrace:
at System.Web.Hosting.ISAPIWorkerRequestInProc.GetAdditionalServerVar(Int32 index)
at System.Web.Hosting.ISAPIWorkerRequestInProc.GetServerVariable(String name)
at System.Web.Hosting.ISAPIWorkerRequest.GetRemoteAddress()
at System.Web.HttpRequest.get_UserHostAddress()
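For what it's worth, a defensive sketch that guards against the null context and server variables implied by that stack trace might look like the following. Whether REMOTE_ADDR and HTTP_X_FORWARDED_FOR are actually populated depends entirely on the production server and any proxies in front of it:

// requires: using System.Web;
public static string GetClientIpAddress()
{
    HttpContext context = HttpContext.Current;
    if (context == null || context.Request == null)
        return null; // not running inside an HTTP request

    // X-Forwarded-For is only present when a proxy or load balancer adds it;
    // it may contain a comma-separated chain, with the client first.
    string forwardedFor = context.Request.ServerVariables["HTTP_X_FORWARDED_FOR"];
    if (!string.IsNullOrEmpty(forwardedFor))
        return forwardedFor.Split(',')[0].Trim();

    return context.Request.ServerVariables["REMOTE_ADDR"];
}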

    Read the article

  • Upload Image with Django Model Form

    - by jmitchel3
    I'm having difficulty uploading the following model with a model form. I can upload fine in the admin, but that's not all that useful for a project that limits admin access.

#models.py
class Profile(models.Model):
    name = models.CharField(max_length=128)
    user = models.ForeignKey(User)
    profile_pic = models.ImageField(upload_to='img/profile/%Y/%m/')

#views.py
def create_profile(request):
    try:
        profile = Profile.objects.get(user=request.user)
    except:
        pass
    form = CreateProfileForm(request.POST or None, instance=profile)
    if form.is_valid():
        new = form.save(commit=False)
        new.user = request.user
        new.save()
    return render_to_response('profile.html', locals(),
                              context_instance=RequestContext(request))

#profile.html
<form enctype="multipart/form-data" method="post">{% csrf_token %}
    <tr><td>{{ form.as_p }}</td></tr>
    <tr><td><button type="submit" class="btn">Submit</button></td></tr>
</form>

Note: All the other data in the form saves perfectly well; the photo does not upload at all. Thank you for your help!

    Read the article

  • Gigya Native Login without using Facebook and Twitter Account

    - by Wodjefer
    The following is the code I am using to log in, but I am getting a null response:

- (IBAction) signInPressed:(id)sender
{
    [super signInPressed:sender];
    NSLog(@"ACCLoginViewController_iPhone Sign-IN Pressed");

    // Load the Gigya login UI component, passing this View Controller as a delegate.
    GSRequest *request = [GSRequest requestForMethod:@"accounts.login"];
    [request.parameters setObject:self.emailField.text forKey:@"loginID"];
    [request.parameters setObject:self.passwordField.text forKey:@"password"];
    request.parameters[@"loginID"] = @"email";

    [request sendWithResponseHandler:^(GSResponse *response, NSError *error) {
        if (!error) {
            NSLog(@"the response = %@", response);
        }
        else {
            // Check the error code according to the GSErrorCode enum, and handle it.
            NSLog(@"the Error = %@", error.description);
        }
    }];
    // [self loadTabbar];
}

    Read the article

  • Django: Prefill a ManyToManyField

    - by Emile Petrone
    I have a ManyToManyField on a settings page that isn't rendering. The data was filled in when the user registered, and I am trying to prefill that data when the user tries to change it. Thanks in advance for the help!

The HTML:

{{ form.types.label }}
{% if add %}
    {{ form.types }}
{% else %}
    {% for type in form.types.all %}
        {{ type.description }}
    {% endfor %}
{% endif %}

The View:

@csrf_protect
@login_required
def edit_host(request, host_id, template_name="host/newhost.html"):
    host = get_object_or_404(Host, id=host_id)
    if request.user != host.user:
        return HttpResponseForbidden()
    form = HostForm(request.POST)
    if form.is_valid():
        if request.method == 'POST':
            if form.cleaned_data.get('about') is not None:
                host.about = form.cleaned_data.get('about')
            if form.cleaned_data.get('types') is not None:
                host.types = form.cleaned_data.get('types')
            host.save()
            form.save_m2m()
            return HttpResponseRedirect('/users/%d/' % host.user.id)
    else:
        form = HostForm(initial={
            "about": host.about,
            "types": host.types,
        })
    data = {
        "host": host,
        "form": form,
    }
    return render_to_response(template_name, data,
                              context_instance=RequestContext(request))

Form:

class HostForm(forms.ModelForm):
    class Meta:
        model = Host
        fields = ('types', 'about',)

    types = forms.ModelMultipleChoiceField(
        widget=forms.CheckboxSelectMultiple,
        queryset=Type.objects.all(),
        required=True)
    about = forms.CharField(
        widget=forms.Textarea(),
        required=True)

    def __init__(self, *args, **kwargs):
        super(HostForm, self).__init__(*args, **kwargs)
        self.fields['about'].widget.attrs = {'placeholder': 'Hello!'}

    Read the article

  • asp.net client/browser url

    - by Marcus King
    I'm wondering how I can get the URL from the browser in ASP.NET. I have a page that uses globalization/localization, and I am redirecting (via the server, not code) from www.spanishversion.com to www.englishversion.com, but the URL is masked to still say www.spanishversion.com. I want to get what the browser's URL is, but when I try things like:

Request.Url.ToString()
Request.Url.OriginalUrl
Request.Path
Request.RawUrl
Request.ServerVariables["SERVER_NAME"]

it always comes back as www.englishversion.com. Is there a way that I can explicitly read the URL from the browser? Thanks.
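As a hedged sketch of one thing worth checking: if the rewrite happens in a proxy in front of IIS, the server only ever sees what the proxy forwards, so the original host may survive only in a forwarding header, and only if the proxy sets one at all:

// The Host header as IIS itself received it:
string host = Request.Headers["Host"];

// Some proxies preserve the browser's original host in X-Forwarded-Host;
// whether it is set depends entirely on the proxy configuration.
string forwardedHost = Request.Headers["X-Forwarded-Host"];
if (!string.IsNullOrEmpty(forwardedHost))
    host = forwardedHost;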

    Read the article

< Previous Page | 96 97 98 99 100 101 102 103 104 105 106 107  | Next Page >