Search Results

Search found 17054 results on 683 pages for 'jms request reply'.

  • Uploading images from Flex to Rails using Paperclip

    - by 23tux
    Hi everyone, I'm looking for a way to upload images that were created in my Flex app to Rails. I've tried to use Paperclip, but it doesn't seem to work. I followed this tutorial: http://blog.alexonrails.net/?p=218 The problem is that the tutorial uses a FileReference to browse for files on the client's computer, calls its .upload(...) method and sends the data to the upload controller. But I'm using a URLLoader to upload an image that is modified in my Flex app. First, here is the code from the tutorial:

        private function selectHandler(event:Event):void {
            var vars:URLVariables = new URLVariables();
            var request:URLRequest = new URLRequest(uri);
            request.method = URLRequestMethod.POST;
            vars.description = "My Description";
            request.data = vars;
            var uploadDataFieldName:String = 'filemanager[file]';
            fileReference.upload(request, uploadDataFieldName);
        }

    I don't know how to set that uploadDataFieldName of 'filemanager[file]' on a URLLoader. I've got the image data compressed as a JPEG in a ByteArray. It looks like this:

        public function savePicture():void {
            var filename:String = "blubblub.jpg";
            var vars:URLVariables = new URLVariables();
            vars.position = layoutView.currentPicPosition;
            vars.url = filename;
            vars.user_id = 1;
            vars.comic_id = 1;
            vars.file_content_type = "image/jpeg";
            vars.file_file_name = filename;
            var rawBytes:ByteArray = new JPGEncoder(75).encode(bitmapdata);
            vars.picture = rawBytes;
            var request:URLRequest = new URLRequest(Data.SERVER_ADDR + "pictures/upload");
            request.method = URLRequestMethod.POST;
            request.data = vars;
            var loader:URLLoader = new URLLoader(request);
            loader.addEventListener(Event.COMPLETE, savePictureHandler);
            loader.addEventListener(IOErrorEvent.IO_ERROR, errorHandlerUpload);
            loader.load(request);
        }

    If I set the vars.picture URLVariable to the ByteArray, I get an error that the upload is nil. Here is the Rails part. The Picture model:

        require 'paperclip'

        class Picture < ActiveRecord::Base
          # relations from picture
          belongs_to :comic
          belongs_to :user
          has_many :picture_bubbles
          has_many :bubbles, :through => :picture_bubbles

          # attached file for picture upload -> with paperclip plugin
          has_attached_file :file, :path => "public/system/pictures/:basename.:extension"
        end

    ...and the pictures controller with the upload action:

        class PicturesController < ApplicationController
          protect_from_forgery :except => :upload

          def upload
            @picture = Picture.new(params[:picture])
            @picture.position = params[:position]
            @picture.comic_id = params[:comic_id]
            @picture.url = params[:url]
            @picture.user_id = params[:user_id]
            if @picture.save
              render(:nothing => true, :status => 200)
            else
              render(:nothing => true, :status => 500)
            end
          end
        end

    Does anyone know how to solve this problem? thx, tux
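
    Paperclip expects a real multipart file part (with a filename and content type), which FileReference.upload() produces automatically but URLVariables does not. A minimal sketch of the usual workaround, hand-building a multipart/form-data body and posting it with URLLoader; the boundary string and the picture[file] field name (derived from the has_attached_file :file declaration above) are assumptions:

        // Sketch: emulate FileReference.upload() by building the multipart body ourselves.
        private function buildUploadRequest(jpegBytes:ByteArray):URLRequest {
            var boundary:String = "----flexupload1234567890";
            var body:ByteArray = new ByteArray();
            body.writeUTFBytes("--" + boundary + "\r\n");
            body.writeUTFBytes('Content-Disposition: form-data; name="picture[file]"; filename="blubblub.jpg"\r\n');
            body.writeUTFBytes("Content-Type: image/jpeg\r\n\r\n");
            body.writeBytes(jpegBytes);                          // the raw JPEG data
            body.writeUTFBytes("\r\n--" + boundary + "--\r\n");  // closing boundary

            var request:URLRequest = new URLRequest(Data.SERVER_ADDR + "pictures/upload");
            request.method = URLRequestMethod.POST;
            request.contentType = "multipart/form-data; boundary=" + boundary;
            request.data = body; // a ByteArray is sent as the raw POST body
            return request;
        }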

  • C# asp.net EF MVC postgresql error 23505: Duplicate key violates unique constraint

    - by user2721755
    EDIT: It was an issue with the database table: dropping and recreating the id column did the trick. Problem solved. I'm trying to build a web application that is connected to a PostgreSQL database. Results are displayed in a view with Kendo UI. When I try to add a new row (with the Kendo UI 'Add new record' button), I get error 23505: 'Duplicate key violates unique constraint'. My guess is that EF takes the id to insert from the beginning rather than from the last row, because after 35 tries and errors (35 is the number of rows in the table), adding works perfectly. Can someone help me understand what's wrong? The model:

        using System.ComponentModel.DataAnnotations;
        using System.ComponentModel.DataAnnotations.Schema;

        namespace MainConfigTest.Models
        {
            [Table("mainconfig", Schema = "public")]
            public class Mainconfig
            {
                [Column("id")] [Key] [Editable(false)]
                public int Id { get; set; }

                [Column("descr")] [Editable(true)]
                public string Descr { get; set; }

                [Column("hibversion")] [Required] [Editable(true)]
                public long Hibversion { get; set; }

                [Column("mckey")] [Required] [Editable(true)]
                public string Mckey { get; set; }

                [Column("valuexml")] [Editable(true)]
                public string Valuexml { get; set; }

                [Column("mcvalue")] [Editable(true)]
                public string Mcvalue { get; set; }
            }
        }

    The context:

        using System.Data.Entity;

        namespace MainConfigTest.Models
        {
            public class MainConfigContext : DbContext
            {
                public DbSet<Mainconfig> Mainconfig { get; set; }
            }
        }

    The controller:

        namespace MainConfigTest.Controllers
        {
            public class MainConfigController : Controller
            {
                #region Properties
                private Models.MainConfigContext db = new Models.MainConfigContext();
                private string mainTitle = "Mainconfig (Kendo UI)";
                #endregion

                #region Initialization
                public MainConfigController()
                {
                    ViewBag.MainTitle = mainTitle;
                }
                #endregion

                #region Ajax
                [HttpGet]
                public JsonResult GetMainconfig()
                {
                    int take = HttpContext.Request["take"] == null ? 5 : Convert.ToInt32(HttpContext.Request["take"]);
                    int skip = HttpContext.Request["skip"] == null ? 0 : Convert.ToInt32(HttpContext.Request["skip"]);
                    Array data = (from Models.Mainconfig c in db.Mainconfig select c)
                        .OrderBy(c => c.Id).ToArray().Skip(skip).Take(take).ToArray();
                    return Json(new Models.MainconfigResponse(data, db.Mainconfig.Count()),
                                JsonRequestBehavior.AllowGet);
                }

                [HttpPost]
                public JsonResult Create()
                {
                    try
                    {
                        Mainconfig itemToAdd = new Mainconfig()
                        {
                            Descr = Convert.ToString(HttpContext.Request["Descr"]),
                            Hibversion = Convert.ToInt64(HttpContext.Request["Hibversion"]),
                            Mckey = Convert.ToString(HttpContext.Request["Mckey"]),
                            Valuexml = Convert.ToString(HttpContext.Request["Valuexml"]),
                            Mcvalue = Convert.ToString(HttpContext.Request["Mcvalue"])
                        };
                        db.Mainconfig.Add(itemToAdd);
                        db.SaveChanges();
                        return Json(new { Success = true });
                    }
                    catch (InvalidOperationException ex)
                    {
                        return Json(new { Success = false, msg = ex });
                    }
                }

                // other methods
                #endregion
            }
        }

    The Kendo UI script in the view:

        <script type="text/javascript">
            $(document).ready(function () {
                $("#config-grid").kendoGrid({
                    sortable: true,
                    pageable: true,
                    scrollable: false,
                    toolbar: ["create"],
                    editable: { mode: "popup" },
                    dataSource: {
                        pageSize: 5,
                        serverPaging: true,
                        transport: {
                            read: { url: '@Url.Action("GetMainconfig")', dataType: "json" },
                            update: {
                                url: '@Url.Action("Update")', type: "Post", dataType: "json",
                                complete: function (e) {
                                    $("#config-grid").data("kendoGrid").dataSource.read();
                                }
                            },
                            destroy: { url: '@Url.Action("Delete")', type: "Post", dataType: "json" },
                            create: {
                                url: '@Url.Action("Create")', type: "Post", dataType: "json",
                                complete: function (e) {
                                    $("#config-grid").data("kendoGrid").dataSource.read();
                                }
                            },
                        },
                        error: function (e) {
                            if (e.Success == false) {
                                this.cancelChanges();
                            }
                        },
                        schema: {
                            data: "Data",
                            total: "Total",
                            model: {
                                id: "Id",
                                fields: {
                                    Id: { editable: false, nullable: true },
                                    Descr: { type: "string" },
                                    Hibversion: { type: "number", validation: { required: true } },
                                    Mckey: { type: "string", validation: { required: true } },
                                    Valuexml: { type: "string" },
                                    Mcvalue: { type: "string" }
                                }
                            }
                        }
                    } // end dataSource
                    // generate columns etc.

    The Mainconfig table structure:

        id serial NOT NULL,
        descr character varying(200),
        hibversion bigint NOT NULL,
        mckey character varying(100) NOT NULL,
        valuexml character varying(8000),
        mcvalue character varying(200),
        CONSTRAINT mainconfig_pkey PRIMARY KEY (id),
        CONSTRAINT mainconfig_mckey_key UNIQUE (mckey)

    Any help will be appreciated.
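
    For what it's worth, the symptom described (inserts failing exactly as many times as there are existing rows) matches a serial sequence that is out of sync with the data, for example after rows were imported with explicit ids. Rather than recreating the column, re-syncing the sequence is usually enough; a sketch, assuming the table and column names from the post:

        -- Point the id sequence at the current maximum, so the next insert gets MAX(id)+1.
        SELECT setval(pg_get_serial_sequence('mainconfig', 'id'),
                      (SELECT COALESCE(MAX(id), 1) FROM mainconfig));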

  • Serializing Complex ViewModel with Json.Net Destabilization Error on Latest Version

    - by dreadlocks1221
    I just added the latest version of Json.NET, and I get the error "System.Security.VerificationException: Operation could destabilize the runtime" when trying to use a controller (while running the application). I read in other posts that this issue should have been fixed in release 6, but I still have the problem. I even added Newtonsoft.* to the ignored modules in the IntelliTrace options, which seems to have suppressed the error, but the post will just run forever and not return anything. Any help I can get would be greatly appreciated.

        [HttpPost]
        public string GetComments(int ShowID, int Page)
        {
            int PageSize = 10;
            UserRepository UserRepo = new UserRepository();
            ShowCommentViewModel viewModel = new ShowCommentViewModel();
            IQueryable<Comment> CommentQuery = showRepository.GetShowComments(ShowID);
            var paginatedComments = new PaginatedList<Comment>(CommentQuery, Page, PageSize);
            viewModel.Comments = new List<CommentViewModel>();
            foreach (Comment comment in CommentQuery.Take(10).ToList())
            {
                CommentViewModel CommentModel = new CommentViewModel
                {
                    Comment = comment,
                    PostedBy = UserRepo.GetUserProfile(comment.UserID)
                };
                IQueryable<Comment> ReplyQuery = showRepository.GetShowCommentReplies(comment.CommentID);
                int ReplyPage = 0;
                var paginatedReplies = new PaginatedList<Comment>(ReplyQuery, ReplyPage, 3);
                CommentModel.Replies = new List<ReplyModel>();
                foreach (Comment reply in ReplyQuery.Take(3).ToList())
                {
                    ReplyModel rModel = new ReplyModel
                    {
                        Reply = reply,
                        PostedBy = UserRepo.GetUserProfile(reply.UserID)
                    };
                    CommentModel.Replies.Add(rModel);
                }
                CommentModel.RepliesNextPage = paginatedReplies.HasNextPage;
                CommentModel.RepliesPeviousPage = paginatedReplies.HasPreviousPage;
                CommentModel.RepliesTotalPages = paginatedReplies.TotalPages;
                CommentModel.RepliesPageIndex = paginatedReplies.PageIndex;
                CommentModel.RepliesTotalCount = paginatedReplies.TotalCount;
                viewModel.Comments.Add(CommentModel);
            }
            viewModel.CommentsNextPage = paginatedComments.HasNextPage;
            viewModel.CommentsPeviousPage = paginatedComments.HasPreviousPage;
            viewModel.CommentsTotalPages = paginatedComments.TotalPages;
            viewModel.CommentsPageIndex = paginatedComments.PageIndex;
            viewModel.CommentsTotalCount = paginatedComments.TotalCount;
            return JsonConvert.SerializeObject(viewModel, Formatting.Indented);
        }

  • Problem with socket communication between C# and Flex

    - by Chris Lee
    Hi all, I am implementing a simulated browser/server stock data system. I am using Flex and C# for the client and server sides. I found that Flash has a security policy, and I handled the policy-file-request in my server code. But it doesn't seem to work, because the code stops at "socket.Receive(b)" after the connection is made. I've tried sending a message from the client in the connection handler; in that case the server receives the correct message. But the auto-generated "policy-file-request" is never received, and the client gets no data sent from the server. Here is my code snippet. My ActionScript code:

        public class StockClient extends Sprite {
            private var hostName:String = "192.168.84.103";
            private var port:uint = 55555;
            private var socket:XMLSocket;

            public function StockClient() {
                socket = new XMLSocket();
                configureListeners(socket);
                socket.connect(hostName, port);
            }

            public function send(data:Object) : void {
                socket.send(data);
            }

            private function configureListeners(dispatcher:IEventDispatcher):void {
                dispatcher.addEventListener(Event.CLOSE, closeHandler);
                dispatcher.addEventListener(Event.CONNECT, connectHandler);
                dispatcher.addEventListener(IOErrorEvent.IO_ERROR, ioErrorHandler);
                dispatcher.addEventListener(ProgressEvent.PROGRESS, progressHandler);
                dispatcher.addEventListener(SecurityErrorEvent.SECURITY_ERROR, securityErrorHandler);
                dispatcher.addEventListener(ProgressEvent.SOCKET_DATA, dataHandler);
            }

            private function closeHandler(event:Event):void {
                trace("closeHandler: " + event);
            }

            private function connectHandler(event:Event):void {
                trace("connectHandler: " + event);
                // the following test message can be received by the server,
                // but the client's data handler is never invoked
                //send("<policy-file-request/>");
            }

            private function dataHandler(event:ProgressEvent):void {
                // never fired
                trace("dataHandler: " + event);
            }

            private function ioErrorHandler(event:IOErrorEvent):void {
                trace("ioErrorHandler: " + event);
            }

            private function progressHandler(event:ProgressEvent):void {
                trace("progressHandler loaded:" + event.bytesLoaded + " total: " + event.bytesTotal);
            }

            private function securityErrorHandler(event:SecurityErrorEvent):void {
                trace("securityErrorHandler: " + event);
            }
        }

    My C# code:

        const int PORT_NUMBER = 55555;
        const String BEGIN_REQUEST = "begin";
        const String END_REQUEST = "end";
        const String POLICY_REQUEST = "<policy-file-request/>\u0000";
        const String POLICY_FILE = "<?xml version=\"1.0\"?>\n" +
            "<!DOCTYPE cross-domain-policy SYSTEM \"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd\">\n" +
            "<cross-domain-policy> \n" +
            "    <allow-access-from domain=\"*\" to-ports=\"55555\"/> \n" +
            "</cross-domain-policy>\u0000";

        // ...

        private void startListening()
        {
            provider = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            provider.Bind(new IPEndPoint(IPAddress.Parse("192.168.84.103"), PORT_NUMBER));
            provider.Listen(10);
            isListened = true;
            while (isListened)
            {
                Socket socket = provider.Accept();
                Console.WriteLine("connect!");
                byte[] b = new byte[1024];
                int receiveLength = 0;
                try
                {
                    // code blocks at this statement
                    receiveLength = socket.Receive(b);
                }
                catch (Exception e)
                {
                    Debug.WriteLine(e.ToString());
                }
                String request = System.Text.Encoding.UTF8.GetString(b, 0, receiveLength);
                Console.WriteLine("request:" + request);
                if (request == POLICY_REQUEST)
                {
                    socket.Send(Encoding.UTF8.GetBytes(POLICY_FILE));
                    Console.WriteLine("response:" + POLICY_FILE);
                }
                else if (request == END_REQUEST)
                {
                    Dispose(socket);
                }
                else
                {
                    StartSocket(socket);
                    break;
                }
            }
        }

    Sorry for the long code. Please, someone help with it, thanks a million.
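
    One hedged observation: Flash Player only sends the automatic <policy-file-request/> to a location it knows about, and by default it looks for a master socket policy on port 843 rather than on the application port. A possible fix, assuming the server above keeps serving the policy on 55555, is to point the player at that port explicitly before connecting:

        // Sketch: tell Flash Player to fetch the socket policy from our port
        // (otherwise it queries port 843 and the app socket never sees the request).
        import flash.system.Security;

        Security.loadPolicyFile("xmlsocket://192.168.84.103:55555");
        socket.connect(hostName, port);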

  • How can I pass extra parameters to the routeMatch object?

    - by Marcos Garcia
    I'm trying to unit test a controller, but I can't figure out how to pass some extra parameters to the RouteMatch object. I followed the posts from tomoram at http://devblog.x2k.co.uk/unit-testing-a-zend-framework-2-controller/ and http://devblog.x2k.co.uk/getting-the-servicemanager-into-the-test-environment-and-dependency-injection/, but when I try to dispatch a request to /album/edit/1, for instance, it throws the following exception:

        Zend\Mvc\Exception\DomainException: Url plugin requires that controller event compose a router; none found

    Here is my PHPUnit bootstrap:

        class Bootstrap
        {
            static $serviceManager;
            static $di;

            static public function go()
            {
                include 'init_autoloader.php';
                $config = include 'config/application.config.php';

                // append some testing configuration
                $config['module_listener_options']['config_static_paths'] = array(getcwd() . '/config/test.config.php');

                // append some module-specific testing configuration
                if (file_exists(__DIR__ . '/config/test.config.php')) {
                    $moduleConfig = include __DIR__ . '/config/test.config.php';
                    array_unshift($config['module_listener_options']['config_static_paths'], $moduleConfig);
                }

                $serviceManager = Application::init($config)->getServiceManager();
                self::$serviceManager = $serviceManager;

                // Setup Di
                $di = new Di();
                $di->instanceManager()->addTypePreference('Zend\ServiceManager\ServiceLocatorInterface', 'Zend\ServiceManager\ServiceManager');
                $di->instanceManager()->addTypePreference('Zend\EventManager\EventManagerInterface', 'Zend\EventManager\EventManager');
                $di->instanceManager()->addTypePreference('Zend\EventManager\SharedEventManagerInterface', 'Zend\EventManager\SharedEventManager');
                self::$di = $di;
            }

            static public function getServiceManager()
            {
                return self::$serviceManager;
            }

            static public function getDi()
            {
                return self::$di;
            }
        }

        Bootstrap::go();

    Basically, we are creating a Zend\Mvc\Application environment. My PHPUnit_Framework_TestCase is enclosed in a custom class, which goes like this:

        abstract class ControllerTestCase extends TestCase
        {
            /**
             * The ActionController we are testing
             * @var Zend\Mvc\Controller\AbstractActionController
             */
            protected $controller;

            /**
             * A request object
             * @var Zend\Http\Request
             */
            protected $request;

            /**
             * A response object
             * @var Zend\Http\Response
             */
            protected $response;

            /**
             * The matched route for the controller
             * @var Zend\Mvc\Router\RouteMatch
             */
            protected $routeMatch;

            /**
             * An MVC event to be assigned to the controller
             * @var Zend\Mvc\MvcEvent
             */
            protected $event;

            /**
             * The Controller fully qualified domain name, so each ControllerTestCase
             * can create an instance of the tested controller
             * @var string
             */
            protected $controllerFQDN;

            /**
             * The route to the controller, as defined in the configuration files
             * @var string
             */
            protected $controllerRoute;

            public function setup()
            {
                parent::setup();
                $di = \Bootstrap::getDi();

                // Create a Controller and set some properties
                $this->controller = $di->newInstance($this->controllerFQDN);
                $this->request = new Request();
                $this->routeMatch = new RouteMatch(array('controller' => $this->controllerRoute));
                $this->event = new MvcEvent();
                $this->event->setRouteMatch($this->routeMatch);
                $this->controller->setEvent($this->event);
                $this->controller->setServiceLocator(\Bootstrap::getServiceManager());
            }

            public function tearDown()
            {
                parent::tearDown();
                unset($this->controller);
                unset($this->request);
                unset($this->routeMatch);
                unset($this->event);
            }
        }

    And we create a Controller instance and a Request with a RouteMatch. The code for the test:

        public function testEditActionWithGetRequest()
        {
            // Dispatch the edit action
            $this->routeMatch->setParam('action', 'edit');
            $this->routeMatch->setParam('id', $album->id);
            $result = $this->controller->dispatch($this->request, $this->response);
            // rest of the code isn't executed
        }

    I'm not sure what I'm missing here. Could it be some configuration for the testing bootstrap? Should I pass the parameters in some other way? Or am I forgetting to instantiate something?
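
    A hedged guess at the missing piece: the exception says the controller event composes no router, and the setup() above never sets one. In ZF2 controller tests the router is typically built from the application's route configuration and attached to the MvcEvent; a sketch, assuming the standard 'router' config key:

        // In ControllerTestCase::setup(), before dispatching:
        $config = \Bootstrap::getServiceManager()->get('Config');
        $router = \Zend\Mvc\Router\Http\TreeRouteStack::factory($config['router']);
        $this->event->setRouter($router);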

  • reddit style voting with django

    - by dotty
    Hey, I need a hand implementing a voting system in a model. I've had a huge helping hand from Mike DeSimone making this work in the first place, but I need to expand upon his work. Here is my current code. The view:

        def show_game(request):
            game = Game.objects.get(pk=1)
            discussions = game.gamediscussion_set.filter(reply_to=None)
            d = {
                'game': game,
                'discussions': discussions
            }
            return render_to_response('show_game', d)

    The template:

        <ul>
        {% for discussion in discussions %}
            {{ discussion.html }}
        {% endfor %}
        </ul>

    The model:

        class GameDiscussion(models.Model):
            game = models.ForeignKey(Game)
            message = models.TextField()
            reply_to = models.ForeignKey('self', related_name='replies', null=True, blank=True)
            created_on = models.DateTimeField(blank=True, auto_now_add=True)
            userUpVotes = models.ManyToManyField(User, blank=True, related_name='threadUpVotes')
            userDownVotes = models.ManyToManyField(User, blank=True, related_name='threadDownVotes')

            def html(self):
                DiscussionTemplate = loader.get_template("inclusions/discussionTemplate")
                return DiscussionTemplate.render(Context({
                    'discussion': self,
                    'replies': [reply.html() for reply in self.replies.all()]
                }))

    The DiscussionTemplate:

        <li>
            {{ discussion.message }}
            {% if replies %}
            <ul>
                {% for reply in replies %}
                    {{ reply }}
                {% endfor %}
            </ul>
            {% endif %}
        </li>

    As you can see, we have two fields, userUpVotes and userDownVotes, on the model; these will determine how to order the discussions and replies. How would I use these two fields to order the replies and discussions based on votes? Any help would be great!
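
    One way to express this ordering, sketched with the ORM's aggregation support; the distinct=True flags matter because two many-to-many joins multiply rows otherwise. Treat this as an untested sketch against the model above:

        from django.db.models import Count

        discussions = (
            game.gamediscussion_set
                .filter(reply_to=None)
                .annotate(
                    up=Count('userUpVotes', distinct=True),
                    down=Count('userDownVotes', distinct=True),
                )
        )
        # Sort by net score (up minus down), highest first.
        discussions = sorted(discussions, key=lambda d: d.up - d.down, reverse=True)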

  • Sending and receiving IM messages via controller in Rails

    - by Grnbeagle
    Hi, I need a way to handle XMPP communication in my Rails app. My requirements are:

    - Keep an instance of an XMPP client running and logged in as one specific user (my bot user)
    - Trigger an event from a controller to send a message and wait for a reply. The message is sent to another machine equipped with a bot, so the reply should come back quickly.

    I installed xmpp4r and backgrounDrb similar to what's described here, but backgrounDrb seems to have evolved and I couldn't get it to wait for a reply. If it has to happen asynchronously, I am willing to use a server-push technology to notify the browser when the reply arrives. To give you a better idea, here are snippets of my code. In the controller:

        class ServicesController < ApplicationController
          layout 'simple'

          def index
            render :text => "index"
          end

          def show
            @my_service = Service.find(params[:id])
            worker = MiddleMan.worker(:jabber_agent_worker)
            worker.send_request(:arg => {:jid => "someuser@someserver", :cmd => "help"})
            render :text => "testing"
          end
        end

    In the worker script:

        require 'xmpp4r'
        require 'logger'

        class JabberAgentWorker < BackgrounDRb::MetaWorker
          set_worker_name :jabber_agent_worker

          def create(args = nil)
            jid = Jabber::JID.new('myagent@myserver')
            @client = Jabber::Client.new(jid)
            @client.connect
            @client.auth('pass')
            @client.send(Jabber::Presence.new.set_show(:chat).set_status('BackgrounDRb'))
            @client.add_message_callback do |message|
              logger.info("**** message received: #{message}")  # never reaches here
            end
          end

          def send_request(args = nil)
            to_jid = Jabber::JID.new(args[:jid])
            message = Jabber::Message::new(to_jid, args[:cmd]).set_type(:normal).set_id('1')
            @client.send(message)
          end
        end

    If anyone can tell me any of the following, I'd much appreciate it: the issue with my backgrounDrb usage; other background-process alternatives appropriate for XMPP interactions; other ways of achieving this. Thanks in advance.
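
    For the "send a message and wait for a reply" requirement specifically, a common xmpp4r pattern is to hand the message callback a Queue and block on it with a timeout. A rough sketch, independent of backgrounDrb (names are illustrative):

        require 'xmpp4r'
        require 'thread'
        require 'timeout'

        # Sketch: synchronous request/reply over XMPP via a blocking Queue.
        replies = Queue.new
        @client.add_message_callback do |msg|
          replies << msg unless msg.body.nil?
        end

        # Blocks the caller until the bot answers, or raises Timeout::Error.
        def send_and_wait(client, replies, to_jid, cmd, seconds = 10)
          client.send(Jabber::Message.new(to_jid, cmd).set_type(:normal))
          Timeout.timeout(seconds) { replies.pop }
        end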

  • Does QThread::sleep() require the event loop to be running?

    - by suszterpatt
    I have a simple client-server program written in Qt, where processes communicate using MPI. The basic design I'm trying to implement is the following: the first process (the "server") launches a GUI (derived from QMainWindow) which listens for messages from the clients (using repeat-fire QTimers and asynchronous MPI receive calls), updates the GUI depending on what messages it receives, and sends a reply to every message. Every other process (the "clients") runs in an infinite loop, and all they are intended to do is send a message to the server process, receive the reply, go to sleep for a while, then wake up and repeat. Every process instantiates a single object derived from QThread and calls its start() method. The run() methods of these classes all look like this:

        // from foo.cpp
        void Foo::run()
        {
            while (true) {
                // Send message to the first process
                // Wait for a reply
                // Do uninteresting stuff with the reply
                sleep(3); // also tried QThread::sleep(3)
            }
        }

    In the clients' code there is no call to exec() anywhere, so no event loop should start. The problem is that the clients never wake up from sleeping (if I surround the sleep() call with two writes to a log file, only the first one is executed; control never reaches the second). Is this because I didn't start the event loop? And if so, what is the simplest way to achieve the desired functionality?

  • apache web server configuration problem

    - by mohit
    I want Apache to serve only the /var/www/ directory; right now it serves all the files on my system, from "/" down. I tried to edit httpd.conf in /etc/apache2 and placed the following content in it (initially it was empty):

        <Directory />
            Options None
            AllowOverride None
        </Directory>

        DocumentRoot "/var/www"

        <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

    I then saved it and restarted Apache, but when I put the location /var/www in the browser's address bar, it still shows the higher-level directories too. I then edited the default and default-ssl files in the sites-available folder and repeated the same process; Apache still serves all the files on my system.

    2. When I try to use the command "gedit httpd.conf" I get the error:

        (gedit:2696): EggSMClient-WARNING **: Failed to connect to the session manager: None of the authentication protocols specified are supported
        GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://projects.gnome.org/gconf/ for information.
        (Details - 1: Failed to get connection to session: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.)
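
    A hedged note on the first problem: on Debian and Ubuntu, /etc/apache2/httpd.conf is an empty include by design, and the DocumentRoot that actually wins is the one in the enabled virtual host, typically /etc/apache2/sites-available/default. A sketch of the relevant part of that file, restricting Apache to /var/www:

        <VirtualHost *:80>
            DocumentRoot /var/www
            <Directory />
                Options None
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    The gedit error, for what it's worth, is typical of launching a GUI editor through plain sudo; gksudo gedit (or a terminal editor) usually avoids it.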

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question.

    The project and problem. The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways the database structure can be improved to facilitate efficient queries and help me develop this application (and future applications) faster.

    Business logic. The bulletin board will be used in the following way.

    Posting bulletins and responses to bulletins: employees or 'users' in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised; I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletins; I'll call these 'replies'.

    Rating bulletins and replies: users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.

    Viewing the bulletin board and responses: bulletins can be displayed chronologically. Users can sort bulletins chronologically, or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically. A sketch of the "latest reply" ordering is given after this post.

    @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model, because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of Physical model
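
    For the "chronologically by the latest reply" view mentioned above, the usual shape of the query is a LEFT JOIN plus GROUP BY; a sketch with hypothetical table and column names (the real model may differ):

        -- Hypothetical schema: bulletin(id, created_at), reply(id, bulletin_id, created_at).
        SELECT b.id,
               COALESCE(MAX(r.created_at), b.created_at) AS last_activity
        FROM   bulletin b
               LEFT JOIN reply r ON r.bulletin_id = b.id
        GROUP  BY b.id, b.created_at
        ORDER  BY last_activity DESC;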

  • JSP error - please check this code: why is my data not inserted into the database?

    - by lekhni
        <%@ page language="java" import="java.util.*" pageEncoding="ISO-8859-1"%>
        <%@ page import="java.sql.*" %>
        <%@ page import="java.io.*"%>
        <%@ page import="java.io.File"%>
        <%@ page import="java.sql.Blob"%>
        <%@ page import="java.sql.PreparedStatement"%>
        <%@ page import="java.sql.BatchUpdateException"%>
        <%@ page import="javax.servlet.*"%>
        <%@ page import="javax.servlet.http.*"%>
        <%
        String path = request.getContextPath();
        String basePath = request.getScheme()+"://"+request.getServerName()+":"+request.getServerPort()+path+"/";
        %>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
          <head>
            <base href="<%=basePath%>">
            <title>My JSP 'p.jsp' starting page</title>
            <meta http-equiv="pragma" content="no-cache">
            <meta http-equiv="cache-control" content="no-cache">
            <meta http-equiv="expires" content="0">
            <meta http-equiv="keywords" content="keyword1,keyword2,keyword3">
            <meta http-equiv="description" content="This is my page">
            <!-- <link rel="stylesheet" type="text/css" href="styles.css"> -->
          </head>
          <body>
            <%
            int activityId = Integer.parseInt(request.getParameter("activityId"));
            String name = request.getParameter("name");
            String activityType = request.getParameter("activityType");
            int parentId = Integer.parseInt(request.getParameter("parentId"));
            String description = request.getParameter("description");
            %>
            <%
            Connection conn = null;
            PreparedStatement pstatement = null;
            try {
                Class.forName("com.mysql.jdbc.Driver");
                conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/pol", "root", "1234");
                Statement st = conn.createStatement();
                String queryString = "INSERT INTO activity(activityId,name,activityType,parentId,description)"
                        + " VALUES (?, ?, ?, ?, ?)";
                pstatement = conn.prepareStatement(queryString);
                pstatement.setInt(1, activityId);
                pstatement.setString(2, name);
                pstatement.setString(3, activityType);
                pstatement.setInt(4, parentId);
                pstatement.setString(5, description);
            } catch (Exception ex) {
                out.println("Unable to connect to database.");
            } finally {
                pstatement.close();
                conn.close();
            }
            %>
          </body>
        </html>
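
    The most likely reason nothing is inserted: the statement is prepared and all five parameters are bound, but it is never executed. A minimal sketch of the missing step, placed right after the last setString call:

        // Actually run the INSERT; without this the database is never touched.
        int rows = pstatement.executeUpdate();
        out.println(rows + " row(s) inserted.");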

  • PHP - Drilling down Data and Looping with Loops

    - by stogdilla
    I'm currently having difficulty making my loops work. I have a table of data with 15-minute values, and I need the data to be available in a few different increments:

        $filters = Array('Yrs','Qtr','Day','60','30','15');

    I think I have a way of finding out what I need to be able to drill down to. The issue I'm having comes after the first loop that cycles through all the outermost values. For example: the user says they want to display by hours, so each hour should have a "+" that adds a new div to display the half-hour data, then each half-hour should have a "+" to display the 15-minute data on request. Now I could just hand-code the output for each level (six different outputs) just in case... but isn't there a way I can make it do the drill-down for each level in a loop? So that I only have to code one output and have it check whether there are any more intervals after it, and check those too? I'm sure I'm overlooking some very simple way of doing this; sorry in advance if this is a simple solution. I guess the best way to think of it is as replies on a forum: how you would check whether it's a reply to a reply, and then whether that reply has any replies... and so on, for output. Can anyone help, or at least point me in the right direction? Or am I stuck coding each possible check? Thanks in advance!
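
    The reply-of-a-reply shape the question describes is classic recursion: one function that renders a level and calls itself for the children. A sketch with hypothetical array keys (not tied to the actual table):

        <?php
        // Sketch: render nested intervals (or replies) to any depth.
        // $byParent maps a parent id to the list of its child rows.
        function renderLevel(array $byParent, $parentId, $depth = 0)
        {
            if (empty($byParent[$parentId])) {
                return;
            }
            foreach ($byParent[$parentId] as $row) {
                printf("<div style=\"margin-left:%dpx\">%s [+]</div>\n",
                       $depth * 20, htmlspecialchars($row['label']));
                renderLevel($byParent, $row['id'], $depth + 1); // recurse into children
            }
        }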

  • How to merge ArrayList elements?

    - by tiendv
    I have a string, for example "Can somebody provide an algorithm sample code in your reply". After tokenizing and doing some work on the string, I have an ArrayList like:

        ArrayList<Token> arl = [ "Can somebody provide ", "code in your ", "somebody provide an algorith", "in your reply" ]

    For each element I know the start and end positions in the original string:

        "Can somebody provide "         start = 1,  end = 3
        "code in your "                 start = 7,  end = 10
        "somebody provide an algorith"  start = 7,  end = 10
        "in your reply"                 start = 11, end = 14

    As you can see, some elements in arl overlap: "Can somebody provide ", "code in your ", "somebody provide an algorith". The problem is how to merge the overlapping elements so that I receive an ArrayList like:

        ArrayList result = [ "Can somebody provide an algorithm sample code", "in your reply" ]

    Here is my code, but it only merges the first element when it finds an overlap:

        public ArrayList<TextChunks> finalTextChunks(ArrayList<TextChunks> textchunkswithkeyword) {
            ArrayList<TextChunks> result = (ArrayList<TextChunks>) textchunkswithkeyword.clone();
            //System.out.print(result.size());
            int j;
            for (int i = 0; i < result.size(); i++) {
                int index = i;
                if (i + 1 >= result.size()) {
                    break;
                }
                j = i + 1;
                if (result.get(i).checkOverlapingTwoTextchunks(result.get(j)) == true) {
                    TextChunks temp = new TextChunks();
                    temp = handleOverlaping(textchunkswithkeyword.get(i), textchunkswithkeyword.get(j), resultSearchEngine);
                    result.set(i, temp);
                    result.remove(j);
                    i = index;
                    continue;
                }
            }
            return result;
        }

    Thanks in advance.
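
    Since each chunk carries a start and an end, this is the classic interval-merging problem: sort by start, then fold overlapping ranges together. A sketch independent of the TextChunks class, with positions as int pairs; the merged (start, end) pairs can then be mapped back to substrings of the original sentence, which also avoids mutating the list while iterating over it:

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class RangeMerger {
            // Merge overlapping [start, end] ranges: sort by start, then extend or append.
            static List<int[]> mergeRanges(List<int[]> ranges) {
                ranges.sort(Comparator.comparingInt(r -> r[0]));
                List<int[]> merged = new ArrayList<>();
                for (int[] r : ranges) {
                    int[] last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
                    if (last != null && r[0] <= last[1]) {
                        last[1] = Math.max(last[1], r[1]); // overlap: extend the current range
                    } else {
                        merged.add(new int[] { r[0], r[1] }); // gap: start a new range
                    }
                }
                return merged;
            }
        }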

  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that a lot of the time this will, by default, be done inside a different process. I am using StructureMap as my DI of choice, and one of the tools which I am using inline with RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to.

    This is where I started thinking about two things:

    - Running a WCF service in process
    - Being able to share mocked instances across processes

    A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects which I use when I think about WCF services is AGATHA, and it is the one I have used to try and get my head around doing this. Another asset I have is a book called Programming WCF Services by Juval Lowy, and if you have not heard of it or read it I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view:

        <system.serviceModel>
          <services>
            <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
              <endpoint address="net.pipe://localhost/MyPipe"
                        binding="netNamedPipeBinding"
                        contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
            </service>
          </services>
          <client>
            <endpoint name="MyEndpoint"
                      address="net.pipe://localhost/MyPipe"
                      binding="netNamedPipeBinding"
                      contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
          </client>
        </system.serviceModel>

    You can see here that I am referencing the Agatha object and contract, but also that my binding and the address use something called named pipes. This is sort of the "magic" which makes it happen in the same process.

    Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of. So we need to add to the known types of the proxy programmatically. I came across the following blog post which showed me how easy that is: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx.

    First Pass

    So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made, making use of the Agatha IWcfRequestProcessor contract:

        public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>,
                                   Agatha.Common.WCF.IWcfRequestProcessor
        {
            public InProcProxy() { }

            public InProcProxy(string configurationName) : base(configurationName) { }

            public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
            {
                return Channel.Process(requests);
            }

            public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
            {
                Channel.ProcessOneWayRequests(requests);
            }
        }

    So with the proxy in place, I could then use this after opening the service. Here is the code which I use inside the console app to make the request:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new InProcProxy())
            {
                foreach (var operation in proxy.Endpoint.Contract.Operations)
                {
                    foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                    {
                        operation.KnownTypes.Add(t);
                    }
                }
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    What I used here is the KnownTypeProvider of Agatha to easily get all the types I need for the service/proxy and add them to the proxy. My request handler for this was just a test one which always returns 2 products:

        public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
        {
            public override Agatha.Common.Response Handle(GetProductsRequest request)
            {
                return new GetProductsResponse
                {
                    Products = new List<ProductDto>
                    {
                        new ProductDto{},
                        new ProductDto{}
                    }
                };
            }
        }

    Second Pass

    After I did this I started reading up on more resources, including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, due to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use it, I am now left with the following:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new RequestProcessorProxy())
            {
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    Cheers for now, Andy

    References:

    - Agatha WCF InProcess Without WCF
    - StructureMap.AutoMocking
    - Cross Process Mocking
    - Agatha
    - Programming WCF Services by Juval Lowy

  • HttpContext.Items and Server.Transfer/Execute

    - by Rick Strahl
    A few days ago my buddy Ben Jones pointed out that he ran into a bug in the ScriptContainer control in the West Wind Web and Ajax Toolkit. The problem was basically that when a Server.Transfer call was applied, the script container (and also various ClientScriptProxy script embedding routines) would potentially fail to load the specified scripts. It turns out the problem is due to the fact that the various components in the toolkit use request-specific singletons via a Current property. I use a static Current property tied to a Context.Items[] entry to handle this type of operation, which looks something like this:

        /// <summary>
        /// Current instance of this class which should always be used to
        /// access this object. There are no public constructors to
        /// ensure the reference is used as a Singleton to further
        /// ensure that all scripts are written to the same clientscript
        /// manager.
        /// </summary>
        public static ClientScriptProxy Current
        {
            get
            {
                if (HttpContext.Current == null)
                    return new ClientScriptProxy();

                ClientScriptProxy proxy = null;
                if (HttpContext.Current.Items.Contains(STR_CONTEXTID))
                    proxy = HttpContext.Current.Items[STR_CONTEXTID] as ClientScriptProxy;
                else
                {
                    proxy = new ClientScriptProxy();
                    HttpContext.Current.Items[STR_CONTEXTID] = proxy;
                }
                return proxy;
            }
        }

    The proxy is attached to a Context.Items[] item, which makes the instance request specific. This works perfectly fine in most situations EXCEPT when you're dealing with Server.Transfer/Execute requests. Server.Transfer doesn't cause Context.Items to be cleared, so both the current transferred request and the original request's Context.Items collection apply. For the ClientScriptProxy this causes a problem, because script references are tracked on a per-request basis in Context.Items to check for script duplication. Once a script is rendered, an ID is written into the Context collection and it is considered 'rendered':

        // No dupes - ref script include only once
        if (HttpContext.Current.Items.Contains( STR_SCRIPTITEM_IDENTITIFIER + fileId ) )
            return;

        HttpContext.Current.Items.Add(STR_SCRIPTITEM_IDENTITIFIER + fileId, string.Empty);

    ...where the fileId is the script name or unique identifier. The problem is that on the transferred page the item will already exist in Context, and so the script fails to render, because the code thinks the script has already rendered based on the Context item. Bummer.

    The workaround for this is simple once you know what's going on, but in this case it was a bitch to track down, because the context items are used in many places throughout this class. The trick is to determine when a request is transferred and then remove the specific keys. The first issue is to determine whether a script is in a Transfer or Execute call:

        if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)

    Context.Handler is the original handler, and CurrentHandler is the actual currently executing handler that is running when a Transfer/Execute is active. You can also use Context.PreviousHandler to get the last handler, and chain through the whole list of handlers applied if Transfer calls are nested (dog help us all for the person debugging that).

    For the ClientScriptProxy, the full logic to check for a transfer and remove the keys looks like this:

        /// <summary>
        /// Clears all the request specific context items which are script references
        /// and the script placement index.
        /// </summary>
        public void ClearContextItemsOnTransfer()
        {
            if (HttpContext.Current != null)
            {
                // Check for Server.Transfer/Execute calls - we need to clear out Context.Items
                if (HttpContext.Current.CurrentHandler != HttpContext.Current.Handler)
                {
                    List<string> Keys = HttpContext.Current.Items.Keys
                        .Cast<string>()
                        .Where(s => s.StartsWith(STR_SCRIPTITEM_IDENTITIFIER) ||
                                    s == STR_ScriptResourceIndex)
                        .ToList();

                    foreach (string key in Keys)
                    {
                        HttpContext.Current.Items.Remove(key);
                    }
                }
            }
        }

    ...along with a small update to the Current property getter that sets a global flag to indicate whether the request was transferred:

        if (!proxy.IsTransferred &&
            HttpContext.Current.Handler != HttpContext.Current.CurrentHandler)
        {
            proxy.ClearContextItemsOnTransfer();
            proxy.IsTransferred = true;
        }

        return proxy;

    I know this is pretty ugly, but it works, and it's actually minimal fuss without affecting the behavior of the rest of the class. Ben had a different solution that involved explicitly clearing out the Context items and replacing the collection with a manually maintained list of items, which also works, but it required changes throughout the code. In hindsight, it would have been better to use a single object that encapsulates all the 'persisted' values and store that object in Context, instead of all these individual small morsels. Hindsight is always 20/20 though :-}.

    If possible use Page.Items

    ClientScriptProxy is a generic component that can be used from anywhere in ASP.NET, so there are various methods that are not Page specific on this component, which is why I used Context.Items rather than the Page.Items collection. Page.Items would be a better choice, since it sidesteps the above Server.Transfer nightmares: the Page is reloaded completely, so any new Page gets a new Items collection. No fuss there. For the ScriptContainer control, which has to live on the page, the behavior is a little different. It is attached to Page.Items (since it's a control):

        /// <summary>
        /// Returns a current instance of this control if an instance
        /// is already loaded on the page. Otherwise a new instance is
        /// created, added to the Form and returned.
        ///
        /// It's important this function is not called too early in the
        /// page cycle - it should not be called before Page.OnInit().
        ///
        /// This property is the preferred way to get a reference to a
        /// ScriptContainer control that is either already on a page
        /// or needs to be created. Controls in particular should always
        /// use this property.
        /// </summary>
        public static ScriptContainer Current
        {
            get
            {
                // We need a context for this to work!
                if (HttpContext.Current == null)
                    return null;

                Page page = HttpContext.Current.CurrentHandler as Page;
                if (page == null)
                    throw new InvalidOperationException(Resources.ERROR_ScriptContainer_OnlyWorks_With_PageBasedHandlers);

                ScriptContainer ctl = null;

                // Retrieve the current instance
                ctl = page.Items[STR_CONTEXTID] as ScriptContainer;
                if (ctl != null)
                    return ctl;

                ctl = new ScriptContainer();
                page.Form.Controls.Add(ctl);
                return ctl;
            }
        }

    The biggest issue with this approach is that you have to explicitly retrieve the page in the static Current property. Notice again the use of CurrentHandler (rather than Handler, which was my original implementation) to ensure you get the latest page, including the one that Server.Transfer fired.

    Server.Transfer and Server.Execute are Evil

    All that said, this fix is probably for the 2 people who are crazy enough to rely on Server.Transfer/Execute. :-} There are so many weird behavior problems with these commands that I avoid them at all costs. I don't think I have a single application that uses either of these commands...

    Related Resources

    - Full source of ClientScriptProxy.cs (repository)
    - Part of the West Wind Web Toolkit
    - Static Singletons for ASP.NET Controls

    Post © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET

  • AutoMapper MappingFunction from Source Type of NameValueCollection

    - by REA_ANDREW
    I have had a situation arise today where I needed to construct a complex type from a source of a NameValueCollection. A little while back I submitted a patch for the Agatha project to include REST (JSON and XML) support for the service contract. I realized today that, as useful as it is, it did not actually support true REST conformance, as REST should support GET so that you can use JSONP from JavaScript directly, meaning you can query cross-domain services. My original implementation for POX and JSON used the POST method, and this immediately rules out JSONP, as from what I have read JSONP only works with GET requests.

    This then raised another issue. The current operation contract of Agatha, and one of its main benefits, is that you can supply an array of Request objects in a single request, limiting the number of server requests you need to make. At the present time I am thinking that this will not be the case for the REST implementation, but it will yield these benefits:

    - The same Request objects can be used for SOAP and REST (POX, JSON)
    - The construction of the JavaScript functions will be simpler and more readable
    - It will enable the use of JSONP for cross-domain REST services

    The current contract for the Agatha WcfRequestProcessor is, at time of writing, the following:

        [ServiceContract]
        public interface IWcfRequestProcessor
        {
            [OperationContract(Name = "ProcessRequests")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            Response[] Process(params Request[] requests);

            [OperationContract(Name = "ProcessOneWayRequests", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            void ProcessOneWayRequests(params OneWayRequest[] requests);
        }

    My current proposed solution, at the very early stages of my concept, is as follows:

        [ServiceContract]
        public interface IWcfRestJsonRequestProcessor
        {
            [OperationContract(Name = "process")]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [TransactionFlow(TransactionFlowOption.Allowed)]
            [WebGet(UriTemplate = "process/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            Response[] Process(string name, NameValueCollection parameters);

            [OperationContract(Name = "processoneway", IsOneWay = true)]
            [ServiceKnownType("GetKnownTypes", typeof(KnownTypeProvider))]
            [WebGet(UriTemplate = "process-one-way/{name}/{*parameters}",
                    BodyStyle = WebMessageBodyStyle.WrappedResponse,
                    ResponseFormat = WebMessageFormat.Json)]
            void ProcessOneWayRequests(string name, NameValueCollection parameters);
        }

    Now this part I have not yet implemented; what I have developed is the preliminary step which will allow me to take the name of the Request type and the NameValueCollection, construct the complex type which is that of the Request, and then supply it to a nested instance of the original IWcfRequestProcessor so that it works as it normally should. To give an example, some of the URLs which I envisage with this method are:

        http://www.url.com/service.svc/json/process/getweather/?location=london
        http://www.url.com/service.svc/json/process/getproductsbycategory/?categoryid=1
        http://www.url.com/service.svc/json/process/sayhello/?name=andy

    Another reason why my direction has gone to a single request for the REST implementation is the restrictions which are imposed by browsers on the length of the URL. From what I have read this is on average 2000 characters. I think that this is a very acceptable usage limit in the context of using one request, but I do not think it is acceptable for accommodating multiple requests chained together. I would love to be corrected on that one, I really would, but unfortunately from what I have read I have come to the conclusion that this is not the case.

    The mapping function

    So, as I say, this is just the first pass I have made at this, and I am not overly happy with the try/catch for detecting types without default constructors. I know there is a better way, but for the minute it escapes me. I would also like to know the correct way of adding mapping functions, instead of the anonymous way that I have used. To achieve this I have used recursion, which I am sure is what other mapping functions use, as you do have to go as deep as the complex type is.

        public static object RecurseType(NameValueCollection collection, Type type, string prefix)
        {
            try
            {
                var returnObject = Activator.CreateInstance(type);

                foreach (var property in type.GetProperties())
                {
                    foreach (var key in collection.AllKeys)
                    {
                        if (String.IsNullOrEmpty(prefix) || key.Length > prefix.Length)
                        {
                            var propertyNameToMatch = String.IsNullOrEmpty(prefix)
                                ? key
                                : key.Substring(property.Name.IndexOf(prefix) + prefix.Length + 1);

                            if (property.Name == propertyNameToMatch)
                            {
                                property.SetValue(returnObject,
                                    Convert.ChangeType(collection.Get(key), property.PropertyType), null);
                            }
                            else if (property.GetValue(returnObject, null) == null)
                            {
                                property.SetValue(returnObject,
                                    RecurseType(collection, property.PropertyType,
                                                String.Concat(prefix, property.PropertyType.Name)), null);
                            }
                        }
                    }
                }

                return returnObject;
            }
            catch (MissingMethodException)
            {
                // Quite a blunt way of dealing with types without a default constructor
                return null;
            }
        }

    Another thing is performance. I have not measured this in any way; it is, as I say, the first pass, so I hope this can be the start of a more perfected implementation. I tested this out with a complex type of three levels; there is no intended logical meaning to the properties, they are simply for the purposes of example. You could call this a spiking session, as from here on in, now that I know what I am building, I would take a more TDD approach. OK, purists, why did I not do this from the start? Well, I didn't; this was a brain dump, and now that I know what I am building, I can.

    The console test, and how I used this with AutoMapper, is as follows:

        static void Main(string[] args)
        {
            var collection = new NameValueCollection();
            collection.Add("Name", "Andrew Rea");
            collection.Add("Number", "1");
            collection.Add("AddressLine1", "123 Street");
            collection.Add("AddressNumber", "2");
            collection.Add("AddressPostCodeCountry", "United Kingdom");
            collection.Add("AddressPostCodeNumber", "3");

            AutoMapper.Mapper.CreateMap<NameValueCollection, Person>()
                .ConvertUsing(x =>
                {
                    return (Person)RecurseType(x, typeof(Person), null);
                });

            var person = AutoMapper.Mapper.Map<NameValueCollection, Person>(collection);

            Console.WriteLine(person.Name);
            Console.WriteLine(person.Number);
            Console.WriteLine(person.Address.Line1);
            Console.WriteLine(person.Address.Number);
            Console.WriteLine(person.Address.PostCode.Country);
            Console.WriteLine(person.Address.PostCode.Number);
            Console.ReadLine();
        }

    Notice the convention that I am using, and that this method requires you to use: each property is prefixed with the constructed name of its parents combined. This is the convention used by AutoMapper, and it makes sense. I can also think of other uses for this, including using it with ASP.NET MVC model binders for creating a complex type from the QueryString, which is itself a NameValueCollection.

    Hope this is of some help to people, and I would welcome any code reviews you could give me.

    References:

    - Agatha: http://code.google.com/p/agatha-rrsl/
    - AutoMapper: http://automapper.codeplex.com/

    Cheers for now, Andrew

    P.S. I will have the proposed solution for a more complete REST implementation for AGATHA very soon.

  • How can unrealscript halt event handler execution after an arbitrary number of lines with no return or error?

    - by Dan Cowell
    I have created a class that extends TcpLink and is instantiated in a custom Kismet sequence action. It is being instantiated correctly and is making the GET HTTP request that I need (I have checked my access log in Apache), and Apache is responding to the request with the appropriate content. The problem I have is that I'm using the event receive mode, and it appears that the handler for the Opened event is somehow halted after a specific number of lines of code have executed. Here is my code for the Opened event:

        event Opened()
        {
            // A connection was established
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened");
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query");

            // The HTTP GET request
            // chr(13) and chr(10) are carriage returns and new lines
            requesttext = "userId="$userId$"&apartmentId="$apartmentId;
            SendText("GET /"$path$"?"$requesttext$" HTTP/1.0");
            SendText(chr(13)$chr(10));
            SendText("Host: "$TargetHost);
            SendText(chr(13)$chr(10));
            SendText("Connection: Close");
            SendText(chr(13)$chr(10)$chr(13)$chr(10));

            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query");
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError()));
        }

    As you can see, a number of the Broadcast calls have been commented out. Initially, only the lines up to the Broadcast containing "[DNomad_TcpLinkClient] Sent request: " were being executed, and none of the Broadcasts were commented out. After commenting out that line, the next Broadcast was successful, and so on and so forth. As a test, I commented out the very first Broadcast to see if the connection closing had any effect:

        // A connection was established
        //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened");
        WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query");

    Upon doing that, an additional Broadcast at the end of the function executed. Thus the inference that there is an upper limit to the number of lines executed. Additionally, my ReceivedText handler is never called, despite Apache returning a correct HTTP 200 response with a body. My working hypothesis is that somehow, after the Kismet sequence action finishes executing, the garbage collector cleans up the TcpLinkClient instance. My biggest source of confusion with that is how on earth it does so during the execution of an event handler. Has anyone ever seen anything like this before? My full TcpLinkClient class is below:

        /*
         * TcpLinkClient based on an example usage of the TcpLink class
         * by Michiel 'elmuerte' Hendriks for Epic Games, Inc.
         */
        class DNomad_TcpLinkClient extends TcpLink;

        var PlayerController PC;
        var string TargetHost;
        var int TargetPort;
        var string path;
        var string requesttext;
        var string userId;
        var string apartmentId;
        var string statusCode;
        var string responseData;

        event PostBeginPlay()
        {
            super.PostBeginPlay();
        }

        function DoTcpLinkRequest(string uid, string id) // removes having to send a host
        {
            userId = uid;
            apartmentId = id;
            Resolve(targethost);
        }

        function string GetStatus()
        {
            return statusCode;
        }

        event Resolved( IpAddr Addr )
        {
            // The hostname was resolved successfully
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] "$TargetHost$" resolved to "$ IpAddrToString(Addr));

            // Make sure the correct remote port is set; resolving doesn't set
            // the port value of the IpAddr structure
            Addr.Port = TargetPort;

            // don't comment out this log because it runs the function BindPort
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Bound to port: "$ BindPort() );
            if (!Open(Addr))
            {
                WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Open failed");
            }
        }

        event ResolveFailed()
        {
            WorldInfo.Game.Broadcast(self, "[TcpLinkClient] Unable to resolve "$TargetHost);
            // You could retry resolving here if you have an alternative remote host.
            // send failed message to scaleform UI
            //JunHud(JunPlayerController(PC).myHUD).JunMovie.CallSetHTML("Failed");
        }

        event Opened()
        {
            // A connection was established
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] event opened");
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sending simple HTTP query");

            // The HTTP GET request
            // chr(13) and chr(10) are carriage returns and new lines
            requesttext = "userId="$userId$"&apartmentId="$apartmentId;
            SendText("GET /"$path$"?"$requesttext$" HTTP/1.0");
            SendText(chr(13)$chr(10));
            SendText("Host: "$TargetHost);
            SendText(chr(13)$chr(10));
            SendText("Connection: Close");
            SendText(chr(13)$chr(10)$chr(13)$chr(10));

            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Sent request: "$requesttext);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] end HTTP query");
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkState: "$LinkState);
            //WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] LinkMode: "$LinkMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] ReceiveMode: "$ReceiveMode);
            WorldInfo.Game.Broadcast(self, "[DNomad_TcpLinkClient] Error: "$string(GetLastError()));
        }

        event Closed()
        {
            // In this case the remote client should have automatically closed
            // the connection, because we requested it in the HTTP request.
            WorldInfo.Game.Broadcast(self, "Connection closed.");

            // After the connection was closed we could establish a new
            // connection using the same TcpLink instance.
        }

        event ReceivedText( string Text )
        {
            WorldInfo.Game.Broadcast(self, "Received Text: "$Text);

            // we don't want the header info, so we split the string after two new lines
            Text = Split(Text, chr(13)$chr(10)$chr(13)$chr(10), true);
            WorldInfo.Game.Broadcast(self, "Split Text: "$Text);
            statusCode = Text;
        }

        event ReceivedLine( string Line )
        {
            WorldInfo.Game.Broadcast(self, "Received Line: "$Line);
        }

        event ReceivedBinary( int Count, byte B[255] )
        {
            WorldInfo.Game.Broadcast(self, "Received Binary of length: "$Count);
        }

        defaultproperties
        {
            TargetHost="127.0.0.1"
            TargetPort=80 // default for HTTP
            LinkMode=MODE_Text
            ReceiveMode=RMODE_Event
            path = "dnomad/datafeed.php"
            userId = "0";
            apartmentId = "0";
            statusCode = "";
            send = false;
        }

    Read the article

  • glassfish - Unknown error when trying port 4848

    - by Majid Azimi
I'm installing GlassFish 3.1 on Windows XP Service Pack 3, but the configuration step gives this error:

PERFORMING THE REQUIRED CONFIGURATIONS
______________________________________
CREATING DOMAIN
_______________
Executing command :C:\glassfish3\glassfish\bin\asadmin.bat --user admin --passwordfile C:\DOCUME~1\MAJIDA~1\LOCALS~1\Temp\glassfish-3.1-windows-ml.exe6\asadminTmp1079044298673991344.tmp create-domain --savelogin --checkports=false --adminport 4848 --instanceport 8080 --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920 domain1
C:\glassfish3\glassfish\bin\asadmin.bat --user admin --passwordfile C:\DOCUME~1\MAJIDA~1\LOCALS~1\Temp\glassfish-3.1-windows-ml.exe6\asadminTmp5898014821156752751.tmp create-domain --savelogin --checkports=false --adminport 4848 --instanceport 8080 --domainproperties=jms.port=7676:domain.jmxPort=8686:orb.listener.port=3700:http.ssl.port=8181:orb.ssl.port=3820:orb.mutualauth.port=3920 domain1
Unknown error when trying port 4848. Try a different port number.
Command create-domain failed.
CLI130 Could not create domain, domain1

I changed 4848 to other ports, but that doesn't work either. The firewall is completely disabled. Could anyone help?
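One way to narrow this down (a hypothetical diagnostic sketch, not part of the original question) is to check from a separate Java process whether the ports the installer asks for can actually be bound on localhost; if 4848 is reported as not bindable, netstat -ano can then show which process is holding it:

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortProbe {
    public static void main(String[] args) throws Exception {
        // The ports the create-domain command above tries to claim
        int[] ports = {4848, 8080, 7676, 8686, 3700, 8181, 3820, 3920};
        for (int port : ports) {
            try {
                // Attempt to bind the port on the loopback interface, then release it
                ServerSocket socket = new ServerSocket(port, 0, InetAddress.getByName("localhost"));
                socket.close();
                System.out.println("Port " + port + " is free");
            } catch (IOException e) {
                System.out.println("Port " + port + " is NOT bindable: " + e.getMessage());
            }
        }
    }
}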

    Read the article

  • Disabling certain JBoss ports

    - by Rich
We are trying to configure JBoss 5.1.0 to be as lightweight and as secure as possible. One part of this process is to identify and close any ports we do not need. Three ports that are still outstanding, but which we don't believe we need, are:

4457 - bisocket
4712 - JBossTS Recovery Manager
4713 - JBossTS Transaction Status Manager

We don't think we need any of these features (but could be wrong). Bisocket seems to be a way for JMS clients behind a firewall to communicate with JBoss. We hardly use JMS now, and when we do, it is very unlikely that we will need this firewall-traversing ability. I am less sure about whether we need the two JBossTS ports - I am guessing these are used in a clustered environment, and we aren't clustered. So my question is: how do we disable these ports (and associated processes where possible), or if we do need these ports, why do we need to keep them open?

    Read the article

  • HOWTO: disable jmx in activemq network of brokers (spring, xbean)

    - by subes
Since I've struggled a lot with this problem, I am posting my solution. Disabling JMX in an ActiveMQ network of brokers removes race conditions around the registration of the JMX connector. When starting multiple ActiveMQ servers on the same machine:

Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]

Another problem is that this exception can still occur even if you don't cause a race condition, even when starting one broker after another while waiting for them to initialize properly in between. If one process is run by root as the first instance and the other as a normal user, the user process somehow tries to register its own JMX connector even though there already is one. There is also another exception, which happens when the broker that successfully registered the JMX connector goes down:

Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused]

These exceptions cause the network of brokers to stop working, or to not work at all. The trick to disabling JMX was that JMX also had to be disabled in the connection factory. The documentation at http://activemq.apache.org/jmx.html does not say explicitly that this is needed. So I had to struggle for two days until I found the solution:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:amq="http://activemq.apache.org/schema/core"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd">

    <!-- Spring JMS template -->
    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <constructor-arg ref="connectionFactory" />
    </bean>

    <!-- Caching, so that the JMS template is usable at all in terms of performance -->
    <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
        <constructor-arg ref="amqConnectionFactory" />
        <property name="exceptionListener" ref="jmsExceptionListener" />
        <property name="sessionCacheSize" value="1" />
    </bean>

    <!-- Each client connects to its own broker; the brokers are networked with each other.
         Only if JMX is disabled here as well does it actually stay disabled... -->
    <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" />

    <!-- Brokers pick their own port and are connected to each other, forming a grid.
         This is somewhat slower, but more fault-tolerant.
         See http://activemq.apache.org/networks-of-brokers.html -->
    <amq:broker useJmx="false" persistent="false">
        <!-- Needed to disable JMX for good -->
        <amq:managementContext>
            <amq:managementContext connectorHost="localhost" createConnector="false" />
        </amq:managementContext>
        <!-- Now the normal configuration for the network of brokers -->
        <amq:networkConnectors>
            <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" />
        </amq:networkConnectors>
        <amq:persistenceAdapter>
            <amq:memoryPersistenceAdapter />
        </amq:persistenceAdapter>
        <amq:transportConnectors>
            <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" />
        </amq:transportConnectors>
    </amq:broker>
</beans>

With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the JVM, which somehow also didn't work for me, because the connection factory started the JMX connector anyway.
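As a quick usage sketch (not from the original post): assuming the XML above is saved as activemq-context.xml on the classpath, and assuming the jmsExceptionListener bean it references is defined elsewhere in the context, the jmsTemplate bean can be exercised like this to confirm the embedded, JMX-free broker works:

import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.jms.core.JmsTemplate;

public class JmxFreeBrokerDemo {
    public static void main(String[] args) {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("activemq-context.xml");
        try {
            JmsTemplate template = context.getBean("jmsTemplate", JmsTemplate.class);

            // Send a message through the embedded broker and read it back
            // (the queue name is invented for illustration)
            template.convertAndSend("test.queue", "hello");
            Object reply = template.receiveAndConvert("test.queue");
            System.out.println("Received: " + reply);
        } finally {
            context.close();
        }
    }
}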

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

The three-tier client/server model is a network architecture approach commonly used in modern networking. This approach divides a network into three distinct components.

Three-Tier Client/Server Model Components
Client Component
Server Component
Database Component

The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web-enabled device connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients; an example of this can be seen in a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user.

The Server Components of the network return data to the requesting client based on specific client requests. Server Components also inherit the attributes of a Client Component in that they are devices on the network and can also request information from other Server Components. What differentiates a Client Component from a Server Component, however, is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests, interprets each request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user.

The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on the network, it can request information from other server components and database components, and it listens for new requests so that it can return data when needed.

The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another.

Three-Tier Application Architecture Model
Presentation Layer/Logic
Business Layer/Logic
Data Layer/Logic

The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows segments of the Business Layer to be distributable and interchangeable, because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site takes on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balanced device, they are redirected to a specific server based on a few factors.

Common Load Balancing Factors
Current Server Availability
Current Server Response Time
Current Server Priority

The Business Layer and its corresponding logic are business rules applied to data before it is sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data before it is stored in the Data Access Layer. A good example of this is when a user tries to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer (a minimal sketch of this rule follows at the end of this article). The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer logic before the result is returned to the end user via the Presentation Layer. In addition, the Server Component can run automated processes through the Business Layer on the data in the Data Access Layer, so that additional business analysis can be derived from the data that has already been collected.

The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in modern applications, data is stored in a database management system; however, data can also take the form of files stored on a file server. In addition, a database can take on one of several forms.

Common Database Formats
XML File
Pipe Delimited File
Tab Delimited File
Comma Delimited File (CSV)
Plain Text File
Microsoft Access
Microsoft SQL Server
MySql
Oracle
Sybase

The Database Component of the networking model can be directly tied to the Data Layer, because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component essentially provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall it as needed, so that the application does not have to retain all of its data in memory. This prevents the overhead that would be created if an application had to keep everything in memory.

As you can see, the Three-Tier Client/Server Networking Model and the Three-Tier Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is a great help when an application needs to distribute work across the network.

Network Components and Application Layers Interaction
The Database Component stores all data needed for the Data Access Layer to manipulate and return to the Business Layer
The Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer
The Client Component hosts the Presentation Layer that interprets the data and presents it to the user
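To make the unique-email example concrete, here is a minimal, hypothetical Java sketch (all class and method names are invented for illustration) showing the business layer enforcing that rule between the presentation layer and the data layer:

import java.util.HashSet;
import java.util.Set;

// Data Layer: stores and retrieves accounts (an in-memory stand-in for a real database)
class AccountRepository {
    private final Set<String> emails = new HashSet<String>();

    boolean emailExists(String email) {
        return emails.contains(email.toLowerCase());
    }

    void save(String email) {
        emails.add(email.toLowerCase());
    }
}

// Business Layer: enforces the unique-email rule before the data layer is touched
class AccountService {
    private final AccountRepository repository;

    AccountService(AccountRepository repository) {
        this.repository = repository;
    }

    boolean register(String email) {
        if (repository.emailExists(email)) {
            return false; // duplicate account rejected before anything is stored
        }
        repository.save(email);
        return true;
    }
}

// Presentation Layer: reports the outcome to the user
public class RegistrationDemo {
    public static void main(String[] args) {
        AccountService service = new AccountService(new AccountRepository());
        System.out.println(service.register("user@example.com"));  // true
        System.out.println(service.register("USER@example.com"));  // false - duplicate
    }
}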

    Read the article

  • Figuring out the IIS Version for a given OS in .NET Code

    - by Rick Strahl
Here's an odd requirement: I need to figure out what version of IIS is available on a given machine in order to take specific configuration actions when installing an IIS-based application. I build several tools for application configuration and installation, and depending on which version of IIS is available on the machine, different configuration paths are taken. For example, when dealing with an XP machine you can't set up an Application Pool for an application, because XP (IIS 5.1) didn't support Application Pools. Configuring 32 and 64 bit settings is easy in IIS 7, but this didn't work in prior versions, and so on. Along the same lines, I saw a question on the AspInsiders list today regarding a similar issue, where somebody needed to know the IIS version as part of an ASP.NET application prior to when the Request object is available. So it's useful to know which version of IIS you can possibly expect. This should be easy, right? But it turns out there's no real easy way to detect IIS on a machine. There's no registry key that gives you the full version number - you can detect installation but not which version is installed.

The easiest way: Request.ServerVariables["SERVER_SOFTWARE"]
The easiest way to determine the IIS version number is if you are already running inside of ASP.NET and you are inside of an ASP.NET request. You can look at Request.ServerVariables["SERVER_SOFTWARE"] to get a string like Microsoft-IIS/7.5 returned to you. It's a cinch to parse this to retrieve the version number. This works in the limited scenario where you need to know the version number inside of a running ASP.NET application. Unfortunately this is not a likely use case, since most times you need to know a specific version of IIS when you are configuring or installing your application.

The messy way: Match Windows OS Versions to IIS Versions
Since version 5.x, IIS versions have always been tied very closely to the operating system, meaning the only way to get a specific version of IIS was through the OS - you couldn't install another version of IIS on a given OS. Microsoft has a page that describes the OS version to IIS version relationship here: http://support.microsoft.com/kb/224609 In .NET you can then sniff the OS version and, based on that, return the IIS version. The following is a small utility function that accomplishes the task of returning an IIS version number for a given OS:

/// <summary>
/// Returns the IIS version for the given Operating System.
/// Note this routine doesn't check to see if IIS is installed,
/// it just returns the version of IIS that should run on the OS.
///
/// Returns the value from Request.ServerVariables["Server_Software"]
/// if available. Otherwise uses OS sniffing to determine the OS version
/// and returns the matching IIS version instead.
/// </summary>
/// <returns>version number or -1</returns>
public static decimal GetIisVersion()
{
    // if running inside of IIS parse the SERVER_SOFTWARE key
    // This would be most reliable
    if (HttpContext.Current != null && HttpContext.Current.Request != null)
    {
        string os = HttpContext.Current.Request.ServerVariables["SERVER_SOFTWARE"];
        if (!string.IsNullOrEmpty(os))
        {
            // Microsoft-IIS/7.5
            int dash = os.LastIndexOf("/");
            if (dash > 0)
            {
                decimal iisVer = 0M;
                if (Decimal.TryParse(os.Substring(dash + 1), out iisVer))
                    return iisVer;
            }
        }
    }

    // e.g. 6.1 for Windows 7
    decimal osVer = (decimal) Environment.OSVersion.Version.Major +
                    ((decimal) Environment.OSVersion.Version.Minor / 10);

    // Windows 7 and Win2008 R2
    if (osVer == 6.1M)
        return 7.5M;
    // Windows Vista and Windows 2008
    else if (osVer == 6.0M)
        return 7.0M;
    // Windows 2003 and XP 64 bit
    else if (osVer == 5.2M)
        return 6.0M;
    // Windows XP
    else if (osVer == 5.1M)
        return 5.1M;
    // Windows 2000
    else if (osVer == 5.0M)
        return 5.0M;

    // error result
    return -1M;
}

Talk about a brute force approach, but it works. This code goes only back to IIS 5 - anything before that is not something you possibly would want to have running. :-) Note that this is updated through Windows 7/Windows Server 2008 R2. Later versions will need to be added as needed. Anybody know what the Windows version number of Windows 8 is?

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in ASP.NET  IIS

    Read the article

  • Get the currently saved object in a view in Django

    - by mridang
Hi, I have a Django view which is accessed through an AJAX call. It's a pretty simple one — all it does is pass the request to a form object and save the data. Here's a snippet from my view:

form = AddSiteForm(request.user, request.POST)
data = {}
if form.is_valid():
    obj = form.save(commit=False)
    obj.user = request.user
    obj.save()
    data['status'] = 'success'
    data['html'] = render_to_string('site.html', locals(),
                                    context_instance=RequestContext(request))
return HttpResponse(simplejson.dumps(data), mimetype='application/json')

How do I get the currently saved object (including the internally generated id column) and pass it to the template? Any help guys? Mridang

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found.

Simple Parameters from Xml or JSON Content
Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

public string ReturnAlbumInfo(Album album)
{
    return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
}

However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from an XML or JSON body by default and you may end up with failures. Take the following two very simple methods:

public string ReturnString(string message)
{
    return message;
}

public HttpResponseMessage ReturnDateTime(DateTime time)
{
    return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
}

The first one accepts a string, and if called with a JSON string from the client like this:

var client = new HttpClient();
var result = client.PostAsJsonAsync<string>("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", "Hello World").Result;

which results in a trace like this:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: rasxps
Content-Length: 13
Expect: 100-continue
Connection: Keep-Alive

"Hello World"

produces… wait for it: null.

Sending a date in the same fashion:

var client = new HttpClient();
var result = client.PostAsJsonAsync<DateTime>("http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime", new DateTime(2012, 1, 1)).Result;

results in this trace:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: rasxps
Content-Length: 30
Expect: 100-continue
Connection: Keep-Alive

"\/Date(1325412000000-1000)\/"

(yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) and produces an error response:

The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

Basically, any simple parameters are not parsed properly, resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value.

This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:

public string ReturnString([FromBody] string message)

and

public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

which explicitly instructs Web API to read the value from the body.

UrlEncoded Form Variable Parsing
Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the query string and route values, but it doesn't explicitly map parameters from POST values either.

Taking our same ReturnString function from earlier and posting a message POST variable like this:

var formVars = new Dictionary<string,string>();
formVars.Add("message", "Some Value");
var content = new FormUrlEncodedContent(formVars);
var client = new HttpClient();
var result = client.PostAsync("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", content).Result;

which produces this trace:

POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: rasxps
Content-Length: 18
Expect: 100-continue

message=Some+Value

When calling ReturnString:

public string ReturnString(string message)
{
    return message;
}

unfortunately it does not map the message value to the message parameter. This sort of mapping is, unfortunately, not available in Web API.

Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

public string ReturnMessageModel(MessageModel model)
{
    return model.Message;
}

public class MessageModel
{
    public string Message { get; set; }
}

Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be mapped to properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) in Web API's Request object as far as I can discern. Well, only if you consider this easy:

public string ReturnString()
{
    var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
    return formData.Get("message");
}

Oddly, FormDataCollection does not allow indexers to work, so you have to use the .Get() method, which is rather odd.

If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

public string ReturnString()
{
    return HttpContext.Current.Request.Form["message"];
}

which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically:

Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere
Replaceability — the tight coupling also meant that replacing one component wouldn't be straightforward, especially if it wasn't on a similar Microsoft stack. We'd like to be able to replace different parts without having to modify the existing codebase extensively
Reusability — we'd like to be able to combine the different pieces of the system in different ways for different sites
Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine
Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development

In the next part, I'll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
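For illustration only, a minimal nginx configuration sketch of the delegation described above might look like the following (the listen port, backend addresses and the /blogs/ prefix are assumptions; the real Simple-Talk setup is not published here):

# Requests for the blogs go to WordPress; everything else goes to the legacy IIS applications
server {
    listen 80;

    location /blogs/ {
        proxy_pass http://127.0.0.1:8081;   # WordPress backend (port assumed)
        proxy_set_header Host $host;
    }

    location / {
        proxy_pass http://127.0.0.1:8080;   # IIS backend (port assumed)
        proxy_set_header Host $host;
    }
}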

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >