Search Results

Search found 22717 results on 909 pages for 'load operation'.


  • Creating the Business Card Request InfoPath Form

    - by JKenderdine
    Business Card Request Demo Files Back in January I spoke at SharePoint Saturday Virginia Beach about InfoPath forms and Web Part deployment.  Below is some of the information and details regarding the form I created for the session.  There are many blogs and Microsoft articles on how to create a basic form so I won’t repeat that information here.   This blog will just explain a few of the options I chose when creating the solutions for SPS Virginia Beach.  The above link contains the zipped package files of the two InfoPath forms(no code solution and coded solution), the list template for the Location list I used, and the PowerPoint deck.  If you plan to use these templates, you will need to update the forms to work within your own environments (change data connections, code links, etc.).  Also, you must have the SharePoint Enterprise version, with InfoPath Services configured in order to use the Web Browser enabled forms. So what are the requirements for this template? Business Card Request Form Template Design Plan: Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Show and hide fields as necessary for requirements Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. Browser based form integrated into SharePoint team site Submitted directly to form library The base form was created using the blank template.  The table and rows were added using Insert tab and selecting Custom Table.  The use of tables is a great way to make sure everything lines up.  You do have to split the tables from time to time.  If you’ve ever split cells and then tried to re-align one to find that you impacted the others, you know why.  Here is what the base form looks like in InfoPath.   Show and hide fields as necessary for requirements You will notice I also used Sections within the form.  These show or hide depending on options selected or whether or not fields are blank.  This is a great way to prevent your users from feeling overwhelmed with a large form (this one wouldn’t apply).  Although not used in this one, you can also use various views with a tab interface.  I’ll show that in another post. Gather user information and requirements for card Pull in as much user information as possible. Use data from the user profile web services as a data source Utilizing rules you can load data when the form initiates (Data tab, Form Load).  Anything you can automate is always appreciated by the user as that is data they don’t have to enter.  For example, loading their user id or other user information on load: Always keep in mind though how much data you load and the method for loading that data (through rules, code, etc.).  They have an impact on form performance.  The form will take longer to load if you bring in a ton of data from external sources.  Laura Rogers has a great blog post on using the User Information List to load user information.   If the user has logged into SharePoint, then this can be used quite effectively and without a huge performance hit.   What I have found is that using the User Profile service via code behind or the Web Service “GetUserProfileByName” (as above) can take more time to load the user data.  Just food for thought. You must add the data connection in order for the above rules to work.  
You can connect to the data connection through the Data tab, Data Connections or select Manage Data Connections link which appears under the main data source.  The data connections can be SharePoint lists or libraries, SQL data tables, XML files, etc.  Create multiple views – one for those submitting the form and Another view for the executive assistants placing the orders. You can also create multiple views for the users to enhance their experience.  Once they’ve entered the information and submitted their request for business cards, they don’t really need to see the main data input screen any more.  They just need to view what they entered. From the Page Design tab, select New View and give the view a name.  To review the existing views, click the down arrow under View: The ReviewView shows just what the user needs and nothing more: Once you have everything configured, the form should be tested within a Test SharePoint environment before final deployment to production.  This validates you don’t have any rules or code that could impact the server negatively. Submitted directly to form library   You will need to know the form library that you will be submitting to when publishing the template.  Configure the Submit data connection to connect to this library.  There is already one configured in the sample,  but it will need to be updated to your environment prior to publishing. The Design template is different from the Published template.  While both have the .XSN extension, the published template contains all the “package” information for the form.  The published form is what is loaded into Central Admin, not the design template. Browser based form integrated into SharePoint team site In Central Admin, under General Settings, select Manage Form Templates.  Upload the published form template and Activate it to a site collection. Now it is available as a content type to select in the form library.  Some documentation on publishing form templates:  Technet – Manage administrator approved form templates And that’s all our base requirements.  Hope this helps to give a good start.

    Read the article

  • Managing text-maps in a 2D array on to be painted on HTML5 Canvas

    - by weka
    So, I'm making a HTML5 RPG just for fun. The map is a <canvas> (512px width, 352px height | 16 tiles across, 11 tiles top to bottom). I want to know if there's a more efficient way to paint the <canvas>. Here's how I have it right now. How tiles are loaded and painted on map The map is being painted by tiles (32x32) using the Image() piece. The image files are loaded through a simple for loop and put into an array called tiles[] to be PAINTED on using drawImage(). First, we load the tiles... and here's how it's being done: // SET UP THE & DRAW THE MAP TILES tiles = []; var loadedImagesCount = 0; for (x = 0; x <= NUM_OF_TILES; x++) { var imageObj = new Image(); // new instance for each image imageObj.src = "js/tiles/t" + x + ".png"; imageObj.onload = function () { console.log("Added tile ... " + loadedImagesCount); loadedImagesCount++; if (loadedImagesCount == NUM_OF_TILES) { // Onces all tiles are loaded ... // We paint the map for (y = 0; y <= 15; y++) { for (x = 0; x <= 10; x++) { theX = x * 32; theY = y * 32; context.drawImage(tiles[5], theY, theX, 32, 32); } } } }; tiles.push(imageObj); } Naturally, when a player starts a game it loads the map they last left off. But for here, it an all-grass map. Right now, the maps use 2D arrays. Here's an example map. [[4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 1, 1, 1, 1], [1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 13, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 13, 11, 11, 11, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 1, 1, 1, 1, 1, 1, 1, 13, 13, 13, 13, 13, 1], [1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1, 1, 1]]; I get different maps using a simple if structure. Once the 2d array above is return, the corresponding number in each array will be painted according to Image() stored inside tile[]. Then drawImage() will occur and paint according to the x and y and times it by 32 to paint on the correct x-y coordinate. How multiple map switching occurs With my game, maps have five things to keep track of: currentID, leftID, rightID, upID, and bottomID. currentID: The current ID of the map you are on. leftID: What ID of currentID to load when you exit on the left of current map. rightID: What ID of currentID to load when you exit on the right of current map. downID: What ID of currentID to load when you exit on the bottom of current map. upID: What ID of currentID to load when you exit on the top of current map. Something to note: If either leftID, rightID, upID, or bottomID are NOT specific, that means they are a 0. That means they cannot leave that side of the map. It is merely an invisible blockade. So, once a person exits a side of the map, depending on where they exited... for example if they exited on the bottom, bottomID will the number of the map to load and thus be painted on the map. Here's a representational .GIF to help you better visualize: As you can see, sooner or later, with many maps I will be dealing with many IDs. And that can possibly get a little confusing and hectic. The obvious pros is that it load 176 tiles at a time, refresh a small 512x352 canvas, and handles one map at time. The con is that the MAP ids, when dealing with many maps, may get confusing at times. 
My question Is this an efficient way to store maps (given the usage of tiles), or is there a better way to handle maps? I was thinking along the lines of a giant map. The map-size is big and it's all one 2D array. The viewport, however, is still 512x352 pixels. Here's another .gif I made (for this question) to help visualize: Sorry if you cannot understand my English. Please ask anything you have trouble understanding. Hopefully, I made it clear. Thanks.
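    For what it's worth, here is a minimal sketch of the "one large map, small viewport" idea described in the question (this is not from the original post; worldMap, tiles, camX and camY are assumed names for the full 2D array, the preloaded Image objects and the camera's top-left tile):

        var TILE_SIZE = 32, VIEW_COLS = 16, VIEW_ROWS = 11;

        // Paint only the 16x11 window of the big map that the camera currently covers.
        // worldMap: large 2D array of tile indexes; tiles: preloaded Image objects;
        // camX/camY: top-left tile of the camera in map coordinates (assumed names).
        function drawViewport(context, worldMap, tiles, camX, camY) {
            for (var row = 0; row < VIEW_ROWS; row++) {
                for (var col = 0; col < VIEW_COLS; col++) {
                    var mapRow = camY + row;
                    var mapCol = camX + col;
                    // Skip cells outside the world so the camera can sit at an edge.
                    if (mapRow < 0 || mapRow >= worldMap.length ||
                        mapCol < 0 || mapCol >= worldMap[mapRow].length) {
                        continue;
                    }
                    var tileId = worldMap[mapRow][mapCol];
                    context.drawImage(tiles[tileId], col * TILE_SIZE, row * TILE_SIZE,
                                      TILE_SIZE, TILE_SIZE);
                }
            }
        }

    With this approach there are no per-map IDs to track; scrolling or switching areas is just a matter of moving camX/camY and repainting the same 512x352 canvas.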

    Read the article

  • Using SQLite from PowerShell on Windows 7x64?

    - by jas
    I'm having a difficult time trying to load System.Data.SQLite.dll from PowerShell in Windows 7 x64. # x64 [void][System.Reflection.Assembly]::LoadFrom("C:\projects\PSScripts\lib\System.Data.SQLite.x64.DLL") # x86 #[void][System.Reflection.Assembly]::LoadFrom("C:\projects\PSScripts\lib\System.Data.SQLite.DLL") $conn = New-Object -TypeName System.Data.SQLite.SQLiteConnection $conn.ConnectionString = "Data Source=C:\temp\PSData.db" $conn.Open() $command = $conn.CreateCommand() $command.CommandText = "select DATETIME('NOW') as now, 'Bar' as Foo" $adapter = New-Object -TypeName System.Data.SQLite.SQLiteDataAdapter $command $dataset = New-Object System.Data.DataSet [void]$adapter.Fill($dataset) Trying to open the connection with the x64 assembly results in: Exception calling "Open" with "0" argument(s): "An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)" Trying to load the x86 assembly results in: Exception calling "LoadFrom" with "1" argument(s): "Could not load file or assembly 'file:///C:\projects\PSScripts\lib\System.Data.SQLite.DLL' or one of its dependencies. An attempt was made to load a program with an incorrect format." Any thoughts or ideas?
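    As a hedged aside: HRESULT 0x8007000B (BadImageFormatException) usually indicates that the bitness of the loaded assembly does not match the bitness of the PowerShell process. A minimal sketch that picks the matching build at runtime (paths taken from the question; [IntPtr]::Size is 8 in a 64-bit process and 4 in a 32-bit one):

        # Choose the SQLite build that matches the current process bitness; a 32-bit
        # PowerShell console (the x86 shell on x64 Windows) cannot load the x64 DLL,
        # and the 64-bit console cannot load the x86 one.
        $libDir = "C:\projects\PSScripts\lib"
        if ([IntPtr]::Size -eq 8) {
            $sqliteDll = Join-Path $libDir "System.Data.SQLite.x64.DLL"
        } else {
            $sqliteDll = Join-Path $libDir "System.Data.SQLite.DLL"
        }
        [void][System.Reflection.Assembly]::LoadFrom($sqliteDll)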

    Read the article

  • Select list value undefined in $(document).ready

    - by C. Ross
    I have the following code, which should load values when the selection changes and also perform the same load initially (since FF 'saves' the last value of the drop-down under certain circumstances). The select part of the function works correctly, but for some reason when calling load2 directly the value of $('#select1').value is undefined, even though when I check the DOM in Firebug right after the load, select1.value has a value. How can I run the load2 function when select1.value is ready? $(document).ready(function() { //Setup change hook $('#select1').change(function(event) { //Remove the old options right away $('#select2').find('option').remove(); //Load the new options load2(this.value); }); //Do load for current value load2($('#select1').value); });
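    One detail worth noting, as a hedged aside: a jQuery wrapper object has no .value property (that lives on the raw DOM element), which would explain why the initial call sees undefined. A minimal variation using .val() for the initial load:

        $(document).ready(function () {
            // Setup change hook
            $('#select1').change(function () {
                // Remove the old options right away, then load the new ones
                $('#select2').find('option').remove();
                load2(this.value);            // 'this' is the DOM element here
            });
            // Do load for current value: .val() is the jQuery accessor
            // (equivalently, $('#select1')[0].value reaches the DOM element)
            load2($('#select1').val());
        });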

    Read the article

  • passing multiple queries to view with codeigniter

    - by LvS
    I am trying to build a forum with Codeigniter. So far i have the forums themselves displayed and the threads displayed, based on the creating dynamic news tutorial. But that is 2 different pages, i need to obviously display them into one page, like this: Forum 1 - thread 1 - thread 2 - thread 3 Forum 2 - thread 1 - thread 2 etc. And then the next step is obviously to display all the posts in a thread. Most likely with some pagination going on. But that is for later. For now i have the forum controller (slimmed version): <?php class Forum extends CI_Controller { public function __construct() { parent::__construct(); $this->load->model('forum_model'); $this->lang->load('forum'); $this->lang->load('dutch'); } public function index() { $data['forums'] = $this->forum_model->get_forums(); $data['title'] = $this->lang->line('title'); $data['view'] = $this->lang->line('view'); $this->load->view('templates/header', $data); $this->load->view('forum/index', $data); $this->load->view('templates/footer'); } public function view($slug) { $data['forum_item'] = $this->forum_model->get_forums($slug); if (empty($data['forum_item'])) { show_404(); } $data['title'] = $data['forum_item']['title']; $this->load->view('templates/header', $data); $this->load->view('forum/view', $data); $this->load->view('templates/footer'); } } ?> And the forum_model (also slimmed down) <?php class Forum_model extends CI_Model { public function __construct() { $this->load->database(); } public function get_forums($slug = FALSE) { if ($slug === FALSE) { $query= $this->db->get('forum'); return $query->result_array(); } $query = $this->db->get_where('forum', array('slug' => $slug)); return $query->row_array(); } public function get_threads($forumid, $limit, $offset) { $query = $this->db->get_where('thread', array('forumid', $forumid), $limit, $offset); return $query->result_array(); } } ?> And the view file <?php foreach ($forums as $forum_item): ?> <h2><?=$forum_item['title']?></h2> <div id="main"> <?=$forum_item['description']?> </div> <p><a href="forum/<?php echo $forum_item['slug'] ?>"><?=$view?></a></p> <?php endforeach ?> Now that last one, i would like to have something like this: <?php foreach ($forums as $forum_item): ?> <h2><?=$forum_item['title']?></h2> <div id="main"> <?=$forum_item['description']?> </div> <?php foreach ($threads as $thread_item): ?> <h2><?php echo $thread_item['title'] ?></h2> <p><a href="thread/<?php echo $thread_item['slug'] ?>"><?=$view?></a></p> <?php endforeach ?> <?php endforeach ?> But the question is, how do i get the model to return like a double query to the view, so that it contains both the forums and the threads within each forum. I tried to make a foreach loop in the get_forum function, but when i do this: public function get_forums($slug = FALSE) { if ($slug === FALSE) { $query= $this->db->get('forum'); foreach ($query->row_array() as $forum_item) { $thread_query=$this->get_threads($forum_item->forumid, 50, 0); } return $query->result_array(); } $query = $this->db->get_where('forum', array('slug' => $slug)); return $query->row_array(); } i get the error A PHP Error was encountered Severity: Notice Message: Trying to get property of non-object Filename: models/forum_model.php Line Number: 16 I hope anyone has some good tips, thanks! Lenny *EDIT*** Thanks for the feedback. 
I have been puzzling and this seems to work now :) $query= $this->db->get('forum'); foreach ($query->result() as $forum_item) { $forum[$forum_item->forumid]['title']=$forum_item->title; $thread_query=$this->db->get_where('thread', array('forumid' => $forum_item->forumid), 20, 0); foreach ($thread_query->result() as $thread_item) { $forum[$forum_item->forumid]['thread'][]=$thread_item->title; } } return $forum; } What's next is how to display this multidimensional array in the view with foreach statements... Any suggestions? Thanks
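    A hedged sketch of the matching view code, assuming the controller passes the array built above to the view as $forums (e.g. $data['forums'] = $this->forum_model->get_forums();):

        <?php // Walk the structure built in the model: $forums[forumid]['title'] and
              // $forums[forumid]['thread'][] (an array of thread titles). ?>
        <?php foreach ($forums as $forumid => $forum_item): ?>
            <h2><?php echo $forum_item['title']; ?></h2>
            <?php if (!empty($forum_item['thread'])): ?>
                <ul>
                <?php foreach ($forum_item['thread'] as $thread_title): ?>
                    <li><?php echo $thread_title; ?></li>
                <?php endforeach; ?>
                </ul>
            <?php endif; ?>
        <?php endforeach; ?>

    Since the model currently stores only the thread titles, linking each thread would mean also storing its slug in the array (e.g. $forum[$forum_item->forumid]['thread'][] = array('title' => ..., 'slug' => ...)).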

    Read the article

  • Help with Magento and related products

    - by Anthony
I have a customer product page that literally lives beside the catalog/product/view.phtml page. It's basically identical to that page with a few small exceptions. It's basically a 'product of the day' type page so I can't combine it with the regular product page, since I have to fetch the data from the DB and perform a load to get the product information: $_product = Mage::getModel('catalog/product')->load($row['productid']); To make a long story short, everything works (including all child HTML blocks) with the singular exception of the related products. After the load I save the product into the registry with Mage::register('product', $_product); and then attempt to load the related products with: echo $this->getLayout()->createBlock('catalog/product_view')->setTemplate('catalog/product/list/related.phtml')->toHtml(); All of which gives back the error: Fatal error: Call to a member function getSize() on a non-object in catalog/product/list/related.phtml on line 29, and line 29 is <?php if($this->getItems()->getSize()): ?>. Any help getting the relateds to load would be appreciated.
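    For reference, a hedged sketch (Magento 1.x): related.phtml is normally rendered by the catalog/product_list_related block, which builds its item collection from the product placed in the registry, whereas the generic catalog/product_view block does not populate getItems() — which would explain the getSize() call on a non-object.

        // Load the product and put it in the registry, as in the question.
        $_product = Mage::getModel('catalog/product')->load($row['productid']);
        Mage::register('product', $_product);

        // Create the block type that the related-products template expects
        // (catalog/product_list_related, i.e. Mage_Catalog_Block_Product_List_Related).
        echo Mage::app()->getLayout()
            ->createBlock('catalog/product_list_related')
            ->setTemplate('catalog/product/list/related.phtml')
            ->toHtml();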

    Read the article

  • Flash AS3 for loop query

    - by jimbo
I was hoping for a way that I could save on code by creating a loop for a few lines of code. Let me explain a little; without a loop: icon1.button.iconLoad.load(new URLRequest("icons/icon1.jpg")); icon2.button.iconLoad.load(new URLRequest("icons/icon2.jpg")); icon3.button.iconLoad.load(new URLRequest("icons/icon3.jpg")); icon4.button.iconLoad.load(new URLRequest("icons/icon4.jpg")); etc... But with a loop, could I have something like: for (var i:uint = 0; i < 4; i++) { icon+i+.button.iconLoad.load(new URLRequest("icons/icon"+i+"jpg")); } Any ideas welcome...
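    A hedged sketch of one common way to do this in AS3: if icon1–icon4 are instance names on the same timeline or class, they can be reached with bracket syntax (note the loop above also misses the dot before "jpg" and starts at 0 while the clips are numbered from 1):

        for (var i:uint = 1; i <= 4; i++) {
            // this["icon" + i] resolves the dynamically built instance name
            var icon:MovieClip = this["icon" + i] as MovieClip;
            icon.button.iconLoad.load(new URLRequest("icons/icon" + i + ".jpg"));
        }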

    Read the article

  • Jquery How to use history plugin?

    - by Martijn
In my web application I am using ajax and now I'd like the back and forward browser buttons to work. So I went looking for a jQuery history plugin and found this one: http://stilbuero.de/jquery/history In my code I use a function to load a page: function loadDocument(id, doc) { $("#DocumentContent").show(); // Clear dynamic menu items $("#DynamicMenuContent").html(""); $("#PageContent").html(""); // Load document in frame $("#iframeDocument").attr("src", 'ViewDoc.aspx?id=' + id + '&doc=' + doc + ''); // Load menu items $("#DynamicMenuContent").load("ShowButtons.aspx"); } As you can see I want my pages to load within an iframe. Can someone tell me how I can use a history plugin so that the browser buttons will work? I don't really care which plugin it is, as long as the browser buttons work. I prefer an easy-to-use plugin.
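    Independent of which plugin is chosen, the usual pattern is to keep the current page in location.hash and do the actual loading from a hash listener, so back/forward (which only change the hash) replay it. A hedged sketch using the native hashchange event (IE8+ and other modern browsers) instead of a plugin, reusing the function from the question:

        // Does the real work: show the document in the iframe and load the menu.
        function showDocument(id, doc) {
            $("#DocumentContent").show();
            $("#DynamicMenuContent").html("");
            $("#PageContent").html("");
            $("#iframeDocument").attr("src", "ViewDoc.aspx?id=" + id + "&doc=" + doc);
            $("#DynamicMenuContent").load("ShowButtons.aspx");
        }

        // Navigation only records the state in the hash; the listener below loads it,
        // so the back and forward buttons replay earlier states automatically.
        function loadDocument(id, doc) {
            location.hash = id + "/" + doc;
        }

        $(window).bind("hashchange", function () {
            var parts = location.hash.replace(/^#/, "").split("/");
            if (parts.length === 2) {
                showDocument(parts[0], parts[1]);
            }
        });

        $(window).trigger("hashchange");   // handle a bookmarked hash on first load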

    Read the article

  • Hibernate: delete many-to-many association

    - by Bar
I have two tables with a many-to-many association. — DB fragment: loads Id Name sessions Id Date sessionsloads LoadId SessionId — Hibernate mapping fragments: /* loads.hbm.xml */ <set name="sessions" table="sessionsloads" inverse="true"> <key column="LoadId" /> <many-to-many column="SessionId" class="Session" /> </set> … /* sessions.hbm.xml */ <set name="loads" table="sessionsloads"> <key column="SessionId" /> <many-to-many column="LoadId" class="Load" /> </set> In order to remove one entry from the association table sessionsloads, I execute this code: Session session = sessionDao.getObject(sessionId); Load load = loadDao.getObject(loadId); load.getSessions().remove(session); loadDao.saveObject(load); But, after running it, this code changes nothing. What's the right way to remove an association?
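    A hedged sketch of the usual fix (getter names assumed): because Load.sessions is mapped with inverse="true", Hibernate writes the sessionsloads rows from the owning side, Session.loads, so the link has to be removed there; removing it from both collections also keeps the in-memory model consistent.

        Session session = sessionDao.getObject(sessionId);
        Load load = loadDao.getObject(loadId);

        // Owning (non-inverse) side: this is what actually deletes the row
        // in sessionsloads when the session entity is flushed.
        session.getLoads().remove(load);
        // Inverse side: keep both ends of the association in sync in memory.
        load.getSessions().remove(session);

        sessionDao.saveObject(session);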

    Read the article

  • wsdl interoperability problems

    - by manu1001
    I wrote a .asmx web service which I'm trying to consume from a java client. I'm using axis2's wsdl2java to generate code. But it says that the wsdl is invalid. What exactly is the problem here? It is .net which generated the wsdl automatically after all. Are there problems with wsdl standards, rather the lack of them? What can I do now? I'm putting the wsdl here for reference. <?xml version="1.0" encoding="utf-8"?> <wsdl:definitions xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/" xmlns:tns="http://localhost:4148/" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:soap12="http://schemas.xmlsoap.org/wsdl/soap12/" xmlns:http="http://schemas.xmlsoap.org/wsdl/http/" targetNamespace="http://localhost:4148/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"> <wsdl:types> <s:schema elementFormDefault="qualified" targetNamespace="http://localhost:4148/"> <s:element name="GetUser"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="uid" type="s:string" /> </s:sequence> </s:complexType> </s:element> <s:element name="GetUserResponse"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="GetUserResult" type="tns:User" /> </s:sequence> </s:complexType> </s:element> <s:complexType name="User"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="HA" type="tns:ComplexT" /> <s:element minOccurs="0" maxOccurs="1" name="AP" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="AL" type="tns:ArrayOfString" /> <s:element minOccurs="0" maxOccurs="1" name="CO" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="EP" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="ND" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="AE" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="IE" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="IN" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="HM" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="AN" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="MI" type="tns:ArrayOfString" /> <s:element minOccurs="0" maxOccurs="1" name="NO" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="TL" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="UI" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="DT" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="PT" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="PO" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="AE" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="ME" type="tns:ArrayOfString" /> </s:sequence> </s:complexType> <s:complexType name="ComplexT"> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="SR" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="CI" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="TA" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="SC" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="RU" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="HN" type="s:string" /> </s:sequence> </s:complexType> <s:complexType name="ArrayOfString"> <s:sequence> <s:element minOccurs="0" maxOccurs="unbounded" name="string" nillable="true" type="s:string" /> </s:sequence> </s:complexType> </s:schema> </wsdl:types> <wsdl:message 
name="GetUserSoapIn"> <wsdl:part name="parameters" element="tns:GetUser" /> </wsdl:message> <wsdl:message name="GetUserSoapOut"> <wsdl:part name="parameters" element="tns:GetUserResponse" /> </wsdl:message> <wsdl:portType name="UserServiceSoap"> <wsdl:operation name="GetUser"> <wsdl:input message="tns:GetUserSoapIn" /> <wsdl:output message="tns:GetUserSoapOut" /> </wsdl:operation> </wsdl:portType> <wsdl:binding name="UserServiceSoap" type="tns:UserServiceSoap"> <soap:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="GetUser"> <soap:operation soapAction="http://localhost:4148/GetUser" style="document" /> <wsdl:input> <soap:body use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:binding name="UserServiceSoap12" type="tns:UserServiceSoap"> <soap12:binding transport="http://schemas.xmlsoap.org/soap/http" /> <wsdl:operation name="GetUser"> <soap12:operation soapAction="http://localhost:4148/GetUser" style="document" /> <wsdl:input> <soap12:body use="literal" /> </wsdl:input> <wsdl:output> <soap12:body use="literal" /> </wsdl:output> </wsdl:operation> </wsdl:binding> <wsdl:service name="UserService"> <wsdl:port name="UserServiceSoap" binding="tns:UserServiceSoap"> <soap:address location="http://localhost:4148/Service/UserService.asmx" /> </wsdl:port> <wsdl:port name="UserServiceSoap12" binding="tns:UserServiceSoap12"> <soap12:address location="http://localhost:4148/Service/UserService.asmx" /> </wsdl:port> </wsdl:service> </wsdl:definitions>

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • WCF REST Service Activation Errors when AspNetCompatibility is enabled

    - by Rick Strahl
    I’m struggling with an interesting problem with WCF REST since last night and I haven’t been able to track this down. I have a WCF REST Service set up and when accessing the .SVC file it crashes with a version mismatch for System.ServiceModel: Server Error in '/AspNetClient' Application. Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [TypeLoadException: Could not load type 'System.ServiceModel.Activation.HttpHandler' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.] System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, Boolean loadTypeFromPartialName, ObjectHandleOnStack type) +0 System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName) +95 System.RuntimeType.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark) +54 System.Type.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +65 System.Web.Compilation.BuildManager.GetType(String typeName, Boolean throwOnError, Boolean ignoreCase) +69 System.Web.Configuration.HandlerFactoryCache.GetTypeWithAssert(String type) +38 System.Web.Configuration.HandlerFactoryCache.GetHandlerType(String type) +13 System.Web.Configuration.HandlerFactoryCache..ctor(String type) +19 System.Web.HttpApplication.GetFactory(String type) +81 System.Web.MaterializeHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +223 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +184 Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1 What’s really odd about this is that it crashes only if it runs inside of IIS (it works fine in Cassini) and only if ASP.NET Compatibility is enabled in web.config:<serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> Arrrgh!!!!! After some experimenting and some help from Glenn Block and his team mates I was able to track down the problem in ApplicationHost.config. Specifically the problem was that there were multiple *.svc mappings in the ApplicationHost.Config file and the older 2.0 runtime specific versions weren’t marked for the proper runtime. Because these handlers show up at the top of the list they execute first resulting in assembly load errors for the wrong version assembly. To fix this problem I ended up making a couple changes in applicationhost.config. 
On the machine-level root's Handler Mappings I had an entry that looked like this:

<add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode" />

and it needs to be changed to this:

<add name="svc-Integrated" path="*.svc" verb="*" type="System.ServiceModel.Activation.HttpHandler, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" preCondition="integratedMode,runtimeVersionv2.0" />

Notice the explicit runtime version assignment in the preCondition attribute, which is key to keeping ASP.NET 4.0 from executing that handler. The point is that the runtime version needs to be set explicitly so that the various *.svc handlers aren't picked purely in the order they are defined, which in the case of a .NET 4.0 app with the original setting would result in an incompatible version of System.ServiceModel being loaded.

What made this really hard to track down is that even when looking in the debugger when launching the Web app, the AppDomain assembly loads showed System.ServiceModel V4.0 starting up just fine. Apparently the ASP.NET runtime load occurs at a different point, and that's when things break.

So how did this break? According to the Microsoft folks it's some older tools that got installed that change the default service handlers. There's a blog entry that points at this problem with more detail: http://blogs.iis.net/webtopics/archive/2010/04/28/system-typeloadexception-for-system-servicemodel-activation-httpmodule-in-asp-net-4.aspx

Note that I tried running aspnet_regiis and that did not fix the problem for me. I had to manually change the entries in applicationhost.config.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in AJAX  ASP.NET  WCF

    Read the article

  • WCF RIA Services DomainContext Abstraction Strategies–Say That 10 Times!

    - by dwahlin
The DomainContext available with WCF RIA Services provides a lot of functionality that can help track object state and handle making calls from a Silverlight client to a DomainService. One of the questions I get quite often in our Silverlight training classes (and see often in various forums and other areas) is how the DomainContext can be abstracted out of ViewModel classes when using the MVVM pattern in Silverlight applications. It's not something that's super obvious at first, especially if you don't work with delegates a lot, but it can definitely be done. There are various techniques and strategies that can be used, but I thought I'd share some of the core techniques I find useful. To start, let's assume you have the following ViewModel class (this is from my Silverlight Firestarter talk, available to watch online here if you're interested in getting started with WCF RIA Services):

    public class AdminViewModel : ViewModelBase
    {
        BookClubContext _Context = new BookClubContext();

        public AdminViewModel()
        {
            if (!DesignerProperties.IsInDesignTool)
            {
                LoadBooks();
            }
        }

        private void LoadBooks()
        {
            _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, null);
        }

        private void LoadBooksCallback(LoadOperation<Book> books)
        {
            Books = new ObservableCollection<Book>(books.Entities);
        }
    }

Notice that BookClubContext is being used directly in the ViewModel class. There's nothing wrong with that of course, but if other ViewModel objects need to load books then code would be duplicated across classes. Plus, the ViewModel has direct knowledge of how to load data and I like to make it more loosely coupled. To do this I create what I call a "Service Agent" class. This class is responsible for getting data from the DomainService and returning it to a ViewModel. It only knows how to get and return data but doesn't know how data should be stored and isn't used with data binding operations. An example of a simple ServiceAgent class is shown next. Notice that I'm using the Action<T> delegate to handle callbacks from the ServiceAgent to the ViewModel object. Because LoadBooks accepts an Action<ObservableCollection<Book>>, the callback method in the ViewModel must accept ObservableCollection<Book> as a parameter. The callback is initiated by calling the Invoke method exposed by Action<T>:

    public class ServiceAgent
    {
        BookClubContext _Context = new BookClubContext();

        public void LoadBooks(Action<ObservableCollection<Book>> callback)
        {
            _Context.Load(_Context.GetBooksQuery(), LoadBooksCallback, callback);
        }

        public void LoadBooksCallback(LoadOperation<Book> lo)
        {
            // Check for errors of course...keeping this brief
            var books = new ObservableCollection<Book>(lo.Entities);
            var action = (Action<ObservableCollection<Book>>)lo.UserState;
            action.Invoke(books);
        }
    }

This can be simplified by taking advantage of lambda expressions. Notice that in the following code I don't have a separate callback method and don't have to worry about passing any user state or casting any user state (the user state is the 3rd parameter in the _Context.Load method call shown above).
    public class ServiceAgent
    {
        BookClubContext _Context = new BookClubContext();

        public void LoadBooks(Action<ObservableCollection<Book>> callback)
        {
            _Context.Load(_Context.GetBooksQuery(), (lo) =>
            {
                var books = new ObservableCollection<Book>(lo.Entities);
                callback.Invoke(books);
            }, null);
        }
    }

A ViewModel class can then call into the ServiceAgent to retrieve books yet never know anything about the DomainContext object or even know how data is loaded behind the scenes:

    public class AdminViewModel : ViewModelBase
    {
        ServiceAgent _ServiceAgent = new ServiceAgent();

        public AdminViewModel()
        {
            if (!DesignerProperties.IsInDesignTool)
            {
                LoadBooks();
            }
        }

        private void LoadBooks()
        {
            _ServiceAgent.LoadBooks(LoadBooksCallback);
        }

        private void LoadBooksCallback(ObservableCollection<Book> books)
        {
            Books = books;
        }
    }

You could also handle the LoadBooksCallback method using a lambda if you wanted to minimize code, just like I did earlier with the LoadBooks method in the ServiceAgent class. If you're into Dependency Injection (DI), you could create an interface for the ServiceAgent type, reference it in the ViewModel and then inject in the object to use at runtime. There are certainly other techniques and strategies that can be used, but the code shown here provides an introductory look at the topic that should help get you started abstracting the DomainContext out of your ViewModel classes when using WCF RIA Services in Silverlight applications.
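Picking up the DI suggestion at the end of the excerpt, a minimal sketch of the interface-based variant; the IServiceAgent name and constructor-injection wiring are illustrative assumptions rather than code from the original post, and the Book, BookClubContext and ViewModelBase types are those used above.

```csharp
// Hypothetical abstraction over the ServiceAgent shown above.
public interface IServiceAgent
{
    void LoadBooks(Action<ObservableCollection<Book>> callback);
}

public class ServiceAgent : IServiceAgent
{
    BookClubContext _Context = new BookClubContext();

    public void LoadBooks(Action<ObservableCollection<Book>> callback)
    {
        _Context.Load(_Context.GetBooksQuery(), (lo) =>
        {
            callback(new ObservableCollection<Book>(lo.Entities));
        }, null);
    }
}

public class AdminViewModel : ViewModelBase
{
    private readonly IServiceAgent _ServiceAgent;

    // The DI container (or a unit test) supplies the agent; the ViewModel
    // no longer constructs it and never sees the DomainContext.
    public AdminViewModel(IServiceAgent serviceAgent)
    {
        _ServiceAgent = serviceAgent;

        if (!DesignerProperties.IsInDesignTool)
        {
            _ServiceAgent.LoadBooks(books => Books = books);
        }
    }
}
```

A test could then pass in a fake IServiceAgent that returns canned data without ever touching the DomainService.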

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
Significance of Results

Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers. The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32% and 2x on Data Loading, Default Aggregation and Usage Based Aggregation, respectively. The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database. The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads. The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size fiber channel disk array for default aggregation and usage-based aggregation. The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation. Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

Performance Landscape

System                               | Data Size (millions of items) | Database Load (minutes) | Default Aggregation (minutes) | Usage Based Aggregation (minutes)
SPARC T4-2, 2 x SPARC T4 2.85 GHz    | 1000                          | 149                     | 398*                          | 55
Sun M5000, 4 x SPARC64 VII 2.53 GHz  | 1000                          | 269                     | 526                           | 115
Sun M5000, 4 x SPARC64 VII 2.4 GHz   | 400                           | 120                     | 448                           | 18

* – 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8

Configuration Summary

Hardware Configuration:
- 1 x SPARC T4-2
- 2 x 2.85 GHz SPARC T4 processors
- 128 GB memory
- 2 x 300 GB 10000 RPM SAS internal disks

Storage Configuration:
- 1 x Sun Storage F5100 Flash Array
- 40 x 24 GB flash modules
- SAS HBA with 2 SAS channels
- Data storage scheme: striped - RAID 0
- Oracle Solaris ZFS

Software Configuration:
- Oracle Solaris 10 8/11
- Installer V 11.1.2.2.100
- Oracle Essbase Client v 11.1.2.2.100
- Oracle Essbase v 11.1.2.2.100
- Oracle Essbase Administration Services 64-bit
- Oracle Database 11g Release 2 (11.2.0.3)
- HP's Mercury Interactive QuickTest Professional 9.5.0

Benchmark Description

The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:
- Database Load: time elapsed to build a database, including outline and data load.
- Default Aggregation: time elapsed to build the default aggregation.
- Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.
Summary of the data used for this benchmark:
- 40 flat files, each of size 1.2 GB, 49.4 GB in total
- 10 million rows per file, 1 billion rows total
- 28 columns of data per row
- Database outline has 15 dimensions (five of them are attribute dimensions)
- Customer dimension has 13.3 million members
- 3 rule files

Key Points and Best Practices

The Sun Storage F5100 Flash Array was used to accelerate application performance. Setting the data load threads (DLTHREADSPREPARE) to 64 and the Load Buffer to 6 improved data loading by about 9%. The factors influencing aggregation materialization performance are the "Aggregate Storage Cache" and the "Number of Threads" (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were:
- Aggregate Storage Cache: 32 GB
- CALCPARALLEL: 16

See Also
- Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com)
- Oracle Essbase (oracle.com OTN)
- SPARC T4-2 Server (oracle.com OTN)
- Oracle Solaris (oracle.com OTN)
- Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Ubuntu + Wacom Intuos 4 + MyPaint HELP!

    - by Sativa
    Please keep in mind I'm not that computer savvy, but I will try any suggestion so please help me out! My tablet will stop working if the USB connection is ever broken, or the Ubuntu software is being updated. Sometimes it will stop working for no reason that I can see. The lights will still be on, but it won't be responsive. It doesn't work again until I restart the laptop with the tablet plugged in, which is grating if you have to do it every 25 min. or so... I'm not sure if the issue is with the port, the tablet/cable or the driver but any suggestions would be very welcome! Also, MyPaint is frequently having hiccups. It seems to save fine but at times it will randomly close down and when I open files they're often empty. They turn into 0Kb files and only contain a single empty layer. Also very grating, considering I lose days of work for no clear reason and without any heads up. Again, I'm not sure if the issue is with the port, the tablet/cable or the driver but any suggestions would be very welcome! The error message reads; Traceback (most recent call last): File "/usr/share/mypaint/gui/application.py", line 177, at_application_start(*junk=()) else: self.filehandler.open_file(fn) variables: {'fn': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora'), 'self.filehandler.open_file': ('local', <bound method FileHandler.wrapper of <gui.filehandling.FileHandler object at 0x7fdb89063a10>>)} File "/usr/share/mypaint/gui/drawwindow.py", line 60, wrapper(self=<gui.filehandling.FileHandler object>, *args=(u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',), **kwargs={}) try: func(self, *args, **kwargs) # gtk main loop may be called in here... variables: {'self': ('local', <gui.filehandling.FileHandler object at 0x7fdb89063a10>), 'args': ('local', (u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora',)), 'func': ('local', <function open_file at 0x7fdb8b397b90>), 'kwargs': ('local', {})} File "/usr/share/mypaint/gui/filehandling.py", line 231, open_file(self=<gui.filehandling.FileHandler object>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora') try: self.doc.model.load(filename, feedback_cb=self.gtk_main_tick) except document.SaveLoadError, e: variables: {'self.doc.model.load': ('local', <bound method Document.load of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'feedback_cb': (None, []), 'self.gtk_main_tick': ('local', <function gtk_main_tick at 0x7fdb8b397b18>), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')} File "/usr/share/mypaint/lib/document.py", line 544, load(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', **kwargs={'feedback_cb': <function gtk_main_tick>}) try: load(filename, **kwargs) except gobject.GError, e: variables: {'load': ('local', <bound method Document.load_ora of <lib.document.Document instance at 0x7fdb8ab4f8c0>>), 'kwargs': ('local', {'feedback_cb': <function gtk_main_tick at 0x7fdb8b397b18>}), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora')} File "/usr/share/mypaint/lib/document.py", line 772, load_ora(self=<lib.document.Document instance>, filename=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', feedback_cb=<function gtk_main_tick>) tempdir = tempdir.decode(sys.getfilesystemencoding()) z = zipfile.ZipFile(filename) print 'mimetype:', z.read('mimetype').strip() variables: {'zipfile.ZipFile': ('global', <class 'zipfile.ZipFile'>), 'z': (None, []), 'filename': ('local', u'/home/maria/Desktop/Drawings/WIPs/Sativa 
Chibi.ora')} File "/usr/lib/python2.7/zipfile.py", line 770, __init__(self=<zipfile.ZipFile object>, file=u'/home/maria/Desktop/Drawings/WIPs/Sativa Chibi.ora', mode='r', compression=0, allowZip64=False) if key == 'r': self._RealGetContents() elif key == 'w': variables: {'self._RealGetContents': ('local', <bound method ZipFile._RealGetContents of <zipfile.ZipFile object at 0x7fdb9b952790>>)} File "/usr/lib/python2.7/zipfile.py", line 811, _RealGetContents(self=<zipfile.ZipFile object>) if not endrec: raise BadZipfile, "File is not a zip file" if self.debug > 1: variables: {'BadZipfile': ('global', <class 'zipfile.BadZipfile'>)} BadZipfile: File is not a zip file

    Read the article

  • Creating a voxel world with 3D arrays using threads

    - by Sean M.
I am making a voxel game (a bit like Minecraft) in C++11, and I've come across an issue with creating a world efficiently. In my program, I have a World class, which holds a 3D array of Region class pointers. When I initialize the world, I give it a width, height, and depth so it knows how large of a world to create. Each Region is split up into a 32x32x32 area of blocks, so as you may guess, it takes a while to initialize the world once the world gets to be above 8x4x8 Regions. In order to alleviate this issue, I thought that using threads to generate different levels of the world concurrently would make it go faster. Having not used threads much before this, and being still relatively new to C++, I'm not entirely sure how to go about implementing one thread per level (a level being an xz plane with a height of 1) when there is a variable number of levels. I tried this:

    for(int i = 0; i < height; i++) {
        std::thread th(std::bind(&World::load, this, width, height, depth));
        th.join();
    }

where load() just loads all Regions at height "height". But that executes the threads one at a time (which makes sense, looking back), and that of course takes as long as generating all Regions in one loop. I then tried:

    std::thread t1(std::bind(&World::load, this, w, h1, h2 - 1, d));
    std::thread t2(std::bind(&World::load, this, w, h2, h3 - 1, d));
    std::thread t3(std::bind(&World::load, this, w, h3, h4 - 1, d));
    std::thread t4(std::bind(&World::load, this, w, h4, h - 1, d));
    t1.join();
    t2.join();
    t3.join();
    t4.join();

This works in that the world loads about 3-3.5 times faster, but it forces the height to be a multiple of 4, and it also gives the same exact VAO object to every single Region, when each Region needs its own VAO in order to render properly. The VAO of each Region is set in the constructor, so I'm assuming that somehow the VAO number is not thread safe or something (again, unfamiliar with threads). So basically, my question has two parts:
1. How do I implement a variable number of threads that all execute at the same time, and force the main thread to wait for them using join() without stopping the other threads?
2. How do I make the VAO objects thread safe, so that when a bunch of Regions are being created at the same time across multiple threads, they don't all get the exact same VAO?
Turns out it has to do with GL contexts not working across multiple threads. I moved the VAO/VBO creation back to the main thread. Fixed! Here is the code for block.h/.cpp, region.h/.cpp, and CVBObject.h/.cpp which controls VBOs and VAOs, in case you need it. If you need to see anything else just ask. EDIT: Also, I'd prefer not to have answers that are like "you should have used boost". I'm trying to do this without boost to get used to threads before moving on to other libraries.

    Read the article

  • Pagination in Tab Content of Jquery UI

    - by DR.GEWA
I am making a page with jQuery UI tabs. In one of the tabs I want to add pagination for comments. What I want is this: when the user clicks on a page number, the data of the linked page should load via AJAX into the tab. Here is my code:

    <script type="text/javascript">
        $(function() {
            $("#tabs").tabs();
        });
        $('#demo').tabs({
            load: function(event, ui) {
                $('a', ui.panel).click(function() {
                    $(ui.panel).load(this.href);
                    return false;
                });
            }
        });
    </script>

How can I change this so that it applies to any link inside the content with, let's say, id "pagination", such as <a href="/coments?page=12&user_id=12" id="pagination">12</a>? Thanks beforehand.

I also recently tried it like this, with no luck. This is from http://www.bannerite.com/jquerytest/index2.php where it works:

    <script type="text/javascript">
        $(function() {
            $("#tabs").tabs();
            var hijax = function(panel){
                $('a', panel).click(function() {
                    /* makes children 'a' on panel load via ajax, hence hijax */
                    $(panel).load(this.href, null, function(){
                        /* recursively apply the hijax to newly loaded panels */
                        hijax(panel);
                    });
                    /* prevents propagation of click to document */
                    return false;
                });
                $('form', panel).css({background: 'yellow'}).ajaxForm({
                    target: panel
                });
            };
            $('#demo ul').tabs({
                /* initialize the tabs hijax pattern */
                load: function(tab, panel) {
                    hijax(panel);
                }
            });
        });
    </script>

Here is the HTML part as Karl gave it, but with no success. When clicking on the Test link it goes to another page... I wanted to post the HTML here, but because users with less than 10 reputation can't post HTML with links, I have posted it here: http://gevork.ru/2009/12/31/jquery-ui-tab-and-pagination-html/#more-58

    Read the article

  • Eclipse throwing error when copying and pasting

    - by hoffmandirt
I am using Eclipse 3.5 SR2 for Java EE developers. Each time I press Ctrl+C or Ctrl+V for the first time after I open a file I get an error. After I close the error, I can successfully copy and paste. The error message made me believe that it was related to the Mylyn plugin, but I uninstalled it and it made no difference. Has anyone else experienced this problem? I also have the Subclipse, Adobe Flex Builder, and Maven plugins installed.

The 'org.eclipse.mylyn.tasks.ui.hyperlinks.detectors.url' extension from plug-in 'org.eclipse.mylyn.tasks.ui' to the 'org.eclipse.ui.workbench.texteditor.hyperlinkDetectors' extension point failed to load the hyperlink detector. Plug-in org.eclipse.mylyn.tasks.ui was unable to load class org.eclipse.mylyn.internal.tasks.ui.editors.TaskUrlHyperlinkDetector. An error occurred while automatically activating bundle org.eclipse.mylyn.tasks.ui (520).

The 'org.eclipse.mylyn.java.hyperlink.detector.stack' extension from plug-in 'org.eclipse.mylyn.java.tasks' to the 'org.eclipse.ui.workbench.texteditor.hyperlinkDetectors' extension point failed to load the hyperlink detector. Plug-in org.eclipse.mylyn.java.tasks was unable to load class org.eclipse.mylyn.internal.java.tasks.JavaStackTraceHyperlinkDetector. org/eclipse/mylyn/tasks/ui/AbstractTaskHyperlinkDetector

The 'org.eclipse.mylyn.tasks.ui.hyperlinks.detectors.task' extension from plug-in 'org.eclipse.mylyn.tasks.ui' to the 'org.eclipse.ui.workbench.texteditor.hyperlinkDetectors' extension point failed to load the hyperlink detector. Plug-in org.eclipse.mylyn.tasks.ui was unable to load class org.eclipse.mylyn.internal.tasks.ui.editors.TaskHyperlinkDetector. An error occurred while automatically activating bundle org.eclipse.mylyn.tasks.ui (520).

    Read the article

  • Loading JNI lib on OSX?

    - by Clinton
Background

So I am attempting to load a jnilib (specifically JOGL) into Java on OSX at runtime. I have been following along with the relevant Stack Overflow questions:
- Maven and the JOGL Library
- Loading DLL in Java - Eclipse - JNI
- How to make a jar file that include all jar files
The end goal for me is to package platform-specific JOGL files into a jar, unzip them into a temp directory, and load them at start-up. I worked my problem back to simply attempting to load JOGL using hard-coded paths:

    File f = new File("/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/libjogl.jnilib");
    System.load(f.toString());
    f = new File("/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/libjogl_awt.jnilib");
    System.load(f.toString());

I get the following exception when attempting to use the JOGL API:

    Exception in thread "main" java.lang.UnsatisfiedLinkError: no jogl in java.library.path

But when I specify java.library.path by adding the following JVM option:

    -Djava.library.path="/var/folders/+n/+nfb8NHsHiSpEh6AHMCyvE+++TI/-Tmp-/"

everything works fine.

Question

Is it possible to use System.load (or some other variant) on OSX, invoked at runtime, as a replacement for -Djava.library.path?

    Read the article

  • ejb testing issues with netbeans and openejb

    - by SibzTer
I have created a NetBeans 6.7 EnterpriseApplication project with EJB and WAR modules, with a test stateless session EJB with a simple sayHello() method. I also added the OpenEJB library in order to unit test the EJB. Everything runs fine except that I keep getting the following error:

    Testsuite: com.myapp.test.NewEmptyJUnitTest
    Apache OpenEJB 3.1.1 build: 20090530-06:18 http://openejb.apache.org/
    INFO - openejb.home = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb
    INFO - openejb.base = C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb
    INFO - Configuring Service(id=Default Security Service, type=SecurityService, provider-id=Default Security Service)
    INFO - Configuring Service(id=Default Transaction Manager, type=TransactionManager, provider-id=Default Transaction Manager)
    INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar
    INFO - Found ClientModule in classpath: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar
    INFO - Found EjbModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar
    INFO - Found ClientModule in classpath: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar
    INFO - Found ClientModule in classpath: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar
    INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant.jar
    INFO - Beginning load: C:\Program Files\NetBeans 6.7.1\java2\ant\lib\ant-launcher.jar
    INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\TestEnterpriseApp-ejb\build\jar
    INFO - Beginning load: C:\Users\me\Documents\NetBeansProjects\TestEnterpriseApp\lib\OpenEJB\xml-resolver-1.2.jar
    INFO - Beginning load: C:\Users\me\Documents\Downloads\glassfish\lib\webservices-tools.jar
    INFO - Configuring enterprise application: classpath.ear
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant.jar
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: ant-launcher.jar
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: xml-resolver-1.2.jar
    WARN - No application-client.xml found assuming annotations present: classpath.ear, module: webservices-tools.jar
    java.lang.Exception: Could not load 1/0/com/sun/codemodel/CodeWriter.class
        at org.apache.xbean.finder.ClassFinder.readClassDef(ClassFinder.java:730)
    ....

Turns out that I am getting the GlassFish library webservices-tools.jar from somewhere, and I can't find out how to get rid of it so that I don't get the bunch of exceptions whenever I try to run any JUnit tests. Has anyone faced this issue before? Can you help me resolve it, please? Thanks.

    Read the article

  • QWebView not loading external resources

    - by Nick
Hi. I'm working on a kiosk web browser using Qt and PyQt4. QWebView seems to work quite well except for one quirk. If a URL fails to load for any reason, I want to redirect the user to a custom error page. I've done this using the loadFinished() signal to check the result, and change the URL to the custom page if necessary using QWebView.load(). However, any page I attempt to load here fails to pull in external resources like CSS or images. Using QWebView.load() to set the initial page at startup seems to work fine, and clicking any link on the custom error page will result in the destination page loading fine. It's just the error page that doesn't work. I'm really not sure where to go next. I've included the source for an app that will replicate the problem below. It takes a URL as a command line argument - a valid URL will display correctly, a bad URL (eg. DNS resolution fails) will redirect to Google, but with the logo missing.

    import sys
    from PyQt4 import QtGui, QtCore, QtWebKit

    class MyWebView(QtWebKit.QWebView):
        def __init__(self, parent=None):
            QtWebKit.QWebView.__init__(self, parent)
            self.resize(800, 600)
            self.load(QtCore.QUrl(sys.argv[1]))
            self.connect(self, QtCore.SIGNAL('loadFinished(bool)'), self.checkLoadResult)

        def checkLoadResult(self, result):
            if (result == False):
                self.load(QtCore.QUrl('http://google.com'))

    app = QtGui.QApplication(sys.argv)
    main = MyWebView()
    main.show()
    sys.exit(app.exec_())

If anyone could offer some advice it would be greatly appreciated.

    Read the article

  • Ninject 2 + ASP.NET MVC 2 Binding Types from External Assemblies

    - by Malkier
Hi, I'm just trying to get started with Ninject 2 and ASP.NET MVC 2. I have followed this tutorial http://www.craftyfella.com/2010/02/creating-aspnet-mvc-2-controller.html to create a controller factory with Ninject and to bind a first abstract to a concrete implementation. Now I want to load a repository type from another assembly (where my concrete SQL repositories are located) and I just can't get it to work. Here's my code:

Global.asax.cs:

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);
        ControllerBuilder.Current.SetControllerFactory(new MyControllerFactory());
    }

Controller factory:

    public class Kernelhelper
    {
        public static IKernel GetTheKernel()
        {
            IKernel kernel = new StandardKernel();
            kernel.Load(System.Reflection.Assembly.Load("MyAssembly"));
            return kernel;
        }
    }

    public class MyControllerFactory : DefaultControllerFactory
    {
        private IKernel kernel = Kernelhelper.GetTheKernel();

        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            return controllerType == null ? null : (IController)kernel.Get(controllerType);
        }
    }

In "MyAssembly" there is a module:

    public class ExampleConfigModule : NinjectModule
    {
        public override void Load()
        {
            Bind<Domain.CommunityUserRepository>().To<SQLCommunityUserRepository>();
        }
    }

Now when I just slap in a MockRepository object in my entry point it works just fine: the controller, which needs the repository, works fine. The kernel.Load(System.Reflection.Assembly.Load("MyAssembly")); call also does its job and registers the module, but as soon as I call on the controller which needs the repository I get an ActivationException from Ninject:

    No matching bindings are available, and the type is not self-bindable.
    Activation path:
      2) Injection of dependency CommunityUserRepository into parameter _rep of constructor of type AccountController
      1) Request for AccountController

Can anyone give me a best practice example for binding types from external assemblies (which really is an important aspect of Dependency Injection)? Thank you!
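One way to narrow down a "no matching bindings" error like this is to take assembly scanning out of the picture and register the module explicitly. The sketch below is a diagnostic step under the assumption that the web project references MyAssembly directly; it is not necessarily the fix for this particular case.

```csharp
public class Kernelhelper
{
    public static IKernel GetTheKernel()
    {
        // Passing the module instance directly removes any doubt about whether
        // Assembly.Load("MyAssembly") found and scanned the intended assembly.
        return new StandardKernel(new ExampleConfigModule());
    }
}
```

If the binding then resolves, the problem lies in how the assembly is located or scanned; if it still fails, the type requested by AccountController's constructor (CommunityUserRepository) is likely not the same type the module binds, for example because of a namespace mismatch with Domain.CommunityUserRepository.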

    Read the article

  • JNI problem when calling a native library that loads another native library

    - by TheEnemyOfQuality
    I've got a bit of an odd problem. I have a project in C++ that's basically a wrapper for a third party DLL like this: MyLibrary --loads DLL_A ----loads DLL_B I load DLL_A with LoadLibrary(), wrap several of its functions and generate my own DLL. I've tested this in a C++ project and a C# project. Both do everything they're supposed to do: load DLL_A, make a couple of function calls, and indirectly load DLL_B. The problem is when I build a DLL for java and make the calls through JNI. Everything runs like it should (no java.lang.UnsatisfiedLinkError), but when it comes time for DLL_A to load DLL_B it doesn't work. From debugging, the loading of DLL_B happens on a function call in DLL_A that takes a callback. When called from Java, this function call seems to fail (the function pointer is fine and the actual call goes off without a hitch), and I get an odd pop-up window saying DLL_B failed to load, and my program is left waiting for a callback that never happens. I can explicitly load DLL_B just fine (both from Java and from C++) and I've checked every possible path, path variable, and tried placing the dlls everywhere to see if it could be looking somewhere funny. I'm pretty sure it's not a path problem. Ultimately I don't know how DLL_A is loading DLL_B and I can't figure out why everything works fine in C++ and C#, but not in Java. I'm absolutely flummoxed. It could still be something specific to my setup (although I've looked as hard as I can look), but I'm throwing this scenario out there to see if anyone has run into a similar problem. -Dave

    Read the article

  • No acceleration for OpenGL and ImportError for modules that exist

    - by Aku
    I'm writing a program using wxPython and OpenGL. The program works, but without any antialiasing, and I get these error messages: (I'm using ArchLinux) INFO:OpenGL.acceleratesupport:No OpenGL_accelerate module loaded: No module named OpenGL_accelerate INFO:OpenGL.formathandler:Unable to load registered array format handler numpy: Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/OpenGL/arrays/formathandler.py", line 44, in loadPlugin plugin_class = entrypoint.load() File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 14, in load return importByName( self.import_path ) File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 28, in importByName module = __import__( ".".join(moduleName), {}, {}, moduleName) File "/usr/lib/python2.6/site-packages/OpenGL/arrays/numpymodule.py", line 11, in <module> raise ImportError( """No numpy module present: %s"""%(err)) ImportError: No numpy module present: No module named numpy INFO:OpenGL.formathandler:Unable to load registered array format handler numeric: Traceback (most recent call last): File "/usr/lib/python2.6/site-packages/OpenGL/arrays/formathandler.py", line 44, in loadPlugin plugin_class = entrypoint.load() File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 14, in load return importByName( self.import_path ) File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 28, in importByName module = __import__( ".".join(moduleName), {}, {}, moduleName) File "/usr/lib/python2.6/site-packages/OpenGL/arrays/numeric.py", line 15, in <module> raise ImportError( """No Numeric module present: %s"""%(err)) ImportError: No Numeric module present: No module named Numeric However, when I look into my site-packages folder, I see those modules present there. I have a wxPython demo program that uses GLCanvas, and it works fine, without any errors. My program is quite similar to the GLCanvas demo, involving just translations, rotations, drawing quads and some basic lighting. What am I doing wrong here? (The code is over 200 lines, if necessary I'll edit this and put it here.)

    Read the article

  • Backing up Hyper-V VMs using "wbadmin" failing on Windows Server 2012

    - by Ederson
I'm trying to back up a single VM using "wbadmin". I'm using this command line:

    wbadmin start backup -backupTarget:d: -hyperv:VM_Machine_Name -quiet

But the backup does not succeed. Looking at my Events, I get the following information:

    Source: SSP
    Event ID: 16387
    Level: Error
    "Writer Microsoft Hyper-V VSS Writer experienced some error during snapshot creation. More info: ."

    =============================

    Source: Backup
    Event ID: 521
    Level: Error
    "The backup operation that started at '2014-06-11T15:38:44.459000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '0x8100010C'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved."

    =============================

    Source: VSS
    No Event Errors.

I couldn't find any info about the "0x8100010C" error code on the web, and I'm stuck. Does anyone know how to fix this?

    Read the article

< Previous Page | 146 147 148 149 150 151 152 153 154 155 156 157  | Next Page >