Search Results

Search found 19115 results on 765 pages for 'region specific'.

  • Uncaught TypeError: Object [object Object] has no method 'onAdded'

    - by user3604227
    I am using ExtJS 4 with Java servlets, following the MVC architecture for ExtJS. I am trying a simple example of displaying a border layout, but it doesn't work and I get the following error from ext-all.js in the JavaScript console:

        Uncaught TypeError: Object [object Object] has no method 'onAdded'

    Here is my code:

    app.js:

        Ext.Loader.setConfig({ enabled : true });
        Ext.application({
            name : 'IN',
            appFolder : 'app',
            controllers : [ 'Items' ],
            launch : function() {
                console.log('in LAUNCH-appjs');
                Ext.create('Ext.container.Viewport', {
                    items : [ { xtype : 'borderlyt' } ]
                });
            }
        });

    Items.js (controller):

        Ext.define('IN.controller.Items', {
            extend : 'Ext.app.Controller',
            views : [ 'item.Border' ],
            init : function() {
                this.control({
                    'viewport > panel' : { render : this.onPanelRendered }
                });
            },
            onPanelRendered : function() {
                console.log('The panel was rendered');
            }
        });

    Border.js (view):

        Ext.define('IN.view.item.Border', {
            extend : 'Ext.layout.container.Border',
            alias : 'widget.borderlyt',
            title : 'Border layout',
            autoShow : true,
            renderTo : Ext.getBody(),
            defaults : {
                split : true,
                layout : 'border',
                autoScroll : true,
                height : 800,
                width : 500
            },
            items : [
                { region : 'north', html : "Header here..", id : 'mainHeader' },
                { region : 'west', width : 140, html : "Its West.." },
                { region : 'south', html : "This is my temp footer content", height : 30, margins : '0 5 5 5', bodyPadding : 2, id : 'mainFooter' },
                { id : 'mainContent', collapsible : false, region : 'center', margins : '5', border : true }
            ]
        });

    The folder structure for the WebContent is as follows:

        WebContent
            app
                controller
                    Items.js
                model
                store
                view
                    item
                        Border.js
            ext_js
                resources
                src
                ext_all.js
            index.html
            app.js

    Can someone help me resolve this error? Thanks in advance

  • Which cast am I using?

    - by Knowing me knowing you
    I'm trying to cast away const from an object but it doesn't work. But if I use the old C way of casting, the code compiles. So which cast am I supposed to use to achieve the same effect? I would rather not cast the old way.

        //file IntSet.h
        #include "stdafx.h"
        #pragma once

        /* Class representing a set of integers */
        template<class T>
        class IntSet
        {
        private:
            T** myData_;
            std::size_t mySize_;
            std::size_t myIndex_;
        public:
        #pragma region ctor/dtor
            explicit IntSet();
            virtual ~IntSet();
        #pragma endregion
        #pragma region publicInterface
            IntSet makeUnion(const IntSet&) const;
            IntSet makeIntersection(const IntSet&) const;
            IntSet makeSymmetricDifference(const IntSet&) const;
            void insert(const T&);
        #pragma endregion
        };

        //file IntSet_impl.h
        #include "StdAfx.h"
        #include "IntSet.h"

        #pragma region ctor/dtor
        template<class T>
        IntSet<T>::IntSet() : myData_(nullptr), mySize_(0), myIndex_(0)
        {
        }

        IntSet<T>::~IntSet()
        {
        }
        #pragma endregion

        #pragma region publicInterface
        template<class T>
        void IntSet<T>::insert(const T& obj)
        {
            /* Check if we are initialized */
            if (mySize_ == 0)
            {
                mySize_ = 1;
                myData_ = new T*[mySize_];
            }
            /* Check if we have a place to insert obj in. */
            if (myIndex_ < mySize_)
            {   // IS IT SAFE TO INCREMENT myIndex WHILE ASSIGNING?
                myData_[myIndex_++] = &T(obj); // IF I DO IT THE OLD WAY IT WORKS
                return;
            }
            /* We didn't have enough place... */
            T** tmp = new T*[mySize_]; // for copying old to temporary basket
            std::copy(&myData_[0], &myData_[mySize_], &tmp[0]);
        }
        #pragma endregion

    Thanks.

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but hopefully not overkill to implement) routine to do that. A few more details:

    - The spots are a few (e.g. 5x5) pixels in size.
    - The movements are not big. A spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth.
    - The "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses.
    - The spots don't move in a particular direction. They can move right, then left, then right again.
    - The user will select a region around each spot and then this region will be tracked, so I do not need to find the points automatically.

    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that at the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate I'm happy. Thank you, nico
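
    For what it's worth, the correlation idea described above can be sketched roughly as follows (illustrative C#; the grayscale-frame representation and all names are assumptions, and bounds checks are omitted for brevity):

        using System;

        static class SpotTracker
        {
            // Hypothetical sketch: track one spot by normalized cross-correlation.
            // Frames are assumed to be grayscale images stored as double[rows, cols].
            // Caller must keep top/left/size/maxShift within the image bounds.
            static (int dy, int dx) TrackSpot(double[,] prev, double[,] next,
                                              int top, int left, int size, int maxShift)
            {
                double best = double.NegativeInfinity;
                (int dy, int dx) bestShift = (0, 0);

                for (int dy = -maxShift; dy <= maxShift; dy++)
                for (int dx = -maxShift; dx <= maxShift; dx++)
                {
                    // Normalized cross-correlation between the template (region in
                    // the previous frame) and the shifted region in the next frame.
                    double sumA = 0, sumB = 0, sumAB = 0, sumA2 = 0, sumB2 = 0;
                    int n = size * size;
                    for (int y = 0; y < size; y++)
                    for (int x = 0; x < size; x++)
                    {
                        double a = prev[top + y, left + x];
                        double b = next[top + dy + y, left + dx + x];
                        sumA += a; sumB += b; sumAB += a * b;
                        sumA2 += a * a; sumB2 += b * b;
                    }
                    double cov = sumAB - sumA * sumB / n;
                    double varA = sumA2 - sumA * sumA / n;
                    double varB = sumB2 - sumB * sumB / n;
                    double ncc = cov / Math.Sqrt(varA * varB + 1e-12);

                    if (ncc > best) { best = ncc; bestShift = (dy, dx); }
                }
                return bestShift; // best (dy, dx) displacement of the spot
            }
        }

    Because normalized cross-correlation divides out local brightness, the gradual dimming of the spots mentioned above should not, by itself, break the match.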

  • Is it possible to store pointers in shared memory without using offsets?

    - by Joseph Garvin
    When using shared memory, each process may mmap the shared region into a different area of its address space. This means that when storing pointers within the shared region, you need to store them as offsets from the start of the shared region. Unfortunately, this complicates the use of atomic instructions (e.g. if you're trying to write a lock-free algorithm). For example, say you have a bunch of reference-counted nodes in shared memory, created by a single writer. The writer periodically and atomically updates a pointer 'p' to point to a valid node with a positive reference count. Readers want to atomically increment the reference count of the node 'p' points to, which works because 'p' points to the beginning of a node (a struct) whose first element is the reference count. Since 'p' always points to a valid node, incrementing the ref count is safe, and makes it safe to dereference 'p' and access other members. However, this all only works when everything is in the same address space. If the nodes and the 'p' pointer are stored in shared memory, then clients suffer a race condition:

    1. x = read p
    2. y = x + offset
    3. increment refcount at y

    During step 2, p may change and x may no longer point to a valid node. The only workaround I can think of is somehow forcing all processes to agree on where to map the shared memory, so that real pointers rather than offsets can be stored in the mmap'd region. Is there any way to do that? I see MAP_FIXED in the mmap documentation, but I don't know how I could pick an address that would be safe.
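
    To make the offset convention concrete, here is a small illustrative C# sketch (the region name and layout are assumptions, and it deliberately reproduces the race described above rather than solving it):

        using System;
        using System.IO.MemoryMappedFiles;

        class OffsetPointerDemo
        {
            // Assumed layout: bytes [0..7] of the region hold 'p' as a 64-bit
            // offset from the start of the region; each node begins with a
            // 32-bit reference count.
            static void Main()
            {
                using (var mmf = MemoryMappedFile.CreateOrOpen("demo-region", 4096))
                using (var view = mmf.CreateViewAccessor())
                {
                    long x = view.ReadInt64(0); // step 1: x = read p (an offset)
                    long y = x;                 // step 2: y = region-relative node address

                    // Step 3: increment the refcount at y. This read-modify-write
                    // is NOT atomic, and between steps 1 and 3 the writer may have
                    // republished 'p' -- exactly the race described above.
                    int refCount = view.ReadInt32(y);
                    view.Write(y, refCount + 1);
                }
            }
        }

    On the C side the same idea is usually written as base + offset pointer arithmetic; the sketch only illustrates why the offset indirection opens a window between reading 'p' and touching the node it referred to.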

  • Designing a Silverlight dashboard with MEF - is it possible? (with dynamic loading of XAPs)

    - by Tim Robbin
    Hello! I am just trying to wrap my head around MEF. And as I am really going to love it (I guess), I started my first sample project and immediately stumbled into a big problem, and now I am asking myself if I can use MEF for my scenario at all. The scenario is the following: imagine a dashboard with, let's say, five regions, and above each region there are two comboboxes. The values in the first combobox represent different possible views (for example, chartControl, tableControl, pictureControl, ...) and the values of the second combobox represent the different data sources for the currently selected control. As the controls are very big in size, one wants to download them as needed. If the user selects one comboboxitem, the corresponding control XAP should be loaded and displayed in this specific region. If the user selects another control in the same combobox, the current control should be removed from the visual tree and the next control should be downloaded and displayed. If the user changes the selection in a different combobox, the corresponding control should be loaded again, only in this specific region, perhaps with different data. And to make it a little more interesting: as this is some kind of dashboard, one can change the layout from five regions to, for example, ten regions. I've seen the video "MVVM with MEF in Silverlight Video Tutorial Part 2: Plugins and Metadata" ( http://csharperimage.jeremylikness.com/2010/03/mvvm-with-mef-in-silverlight-video_09.html ), but he is using an ItemsControl and working with Visibility, and he only has ONE region. So I think that this technique won't work for me... Phew, I hope I've made myself clear! Thanks a lot for any piece of information! Greetings, Tim.

  • Internet Explorer randomly drops sessions between pages in cakePHP

    - by Emerson Taymor
    Hello everyone, I've come across an extremely unusual bug that my team has literally no idea how to solve. Doing some research, I found some similar solutions that I thought would work, but alas they did not. Here is my situation; let me know if I can provide additional insight to help solve the problem. The first step is that someone chooses a country via a Flash map. Flash passes this region name (as well as a date) through the URL, which we then convert to a session. The next page contains no Flash and doesn't display the selected region, but it does hold on to it for further down the process. Everything works perfectly in Safari and Firefox; however, in IE unexpected results sometimes occur. Frequently (but not always), the session is dropped completely and no sessions are stored between the first and second pages. Here are the steps that I have taken thus far, unsuccessfully:

    1. Changed Security from Medium to Low
    2. Changed CheckUserAgent from True to False
    3. Changed storing of sessions from PHP to Database

    Some additional information that may be useful: I have tried printing out the session data in debug (debug($_SESSION) in my view file, with debug set to 2 in config). In Internet Explorer everything prints out as expected EXCEPT when the region and date don't get set. For example: if the region and date don't get set, NOTHING is printed out for debug. I don't get the session details at the top, and I don't get the normal dump of calls at the bottom of the page either. I am not using redirection on these pages. Please let me know if you have ANY idea of what is causing this or any solutions. I am beyond frustrated and have tried as much as I can to solve this. Thanks!

  • RichFaces a4j:support parameter passing

    - by Mark Lewis
    Hello, I have a number of rich:inplaceInput tags in RichFaces which represent numbers in an array. The validator allows integers only. When a user clicks in an input and changes a value, how can I get the bean to sort the array given the new number and reRender the list of rich:inplaceInput tags so that they're in numerical order? E.g.:

        <a4j:region>
            <rich:dataTable value="#{MyBacking.config}" var="feed" cellpadding="0"
                            cellspacing="0" width="100%" border="0" columns="5" id="Admin">
                ...
                <a4j:repeat...
                    <a4j:region id="MsgCon">
                        <rich:inplaceInput value="#{h.id}" validator="#{MyBacking.validateID}"
                                           id="andID" showControls="true">
                            <a4j:support event="onviewactivated" action="#{MyBacking.sort}"
                                         reRender="Admin" />
                        </rich:inplaceInput>
                    </a4j:region>
                </a4j:repeat>
            </rich:dataTable>
        </a4j:region>

    Note I do NOT want to use dataTable sort functions. The table is complicated, and I've specified id="Admin" (i.e. the whole table) to reRender, as I've not found a way to send more localised values to the backing bean through the inplaceInput. This question is about how to use the a4j:support action attribute to call the sort method so that when the reRender rerenders the component, it outputs the list in sorted order. I have the sort method working OK when I click a button to sort, but I want the list sorted automatically as soon as a new valid value is entered into the inplaceInput component. Thanks

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case):

    - starttime: the start time at which you can contact the provider.
    - endtime: the final hour at which you can contact the provider.
    - region_id: the region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc.

    starttime and endtime are time without time zone columns but, indirectly, their values are in the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range contains the current server time (taking into account the time zone conversion). The problem is that the time zones aren't "constant", i.e., they may change during daylight saving time. However, this change is very specific to the region and is not always carried out at the same time: EGT <=> EGST, ART <=> ARST, etc. The questions are:

    1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that could serve?
    2. Is there a better approach to solve this problem?

    Thanks in advance.

    UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I found these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time = 02:00:00

    I should get the following results: idproviders = 1.

    If I execute the query in June, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone offset has changed)
        Server time = 02:00:00

    I should get the following results: idproviders = 1 and 2.
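
    For what it's worth, the operating system's time zone database already tracks DST transitions per region and is refreshed by normal OS updates, so polling a web service is usually unnecessary. A hedged C# sketch of the containment check (the zone id, times, and method names are illustrative, not from the original schema):

        using System;

        class ProviderWindowCheck
        {
            // Is the provider contactable right now? starttime/endtime are the
            // provider's local wall-clock times; the region is mapped to a zone id.
            static bool IsContactable(TimeSpan start, TimeSpan end, string zoneId)
            {
                TimeZoneInfo zone = TimeZoneInfo.FindSystemTimeZoneById(zoneId);
                // Convert "now" into the provider's local time; DST is handled by
                // the zone's adjustment rules, so offsets need no manual upkeep.
                DateTime localNow = TimeZoneInfo.ConvertTimeFromUtc(DateTime.UtcNow, zone);
                TimeSpan t = localNow.TimeOfDay;
                return t >= start && t <= end;
            }

            static void Main()
            {
                // "Central Standard Time" is the Windows zone id covering Texas.
                bool open = IsContactable(new TimeSpan(3, 0, 0),
                                          new TimeSpan(17, 0, 0),
                                          "Central Standard Time");
                Console.WriteLine(open);
            }
        }

    The same approach works in SQL by storing a time zone identifier per region and converting the server's UTC time into the region's local time at query time.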

  • Is it a bug?

    - by Knowing me knowing you
    I'm using VS2010 Ultimate and have the following code:

        //file IntSet.h
        #include "stdafx.h"
        #pragma once

        /* Class representing a set of integers */
        template<class T>
        class IntSet
        {
        private:
            T** myData_;
            std::size_t mySize_;
            std::size_t myIndex_;
        public:
        #pragma region ctor/dtor
            explicit IntSet();
            virtual ~IntSet();
        #pragma endregion
        #pragma region publicInterface
            IntSet makeUnion(const IntSet&) const;
            IntSet makeIntersection(const IntSet&) const;
            IntSet makeSymmetricDifference(const IntSet&) const;
            void insert(const T&);
        #pragma endregion
        };

        //file IntSet_impl.h
        #include "StdAfx.h"
        #include "IntSet.h"

        #pragma region ctor/dtor
        template<class T>
        IntSet<T>::IntSet() : myData_(nullptr), mySize_(0), myIndex_(0)
        {
        }

        template<class T>
        IntSet<T>::~IntSet()
        {
        }
        #pragma endregion

        #pragma region publicInterface
        template<class T>
        void IntSet<T>::insert(const T& obj)
        {   /* IF I SET A BREAKPOINT HERE AND AFTER THAT I CHANGE SOMETHING IN
               THE BODY, I'M GETTING A MSG SAYING THAT THE BREAKPOINT WILL NOT
               CURRENTLY BE HIT; AFTER I REBUILD, THE BREAKPOINT IS VALID AGAIN */
            /* Check if we are initialized */
            if (mySize_ == 0)
            {
                mySize_ = 1;
                myData_ = new T*[mySize_];
            }
            /* Check if we have a place to insert obj in. */
            if (myIndex_ < mySize_)
            {
                myData_[myIndex_++] = new T(obj);
                return;
            }
            /* We didn't have enough place... */
            T** tmp = new T*[mySize_]; // for copying old to temporary basket
            std::copy(&myData_[0], &myData_[mySize_], &tmp[0]);
            delete myData_;
            auto oldSize = mySize_;
            mySize_ *= 2;
            myData_ = new T*[mySize_];
            std::copy(&tmp[0], &tmp[oldSize], &myData_[0]);
            myData_[myIndex_] = new T(obj);
            ++myIndex_;
        }
        #pragma endregion

    Thanks.

  • Need help with a custom Spinner/ArrayAdapter setup

    - by MisterSquonk
    I have a WeatherSpinner class which extends Spinner. The class shows region names, which I originally did using an ArrayAdapter<String>, but I now want to use an ArrayAdapter<Locale> (Locale is an abstract 'empty' class of my own). I'm getting a ClassCastException when trying to populate my ArrayAdapter with the following:

        protected ArrayList<?> theList;
        protected ArrayAdapter<Locale> aa = null;
        ...
        protected void updateContents(ArrayList<?> list, int selectedItem) {
            theList = list;
            // Exception thrown on next line
            aa = new ArrayAdapter<Locale>(theContext,
                    android.R.layout.simple_spinner_item,
                    (Locale[]) theList.toArray());
            ...
        }

    I'm passing a RegionList object into updateContents() as the 'list' parameter; RegionList extends ArrayList<Region>, and Region extends Locale. I've also overridden Region's toString() method to return a valid String. What am I not seeing here? Am I wrong about the way ArrayList<?>.toArray() works?

  • iPhone SDK: How to center map around a particular point?

    - by buzzappsoftware
    New to MapKit. Having problems centering the map around a specified point. Here is the code. Not sure why this is not working. We are expecting to see a map centered around Cincinnati, OH. What we are seeing is the default Google map of the world. Any help appreciated.

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            [super viewDidLoad];

            CLLocationCoordinate2D mapCoords[2];
            mapCoords[0].latitude = 39.144057;
            mapCoords[0].latitude = -84.505484;
            mapCoords[1].latitude = 39.142984;
            mapCoords[1].latitude = -84.502534;

            MKCoordinateSpan span;
            span.latitudeDelta = 0.2;
            span.longitudeDelta = 0.2;

            MKCoordinateRegion region;
            region.center = mapCoords[0];
            region.span = span;

            [mapView setRegion:region animated:YES];
        }

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

    - Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
    - ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
    - SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

    - US Census Bureau Website, http://www.census.gov/geo/www/tiger/
    - Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
    - Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations to embed a map in a report.

    Preparing the spatial data

    First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region).

    Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

    1. On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
    2. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
    3. On the Choose a connection to a SQL Server spatial data source page, select New.
    4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial
    5. Click OK and then click Next.
    6. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1
    7. Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
    8. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
    9. Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next.
    10. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish.
    11. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers. Each layer is associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps:

    1. If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
    2. Click the New Layer Wizard button in the Map Layers window, and we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next.
    3. Select Add a new dataset with SQL Server spatial data, and click Next.
    4. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
    5. Add a query with spatial data (like the one I included in the downloadable project), and click Next.
    6. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
    7. Select Embed map data in this report, and click Next.
    8. On the Choose map visualization page, select Basic Marker Map, and click Next.
    9. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
    10. Click Finish and then click Preview to see the rendered report.

    Et voilà... c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.

  • Wrapping ASP.NET Client Callbacks

    - by Ricardo Peres
    Client Callbacks are probably the least known (and, I dare say, least loved) of all the AJAX options in ASP.NET, which also include the UpdatePanel, Page Methods and Web Services. The reason for that, I believe, is their relative complexity:

    1. Get a reference to a JavaScript function;
    2. Dynamically register a function that calls the above reference;
    3. Have a JavaScript handler call the registered function.

    However, they have the nice advantage of being self-contained; that is, they don't need additional files, such as web services, JavaScript libraries, etc., or static methods declared on a page, or any kind of attributes. So, here's what I want to do:

    - Have a DOM element which exposes a method that is executed server-side, passing it a string and returning a string;
    - Have a server-side event that handles the client-side call;
    - Have two client-side user-supplied callback functions for handling the success and error results.

    I'm going to develop a custom control without a user interface that does the registration of the client JavaScript method as well as a server-side event that can be hooked by some handler on a page. My markup will look like this:

        <script type="text/javascript">

        function onCallbackSuccess(result, context)
        {
        }

        function onCallbackError(error, context)
        {
        }

        </script>
        <my:CallbackControl runat="server" ID="callback" SendAllData="true" OnCallback="OnCallback"/>

    The control itself looks like this:

        public class CallbackControl : Control, ICallbackEventHandler
        {
            #region Public constructor
            public CallbackControl()
            {
                this.SendAllData = false;
                this.Async = true;
            }
            #endregion

            #region Public properties and events
            public event EventHandler<CallbackEventArgs> Callback;

            [DefaultValue(true)]
            public Boolean Async
            {
                get;
                set;
            }

            [DefaultValue(false)]
            public Boolean SendAllData
            {
                get;
                set;
            }
            #endregion

            #region Protected override methods
            protected override void Render(HtmlTextWriter writer)
            {
                writer.AddAttribute(HtmlTextWriterAttribute.Id, this.ClientID);
                writer.RenderBeginTag(HtmlTextWriterTag.Span);

                base.Render(writer);

                writer.RenderEndTag();
            }

            protected override void OnInit(EventArgs e)
            {
                String reference = this.Page.ClientScript.GetCallbackEventReference(this, "arg", "onCallbackSuccess", "context", "onCallbackError", this.Async);
                String script = String.Concat("\ndocument.getElementById('", this.ClientID, "').callback = function(arg, context, onCallbackSuccess, onCallbackError){", ((this.SendAllData == true) ? "__theFormPostCollection.length = 0; __theFormPostData = ''; WebForm_InitCallback(); " : String.Empty), reference, ";};\n");

                this.Page.ClientScript.RegisterStartupScript(this.GetType(), String.Concat("callback", this.ClientID), script, true);

                base.OnInit(e);
            }
            #endregion

            #region Protected virtual methods
            protected virtual void OnCallback(CallbackEventArgs args)
            {
                EventHandler<CallbackEventArgs> handler = this.Callback;

                if (handler != null)
                {
                    handler(this, args);
                }
            }
            #endregion

            #region ICallbackEventHandler Members
            String ICallbackEventHandler.GetCallbackResult()
            {
                CallbackEventArgs args = new CallbackEventArgs(this.Context.Items["Data"] as String);

                this.OnCallback(args);

                return (args.Result);
            }

            void ICallbackEventHandler.RaiseCallbackEvent(String eventArgument)
            {
                this.Context.Items["Data"] = eventArgument;
            }
            #endregion
        }

    And the event argument class:

        [Serializable]
        public class CallbackEventArgs : EventArgs
        {
            public CallbackEventArgs(String argument)
            {
                this.Argument = argument;
                this.Result = String.Empty;
            }

            public String Argument
            {
                get;
                private set;
            }

            public String Result
            {
                get;
                set;
            }
        }

    You will notice two properties on the CallbackControl:

    - Async: indicates if the call should be made asynchronously or synchronously (the default);
    - SendAllData: indicates if the callback call will include the view and control state of all of the controls on the page, so that, on the server side, they will have their properties set when the Callback event is fired.

    The CallbackEventArgs class exposes two properties:

    - Argument: the read-only argument passed to the client-side function;
    - Result: the result to return to the client-side callback function, set from the Callback event handler.

    An example of a handler for the Callback event would be:

        protected void OnCallback(Object sender, CallbackEventArgs e)
        {
            e.Result = String.Join(String.Empty, e.Argument.Reverse());
        }

    Finally, in order to fire the Callback event from the client, you only need this:

        <input type="text" id="input"/>
        <input type="button" value="Get Result" onclick="document.getElementById('callback').callback(document.getElementById('input').value, 'context', onCallbackSuccess, onCallbackError)"/>

    The syntax of the callback function is:

    - arg: some string argument;
    - context: some context that will be passed to the callback functions (success or failure);
    - callbackSuccessFunction: some function that will be called when the callback succeeds;
    - callbackFailureFunction: some function that will be called if the callback fails for some reason.

    Give it a try and see if it helps!

  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we've released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

    - Localization Support for 6 languages
    - Operation Log Support
    - Support for SQL Database Metrics
    - Virtual Machine Enhancements (quick create Windows + Linux VMs)
    - Web Site Enhancements (support for creating sites in all regions, private GitHub repo deployment)
    - Cloud Service Improvements (deploy from storage account, configuration support of dedicated cache)
    - Media Service Enhancements (upload, encode, publish, stream all from within the portal)
    - Virtual Networking Usability Enhancements
    - Custom CNAME support with Storage Accounts

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Localization Support

    The Windows Azure Portal now supports 6 languages - English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the Avatar bar on the top right corner of the Portal. Selecting a different language will automatically refresh the UI within the portal in the selected language.

    Operation Log Support

    The Windows Azure Portal now supports the ability for administrators to review the "operation logs" of the services they manage - making it easy to see exactly what management operations were performed on them. You can query for these by selecting the "Settings" tab within the Portal and then choosing the "Operation Logs" tab within it. This displays a filter UI that enables you to query for operations by date and time. As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts. You can click on any operation in the list and click the "Details" button in the command bar to retrieve detailed status about it. This now makes it possible to retrieve details about every management operation performed. In future updates you'll see us extend the operation log capability to apply to all Windows Azure Services - which will enable great post-mortem and audit support.

    Support for SQL Database Metrics

    You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new "Dashboard" view provided on each SQL Database resource. Additionally, if the database is added as a "linked resource" to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

    Enhancements to Virtual Machines

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines, including an integrated Quick Create experience for Windows and Linux VMs. Creating a new Windows or Linux VM is now easy using the new "Quick Create" experience in the Portal. In addition to Windows VM templates you can also now select Linux image templates in the quick create UI. This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

    Enhancements to Web Sites

    Prior to this past month's release, users were forced to choose a single geographical region when creating their first site. After that, subsequent sites could only be created in that same region. This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region. One of the new regions we've recently opened up is the "East Asia" region. This allows you to now deploy sites to North America, Europe and Asia simultaneously.

    Private GitHub Repository Support

    This past week we also enabled Git-based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previous to this you could only enable this with public repositories).

    Enhancements to Cloud Services Experience

    The most recent Windows Azure Portal release brings with it some nice usability improvements to Cloud Services.

    Deploy a Cloud Service from a Windows Azure Storage Account: The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog, and choose to upload from a Windows Azure Storage Account. To upload an application package from storage, click the "FROM STORAGE" button and select the application package and configuration file to use from the new blob storage explorer in the portal.

    Configure Windows Azure Caching in a caching-enabled cloud service: If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab of your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role. The portal now allows you to add or remove named caches and change the settings for the named caches - all from within the Portal and without needing to redeploy your application.

    Enhancements to Media Services

    You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal. This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the "Content" tab. All of the media content within your media service account will be listed here. Clicking the "upload" button within the portal now allows you to upload a media file directly from your computer. This will cause the video file you chose from your local file-system to be uploaded into Windows Azure. Once uploaded, you can select the file within the content tab of the Portal and click the "Encode" button to transcode it into different streaming formats. The portal includes a number of pre-set encoding formats that you can easily convert media content into. Once you select an encoding and click the OK button, Windows Azure Media Services will kick off an encoding job that will happen in the cloud (no need for you to stand up or configure a custom encoding server). When it's finished, you can select the video in the "Content" tab and then click PUBLISH in the command bar to set up an origin streaming endpoint to it. Once the media file is published you can point apps against the public URL and play the content using Windows Azure Media Services - no need to set up or run your own streaming server. You can also now select the file and click the "Play" button in the command bar to play it using the streaming endpoint directly within the Portal. This makes it incredibly easy to try out and use Windows Azure Media Services and test out an end-to-end workflow without having to write any code. Once you test things out you can of course automate it using script or code - providing you with an incredibly powerful Cloud Media platform that you can use.

    Enhancements to Virtual Network Experience

    Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is to provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard to create a VNET / subnet. This makes creating virtual networks really simple. The portal also now has a "Register DNS Server" task that makes it easy to register DNS servers and associate them with a virtual network.

    Enhancements to Storage Experience

    The portal now lets you register custom domain names for your Windows Azure Storage Accounts. To enable this, select a storage resource, go to the CONFIGURE tab for the storage account, and then click MANAGE DOMAIN on the command bar. Clicking "Manage Domain" will bring up a dialog that allows you to register any CNAME you want.

    Summary

    The above features are all now live in production and available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store - which makes it incredibly easy to try and purchase developer services from a variety of partners. It is an incredibly awesome new capability - and something I'll be doing a dedicated post about shortly.

    Hope this helps,
    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational

    These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes:

        100 - Continue.
        101 - Switching protocols.

    2xx - Success

    These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes:

        200 - OK. The client request has succeeded.
        201 - Created.
        202 - Accepted.
        203 - Nonauthoritative information.
        204 - No content.
        205 - Reset content.
        206 - Partial content.

    3xx - Redirection

    These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server. Or, the client browser may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes:

        301 - Moved permanently.
        302 - Object moved.
        304 - Not modified.
        307 - Temporary redirect.

    4xx - Client error

    These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist. Or, the client browser may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes:

        400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error:
            400.1 - Invalid Destination Header.
            400.2 - Invalid Depth Header.
            400.3 - Invalid If Header.
            400.4 - Invalid Overwrite Header.
            400.5 - Invalid Translate Header.
            400.6 - Invalid Request Body.
            400.7 - Invalid Content Length.
            400.8 - Invalid Timeout.
            400.9 - Invalid Lock Token.
        401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log:
            401.1 - Logon failed.
            401.2 - Logon failed due to server configuration.
            401.3 - Unauthorized due to ACL on resource.
            401.4 - Authorization failed by filter.
            401.5 - Authorization failed by ISAPI/CGI application.
        403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error:
            403.1 - Execute access forbidden.
            403.2 - Read access forbidden.
            403.3 - Write access forbidden.
            403.4 - SSL required.
            403.5 - SSL 128 required.
            403.6 - IP address rejected.
            403.7 - Client certificate required.
            403.8 - Site access denied.
            403.9 - Forbidden: Too many clients are trying to connect to the Web server.
            403.10 - Forbidden: Web server is configured to deny Execute access.
            403.11 - Forbidden: Password has been changed.
            403.12 - Mapper denied access.
            403.13 - Client certificate revoked.
            403.14 - Directory listing denied.
            403.15 - Forbidden: Client access licenses have exceeded limits on the Web server.
            403.16 - Client certificate is untrusted or invalid.
            403.17 - Client certificate has expired or is not yet valid.
            403.18 - Cannot execute requested URL in the current application pool.
            403.19 - Cannot execute CGI applications for the client in this application pool.
            403.20 - Forbidden: Passport logon failed.
            403.21 - Forbidden: Source access denied.
            403.22 - Forbidden: Infinite depth is denied.
        404 - Not found. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error:
            404.0 - Not found.
            404.1 - Site Not Found.
            404.2 - ISAPI or CGI restriction.
            404.3 - MIME type restriction.
            404.4 - No handler configured.
            404.5 - Denied by request filtering configuration.
            404.6 - Verb denied.
            404.7 - File extension denied.
            404.8 - Hidden namespace.
            404.9 - File attribute hidden.
            404.10 - Request header too long.
            404.11 - Request contains double escape sequence.
            404.12 - Request contains high-bit characters.
            404.13 - Content length too large.
            404.14 - Request URL too long.
            404.15 - Query string too long.
            404.16 - DAV request sent to the static file handler.
            404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping.
            404.18 - Querystring sequence denied.
            404.19 - Denied by filtering rule.
        405 - Method Not Allowed.
        406 - Client browser does not accept the MIME type of the requested page.
        408 - Request timed out.
        412 - Precondition failed.

    5xx - Server error

    These HTTP status codes indicate that the server cannot complete the request because the server encounters an error. IIS 7.0 uses the following server error HTTP status codes:

        500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error:
            500.0 - Module or ISAPI error occurred.
            500.11 - Application is shutting down on the Web server.
            500.12 - Application is busy restarting on the Web server.
            500.13 - Web server is too busy.
            500.15 - Direct requests for Global.asax are not allowed.
            500.19 - Configuration data is invalid.
            500.21 - Module not recognized.
            500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode.
            500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode.
            500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode.
            500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. (Note: this is where the distributed rules configuration is read for both inbound and outbound rules.)
            500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. (Note: this is where the global rules configuration is read.)
            500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution occurred.
            500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated.
            500.100 - Internal ASP error.
        501 - Header values specify a configuration that is not implemented.
        502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error:
            502.1 - CGI application timeout.
            502.2 - Bad gateway.
        503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error:
            503.0 - Application pool unavailable.
            503.2 - Concurrent request limit exceeded.
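
    Note that the substatus codes above (the .x suffixes) normally appear only in the IIS logs; a remote client just sees the three-digit code. A small illustrative C# sketch for classifying what a client receives (the URL is a placeholder):

        using System;
        using System.Net;

        class StatusClassCheck
        {
            // Map a numeric HTTP status to the class names used above.
            static string ClassOf(int status)
            {
                if (status < 200) return "1xx - Informational";
                if (status < 300) return "2xx - Success";
                if (status < 400) return "3xx - Redirection";
                if (status < 500) return "4xx - Client error";
                return "5xx - Server error";
            }

            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://localhost/some-page");
                try
                {
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0}: {1}", (int)response.StatusCode,
                                          ClassOf((int)response.StatusCode));
                    }
                }
                catch (WebException ex) when (ex.Response is HttpWebResponse error)
                {
                    // 4xx/5xx responses raise WebException; the status code is
                    // still available on the attached response object.
                    Console.WriteLine("{0}: {1}", (int)error.StatusCode,
                                      ClassOf((int)error.StatusCode));
                }
            }
        }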

  • What's the best way to expose a Model object in a ViewModel?

    - by Angel
    In a WPF MVVM application, I exposed my model object into my viewModel by creating an instance of Model class (which cause dependency) into ViewModel. Instead of creating separate VM properties, I wrap the Model properties inside my ViewModel Property. My model is just an entity framework generated proxy class: public partial class TblProduct { public TblProduct() { this.TblPurchaseDetails = new HashSet<TblPurchaseDetail>(); this.TblPurchaseOrderDetails = new HashSet<TblPurchaseOrderDetail>(); this.TblSalesInvoiceDetails = new HashSet<TblSalesInvoiceDetail>(); this.TblSalesOrderDetails = new HashSet<TblSalesOrderDetail>(); } public int ProductId { get; set; } public string ProductCode { get; set; } public string ProductName { get; set; } public int CategoryId { get; set; } public string Color { get; set; } public Nullable<decimal> PurchaseRate { get; set; } public Nullable<decimal> SalesRate { get; set; } public string ImagePath { get; set; } public bool IsActive { get; set; } public virtual TblCompany TblCompany { get; set; } public virtual TblProductCategory TblProductCategory { get; set; } public virtual TblUser TblUser { get; set; } public virtual ICollection<TblPurchaseDetail> TblPurchaseDetails { get; set; } public virtual ICollection<TblPurchaseOrderDetail> TblPurchaseOrderDetails { get; set; } public virtual ICollection<TblSalesInvoiceDetail> TblSalesInvoiceDetails { get; set; } public virtual ICollection<TblSalesOrderDetail> TblSalesOrderDetails { get; set; } } Here is my ViewModel: public class ProductViewModel : WorkspaceViewModel { #region Constructor public ProductViewModel() { StartApp(); } #endregion //Constructor #region Properties private IProductDataService _dataService; public IProductDataService DataService { get { if (_dataService == null) { if (IsInDesignMode) { _dataService = new ProductDataServiceMock(); } else { _dataService = new ProductDataService(); } } return _dataService; } } //Get and set Model object private TblProduct _product; public TblProduct Product { get { return _product ?? 
(_product = new TblProduct()); } set { _product = value; } } #region Public Properties public int ProductId { get { return Product.ProductId; } set { if (Product.ProductId == value) { return; } Product.ProductId = value; RaisePropertyChanged("ProductId"); } } public string ProductName { get { return Product.ProductName; } set { if (Product.ProductName == value) { return; } Product.ProductName = value; RaisePropertyChanged(() => ProductName); } } private ObservableCollection<TblProduct> _productRecords; public ObservableCollection<TblProduct> ProductRecords { get { return _productRecords; } set { _productRecords = value; RaisePropertyChanged("ProductRecords"); } } //Selected Product private TblProduct _selectedProduct; public TblProduct SelectedProduct { get { return _selectedProduct; } set { _selectedProduct = value; if (_selectedProduct != null) { this.ProductId = _selectedProduct.ProductId; this.ProductCode = _selectedProduct.ProductCode; } RaisePropertyChanged("SelectedProduct"); } } #endregion //Public Properties #endregion // Properties #region Commands private ICommand _newCommand; public ICommand NewCommand { get { if (_newCommand == null) { _newCommand = new RelayCommand(() => ResetAll()); } return _newCommand; } } private ICommand _saveCommand; public ICommand SaveCommand { get { if (_saveCommand == null) { _saveCommand = new RelayCommand(() => Save()); } return _saveCommand; } } private ICommand _deleteCommand; public ICommand DeleteCommand { get { if (_deleteCommand == null) { _deleteCommand = new RelayCommand(() => Delete()); } return _deleteCommand; } } #endregion //Commands #region Methods private void StartApp() { LoadProductCollection(); } private void LoadProductCollection() { var q = DataService.GetAllProducts(); this.ProductRecords = new ObservableCollection<TblProduct>(q); } private void Save() { if (SelectedOperateMode == OperateModeEnum.OperateMode.New) { //Pass the Model object into Dataservice for save DataService.SaveProduct(this.Product); } else if (SelectedOperateMode == OperateModeEnum.OperateMode.Edit) { //Pass the Model object into Dataservice for Update DataService.UpdateProduct(this.Product); } ResetAll(); LoadProductCollection(); } #endregion //Methods } Here is my Service class: class ProductDataService:IProductDataService { /// <summary> /// Context object of Entity Framework model /// </summary> private MaizeEntities Context { get; set; } public ProductDataService() { Context = new MaizeEntities(); } public IEnumerable<TblProduct> GetAllProducts() { using(var context=new R_MaizeEntities()) { var q = from p in context.TblProducts where p.IsDel == false select p; return new ObservableCollection<TblProduct>(q); } } public void SaveProduct(TblProduct _product) { using(var context=new R_MaizeEntities()) { _product.LastModUserId = GlobalObjects.LoggedUserID; _product.LastModDttm = DateTime.Now; _product.CompanyId = GlobalObjects.CompanyID; context.TblProducts.Add(_product); context.SaveChanges(); } } public void UpdateProduct(TblProduct _product) { using (var context = new R_MaizeEntities()) { context.TblProducts.Attach(_product); context.Entry(_product).State = EntityState.Modified; _product.LastModUserId = GlobalObjects.LoggedUserID; _product.LastModDttm = DateTime.Now; _product.CompanyId = GlobalObjects.CompanyID; context.SaveChanges(); } } public void DeleteProduct(int _productId) { using (var context = new R_MaizeEntities()) { var product = (from c in context.TblProducts where c.ProductId == _productId select c).First(); product.LastModUserId = 
GlobalObjects.LoggedUserID; product.LastModDttm = DateTime.Now; product.IsDel = true; context.SaveChanges(); } } } I exposed my model object in my viewModel by creating an instance of it using new keyword, also I instantiated my DataService class in VM. I know this will cause a strong dependency. So: What's the best way to expose a Model object in a ViewModel? What's the best way to use DataService in VM?

    Read the article

  • MVVM- Expose Model object in ViewModel

    - by Angel
    I have a WPF MVVM application. I exposed my model object in my ViewModel by creating an instance of the Model class (which causes a dependency) inside the ViewModel, and instead of creating separate VM properties, I wrap the Model properties inside my ViewModel properties. My model is just Entity Framework-generated proxy classes. Here is my Model class:

    public partial class TblProduct
    {
        public TblProduct()
        {
            this.TblPurchaseDetails = new HashSet<TblPurchaseDetail>();
            this.TblPurchaseOrderDetails = new HashSet<TblPurchaseOrderDetail>();
            this.TblSalesInvoiceDetails = new HashSet<TblSalesInvoiceDetail>();
            this.TblSalesOrderDetails = new HashSet<TblSalesOrderDetail>();
        }

        public int ProductId { get; set; }
        public string ProductCode { get; set; }
        public string ProductName { get; set; }
        public int CategoryId { get; set; }
        public string Color { get; set; }
        public Nullable<decimal> PurchaseRate { get; set; }
        public Nullable<decimal> SalesRate { get; set; }
        public string ImagePath { get; set; }
        public bool IsActive { get; set; }

        public virtual TblCompany TblCompany { get; set; }
        public virtual TblProductCategory TblProductCategory { get; set; }
        public virtual TblUser TblUser { get; set; }
        public virtual ICollection<TblPurchaseDetail> TblPurchaseDetails { get; set; }
        public virtual ICollection<TblPurchaseOrderDetail> TblPurchaseOrderDetails { get; set; }
        public virtual ICollection<TblSalesInvoiceDetail> TblSalesInvoiceDetails { get; set; }
        public virtual ICollection<TblSalesOrderDetail> TblSalesOrderDetails { get; set; }
    }

    Here is my ViewModel:

    public class ProductViewModel : WorkspaceViewModel
    {
        #region Constructor
        public ProductViewModel()
        {
            StartApp();
        }
        #endregion // Constructor

        #region Properties
        private IProductDataService _dataService;
        public IProductDataService DataService
        {
            get
            {
                if (_dataService == null)
                {
                    if (IsInDesignMode)
                        _dataService = new ProductDataServiceMock();
                    else
                        _dataService = new ProductDataService();
                }
                return _dataService;
            }
        }

        // Get and set the Model object
        private TblProduct _product;
        public TblProduct Product
        {
            get { return _product ?? (_product = new TblProduct()); }
            set { _product = value; }
        }

        #region Public Properties
        public int ProductId
        {
            get { return Product.ProductId; }
            set
            {
                if (Product.ProductId == value) return;
                Product.ProductId = value;
                RaisePropertyChanged("ProductId");
            }
        }

        public string ProductName
        {
            get { return Product.ProductName; }
            set
            {
                if (Product.ProductName == value) return;
                Product.ProductName = value;
                RaisePropertyChanged(() => ProductName);
            }
        }

        private ObservableCollection<TblProduct> _productRecords;
        public ObservableCollection<TblProduct> ProductRecords
        {
            get { return _productRecords; }
            set
            {
                _productRecords = value;
                RaisePropertyChanged("ProductRecords");
            }
        }

        // Selected product
        private TblProduct _selectedProduct;
        public TblProduct SelectedProduct
        {
            get { return _selectedProduct; }
            set
            {
                _selectedProduct = value;
                if (_selectedProduct != null)
                {
                    this.ProductId = _selectedProduct.ProductId;
                    this.ProductCode = _selectedProduct.ProductCode;
                }
                RaisePropertyChanged("SelectedProduct");
            }
        }
        #endregion // Public Properties
        #endregion // Properties

        #region Commands
        private ICommand _newCommand;
        public ICommand NewCommand
        {
            get
            {
                if (_newCommand == null)
                    _newCommand = new RelayCommand(() => ResetAll());
                return _newCommand;
            }
        }

        private ICommand _saveCommand;
        public ICommand SaveCommand
        {
            get
            {
                if (_saveCommand == null)
                    _saveCommand = new RelayCommand(() => Save());
                return _saveCommand;
            }
        }

        private ICommand _deleteCommand;
        public ICommand DeleteCommand
        {
            get
            {
                if (_deleteCommand == null)
                    _deleteCommand = new RelayCommand(() => Delete());
                return _deleteCommand;
            }
        }
        #endregion // Commands

        #region Methods
        private void StartApp()
        {
            LoadProductCollection();
        }

        private void LoadProductCollection()
        {
            var q = DataService.GetAllProducts();
            this.ProductRecords = new ObservableCollection<TblProduct>(q);
        }

        private void Save()
        {
            if (SelectedOperateMode == OperateModeEnum.OperateMode.New)
            {
                // Pass the Model object into the DataService for save
                DataService.SaveProduct(this.Product);
            }
            else if (SelectedOperateMode == OperateModeEnum.OperateMode.Edit)
            {
                // Pass the Model object into the DataService for update
                DataService.UpdateProduct(this.Product);
            }
            ResetAll();
            LoadProductCollection();
        }
        #endregion // Methods
    }

    Here is my Service class:

    class ProductDataService : IProductDataService
    {
        /// <summary>
        /// Context object of the Entity Framework model
        /// (note: each method below actually creates its own context).
        /// </summary>
        private MaizeEntities Context { get; set; }

        public ProductDataService()
        {
            Context = new MaizeEntities();
        }

        public IEnumerable<TblProduct> GetAllProducts()
        {
            using (var context = new R_MaizeEntities())
            {
                var q = from p in context.TblProducts
                        where p.IsDel == false
                        select p;
                return new ObservableCollection<TblProduct>(q);
            }
        }

        public void SaveProduct(TblProduct _product)
        {
            using (var context = new R_MaizeEntities())
            {
                _product.LastModUserId = GlobalObjects.LoggedUserID;
                _product.LastModDttm = DateTime.Now;
                _product.CompanyId = GlobalObjects.CompanyID;
                context.TblProducts.Add(_product);
                context.SaveChanges();
            }
        }

        public void UpdateProduct(TblProduct _product)
        {
            using (var context = new R_MaizeEntities())
            {
                context.TblProducts.Attach(_product);
                context.Entry(_product).State = EntityState.Modified;
                _product.LastModUserId = GlobalObjects.LoggedUserID;
                _product.LastModDttm = DateTime.Now;
                _product.CompanyId = GlobalObjects.CompanyID;
                context.SaveChanges();
            }
        }

        public void DeleteProduct(int _productId)
        {
            using (var context = new R_MaizeEntities())
            {
                var product = (from c in context.TblProducts
                               where c.ProductId == _productId
                               select c).First();
                product.LastModUserId = GlobalObjects.LoggedUserID;
                product.LastModDttm = DateTime.Now;
                product.IsDel = true;
                context.SaveChanges();
            }
        }
    }

    I exposed my model object in my ViewModel by creating an instance of it with the new keyword, and I also instantiated my DataService class in the VM; I know this creates a strong dependency. So:

    1. What's the best way to expose the Model object in the ViewModel?
    2. What's the best way to use the DataService in the VM?
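
    One common answer to both questions (a sketch, not the only valid MVVM style) is constructor injection: the ViewModel keeps its wrapping properties but receives the service through its constructor, so it depends only on the IProductDataService interface. Whoever composes the application (a DI container, a view-model locator, or a test fixture) decides which concrete service to pass in; the locator class below is hypothetical and only illustrates that composition root.

    public class ProductViewModel : WorkspaceViewModel
    {
        private readonly IProductDataService _dataService;

        // The dependency comes in from outside; nothing in the VM
        // news up a concrete service any more.
        public ProductViewModel(IProductDataService dataService)
        {
            if (dataService == null) throw new ArgumentNullException("dataService");
            _dataService = dataService;
            StartApp();
        }
    }

    // Hypothetical composition root: the only place that knows the concrete types.
    public static class ViewModelLocator
    {
        public static ProductViewModel Create(bool isInDesignMode)
        {
            IProductDataService service = isInDesignMode
                ? (IProductDataService)new ProductDataServiceMock()
                : new ProductDataService();
            return new ProductViewModel(service);
        }
    }

    This moves the IsInDesignMode branch out of the ViewModel entirely (question 2), and for question 1 it keeps the existing pattern of wrapping Product's properties, which is a reasonable compromise when the EF proxy classes do not raise change notifications themselves.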

    Read the article

  • VS2010 / Code Analysis: Turn off a rule for a project without custom ruleset....

    - by TomTom
    ...any chance? The scenario is this: for our company we develop a standard for how code should look. This will be the MS full rule set as it stands now. For some specific projects we may want to turn off specific rules, simply because for a specific project this is a "known exception". Example? CA1026: while perfectly OK in most cases, there are 1-2 specific libraries where we don't want to change those. We also want to avoid having a custom rule set. OTOH, putting in a suppress attribute on every occurrence gets pretty convoluted pretty fast. Is there any way to turn off a code analysis warning for a complete assembly without a custom rule set? We'd rather have that in a specific file (GlobalSuppressions.cs) than in a rule set, for maintenance reasons and to be more explicit ;)
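
    For reference, the GlobalSuppressions.cs form under discussion looks like the sketch below. Treat it as a sketch: the attribute and its signature are real (System.Diagnostics.CodeAnalysis.SuppressMessage), but whether a target-less assembly-level suppression silences member-level occurrences of a given rule is exactly the open question here; for many rules it does not, which is what tends to push teams back toward a ruleset.

    // GlobalSuppressions.cs
    using System.Diagnostics.CodeAnalysis;

    [assembly: SuppressMessage("Microsoft.Design",
        "CA1026:DefaultParametersShouldNotBeUsed",
        Justification = "Known exception: this library deliberately keeps these overloads.")]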

    Read the article

  • Specification, modeling and programming are in principle the same, right?

    - by Gabriel Šcerbák
    In formal specifications based on abstract algebraic types and equational theory, you use formulas of equational theory to specify a theory. A system which satisfies those constraints is called, in formal logic, a model. Modeling is the process of creating a model, which abstracts away the aspects that are unnecessary details for a specific case; a concrete system then has to adhere to the created model in the observed aspects. Programming is the process of creating a program which will have a specific behaviour (will perform specific algorithms), and programming languages, through their different paradigms, enable us to think in certain specific ways, abstracting away some details, usually machine-specific ones. So could we be doing all those things at the same time, because they are in principle the same? Is declarative programming the nearest attempt to do that? Could we use some sort of programming language which would be good for programming as well as for modeling and specification?

    Read the article

  • Can't control connection bit rate using iwconfig with Atheros TL-WN821N (AR7010)

    - by Paul H
    I'm trying to reduce the connection bit rate on my Atheros TP-Link TL-WN821N v3 USB wifi adapter due to frequent instability issues (the reported connection speed drops to 1 Mb/s and I have to physically reconnect the adapter to regain a connection). I know this is a common problem with this device, and I have tried everything I can think of to fix it, including using drivers from linux-backports, compiling and installing a custom firmware (following the instructions at https://wiki.debian.org/ath9k_htc#fw-free) and, as a last resort, using ndiswrapper. When using ndiswrapper, the wifi adapter is stable and operates in g mode at 54 Mb/s (whereas with the default ath9k_htc module, the adapter connects in n mode and the bit rate fluctuates constantly). Unfortunately, with this setup I have to run my processor using only one core, since using SMP with ndiswrapper causes a kernel oops on my system. So I want to lock my bit rate to 54 Mb/s (or less, if need be) for connection stability, using the ath9k_htc module. I've tried 'sudo iwconfig wlan0 rate 54M'; the command runs with no error, but when I check the bit rate with 'sudo iwlist wlan0 bitrate' the command returns:

    wlan0 unknown bit-rate information. Current Bit Rate:78 Mb/s

    Any ideas? Here's some info (hopefully relevant) on my setup: Xubuntu (12.04.3) 64-bit (kernel 3.2.0-55.85-generic) using Network Manager. My router is a Virgin Media VMDG480.

    lshw -C network:
    *-network description: Wireless interface physical id: 1 bus info: usb@1:4 logical name: wlan0 serial: 74:ea:3a:8f:16:b6 capabilities: ethernet physical wireless configuration: broadcast=yes driver=ath9k_htc driverversion=3.2.0-55 firmware=1.3 ip=192.168.0.9 link=yes multicast=yes wireless=IEEE 802.11bgn

    lsusb -v:
    Bus 001 Device 003: ID 0cf3:7015 Atheros Communications, Inc. TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287] Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 255 Vendor Specific Class bDeviceSubClass 255 Vendor Specific Subclass bDeviceProtocol 255 Vendor Specific Protocol bMaxPacketSize0 64 idVendor 0x0cf3 Atheros Communications, Inc.
idProduct 0x7015 TP-Link TL-WN821N v3 802.11n [Atheros AR7010+AR9287] bcdDevice 2.02 iManufacturer 16 ATHEROS iProduct 32 UB95 iSerial 48 12345 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 60 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 6 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x04 EP 4 OUT bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x05 EP 5 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x06 EP 6 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 255 Vendor Specific Class bDeviceSubClass 255 Vendor Specific Subclass bDeviceProtocol 255 Vendor Specific Protocol bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) iwlist wlan0 scanning: wlan0 Scan completed : Cell 01 - Address: C4:3D:C7:3A:1F:5D Channel:1 Frequency:2.412 GHz (Channel 1) Quality=37/70 Signal level=-73 dBm Encryption key:on ESSID:"my essid" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 18 Mb/s 24 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 48 Mb/s Mode:Master Extra:tsf=00000070cca77186 Extra: Last beacon: 5588ms ago IE: Unknown: 0007756E69636F726E IE: Unknown: 010882848B962430486C IE: Unknown: 030101 IE: Unknown: 2A0100 IE: Unknown: 2F0100 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: 32040C121860 IE: Unknown: 2D1AFC181BFFFF000000000000000000000000000000000000000000 IE: Unknown: 3D1601080400000000000000000000000000000000000000 IE: Unknown: DD7E0050F204104A0001101044000102103B00010310470010F99C335D7BAC57FB00137DFA79600220102100074E657467656172102300074E6574676561721024000631323334353610420007303030303030311054000800060050F20400011011000743473331303144100800022008103C0001011049000600372A000120 IE: Unknown: DD090010180203F02C0000 IE: WPA Version 1 Group Cipher : TKIP Pairwise Ciphers (2) : CCMP TKIP Authentication Suites (1) : PSK IE: Unknown: DD180050F2020101800003A4000027A4000042435E0062322F00 iwconfig: lo no wireless extensions. 
wlan0 IEEE 802.11bgn ESSID:"my essid" Mode:Managed Frequency:2.412 GHz Access Point: C4:3D:C7:3A:1F:5D Bit Rate=78 Mb/s Tx-Power=20 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=36/70 Signal level=-74 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:0 Missed beacon:0,
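
    Since ath9k_htc is an nl80211 driver, the newer iw tool may succeed where the legacy iwconfig call is silently ignored. A hedged sketch (assuming iw is installed; whether the driver honours a fixed rate at all is exactly what is in question here):

    # Cap 2.4 GHz transmit bitrates to legacy 802.11g rates (54 Mb/s and below)
    sudo iw dev wlan0 set bitrates legacy-2.4 54

    # Legacy alternative: ask iwconfig to pin the rate instead of treating 54M as a ceiling
    sudo iwconfig wlan0 rate 54M fixed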

    Read the article

  • Python regex to parse a text file, get the items into a list, and count them

    - by Nemo
    I have a text file which contains some data. I'm particularly interested in finding the count of the number of items in v_dims. The v_dims pattern in my text file looks like this:

    v_dims={ "Sales", "Product Family", "Sales Organization", "Region", "Sales Area", "Sales office", "Sales Division", "Sales Person", "Sales Channel", "Sales Order Type", "Sales Number", "Sales Person", "Sales Quantity", "Sales Amount" }

    So I'm thinking of getting all the elements in v_dims, dumping them into a Python list, and then computing len(mylist) to get the count of the items. The challenge is in getting all the elements of v_dims from my text file and putting them in an empty list. I'm particularly interested in the items in v_dims in my text file; the text file has data in the form of the v_dims pattern I showed above, and some data has nested patterns of v_dims. Here's what I have tried, without success; any help is appreciated. TIA.

    import re
    fname = "C:\Users\XXXX\Test.mrk"
    with open(fname, "r") as fo:
        content_as_string = fo.read()
    match = re.findall(r'v_dims={\"(.+?)\"}', content_as_string)

    Though I have a big text file, here's a snippet of the structure of my text file: version "1"; // Computer generated object language file object 'MRKR' "Main" { Data_Type=2, HeaderBlock={ Version_String="6.3 (25)" }, Printer_Info={ Orientation=0, Page_Width=8.50000000, Page_Height=11.00000000, Page_Header="", Page_Footer="", Margin_type=0, Top_Margin=0.50000000, Left_Margin=0.50000000, Bottom_Margin=0.50000000, Right_Margin=0.50000000 }, Marker_Options={ Close_All="TRUE", Hide_Console="FALSE", Console_Left="FALSE", Console_Width=217, Main_Style="Maximized", MDI_Rect={ 0, 0, 892, 1063 } }, Dives={ { Dive="A", Windows={ { View_Index=0, Window_Info={ Window_Rect={ 0, -288, 400, 1008 }, Window_Style="Maximized Front", Window_Name="Theater [Previous Qtr Diveplan-Dive A]" }, Dependent_bool="FALSE", Colset={ Dive_Type="Normal", Dimension_Name="Theater", Action_List={ Actions={ { Action_Type="Select", select_type=5 }, { Action_Type="Select", select_type=0, Key_Names={ "Theater" }, Key_Indexes={ { "AMERICAS" } } }, { Action_Type="Focus", Focus_Rows="True" }, { Action_Type="Dimensions", v_dims={ "Theater", "Product Family", "Division", "Region", "Install at Country Name", "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "PS Flag", "Avalanche Flag", "Product Item Family" }, Xtab_Bool="False", Xtab_Flip="False" }, { Action_Type="Select", select_type=5 }, { Action_Type="Select", select_type=0, Key_Names={ "Theater", "Product Family", "Division", "Region", "Install at Country Name", "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "PS Flag", "Avalanche Flag" }, Key_Indexes={ { "AMERICAS", "ATMOS", "Latin America CS Division", "37000 CS Region", "Mexico", "", "", "", "", "DIRECT", "EMC", "N", "0" } } } } }, Num_Palette_cols=0, Num_Palette_rows=0 }, Format={ Window_Type="Tabular", Tabular={ Num_row_labels=8 } } } } } }, Widget_Set={ Widget_Layout="Vertical", Go_Button=1, Picklist_Width=0, Sort_Subset_Dimensions="TRUE", Order={ } }, Views={ { Data_Type=1, dbname="Previous Qtr Diveplan", diveline_dbname="Current Qtr Diveplan", logical_name="Current Qtr Diveplan", cols={ { name="Total TSS installs", column_type="Calc[Total TSS installs]", output_type="Number", format_string="."
}, { name="TSS Valid Connectivity Records", column_type="Calc[TSS Valid Connectivity Records]", output_type="Number", format_string="." }, { name="% TSS Connectivity Record", column_type="Calc[% TSS Connectivity Record]", output_type="Number" }, { name="TSS Not Applicable", column_type="Calc[TSS Not Applicable]", output_type="Number", format_string="." }, { name="TSS Customer Refusals", column_type="Calc[TSS Customer Refusals]", output_type="Number", format_string="." }, { name="% TSS Refusals", column_type="Calc[% TSS Refusals]", output_type="Number" }, { name="TSS Eligible for Physical Connectivity", column_type="Calc[TSS Eligible for Physical Connectivity]", output_type="Number", format_string="." }, { name="TSS Boxes with Physical Connectivty", column_type="Calc[TSS Boxes with Physical Connectivty]", output_type="Number", format_string="." }, { name="% TSS Physical Connectivity", column_type="Calc[% TSS Physical Connectivity]", output_type="Number" } }, dim_cols={ { name="Model", column_type="Dimension[Model]", output_type="None" }, { name="Model", column_type="Dimension[Model]", output_type="None" }, { name="Connect In Type", column_type="Dimension[Connect In Type]", output_type="None" }, { name="Connect Home Type", column_type="Dimension[Connect Home Type]", output_type="None" }, { name="SymmConnect Enabled", column_type="Dimension[SymmConnect Enabled]", output_type="None" }, { name="Theater", column_type="Dimension[Theater]", output_type="None" }, { name="Division", column_type="Dimension[Division]", output_type="None" }, { name="Region", column_type="Dimension[Region]", output_type="None" }, { name="Sales Order Number", column_type="Dimension[Sales Order Number]", output_type="None" }, { name="Product Item Family", column_type="Dimension[Product Item Family]", output_type="None" }, { name="Item Serial Number", column_type="Dimension[Item Serial Number]", output_type="None" }, { name="Sales Order Deal Number", column_type="Dimension[Sales Order Deal Number]", output_type="None" }, { name="Item Install Date", column_type="Dimension[Item Install Date]", output_type="None" }, { name="SYR Last Dial Home Date", column_type="Dimension[SYR Last Dial Home Date]", output_type="None" }, { name="Maintained By Group", column_type="Dimension[Maintained By Group]", output_type="None" }, { name="PS Flag", column_type="Dimension[PS Flag]", output_type="None" }, { name="Connect Home Refusal Reason", column_type="Dimension[Connect Home Refusal Reason]", output_type="None", col_width=177 }, { name="Cust Name", column_type="Dimension[Cust Name]", output_type="None" }, { name="Sales Order Channel Type", column_type="Dimension[Sales Order Channel Type]", output_type="None" }, { name="Sales Order Type", column_type="Dimension[Sales Order Type]", output_type="None" }, { name="Part Model Key", column_type="Dimension[Part Model Key]", output_type="None" }, { name="Ship Date", column_type="Dimension[Ship Date]", output_type="None" }, { name="Model Number", column_type="Dimension[Model Number]", output_type="None" }, { name="Item Description", column_type="Dimension[Item Description]", output_type="None" }, { name="Customer Classification", column_type="Dimension[Customer Classification]", output_type="None" }, { name="CS Customer Name", column_type="Dimension[CS Customer Name]", output_type="None" }, { name="Install At Customer Number", column_type="Dimension[Install At Customer Number]", output_type="None" }, { name="Install at Country Name", column_type="Dimension[Install at Country Name]", 
output_type="None" }, { name="TLA Serial Number", column_type="Dimension[TLA Serial Number]", output_type="None" }, { name="Product Version", column_type="Dimension[Product Version]", output_type="None" }, { name="Avalanche Flag", column_type="Dimension[Avalanche Flag]", output_type="None" }, { name="Product Family", column_type="Dimension[Product Family]", output_type="None" }, { name="Project Number", column_type="Dimension[Project Number]", output_type="None" }, { name="PROJECT_STATUS", column_type="Dimension[PROJECT_STATUS]", output_type="None" } }, Available_Columns={ "Total TSS installs", "TSS Valid Connectivity Records", "% TSS Connectivity Record", "TSS Not Applicable", "TSS Customer Refusals", "% TSS Refusals", "TSS Eligible for Physical Connectivity", "TSS Boxes with Physical Connectivty", "% TSS Physical Connectivity", "Total Installs", "All Boxes with Valid Connectivty Record", "% All Connectivity Record", "Overall Refusals", "Overall Refusals %", "All Eligible for Physical Connectivty", "Boxes with Physical Connectivity", "% All with Physical Conectivity" }, Remaining_columns={ { name="Total Installs", column_type="Calc[Total Installs]", output_type="Number", format_string="." }, { name="All Boxes with Valid Connectivty Record", column_type="Calc[All Boxes with Valid Connectivty Record]", output_type="Number", format_string="." }, { name="% All Connectivity Record", column_type="Calc[% All Connectivity Record]", output_type="Number" }, { name="Overall Refusals", column_type="Calc[Overall Refusals]", output_type="Number", format_string="." }, { name="Overall Refusals %", column_type="Calc[Overall Refusals %]", output_type="Number" }, { name="All Eligible for Physical Connectivty", column_type="Calc[All Eligible for Physical Connectivty]", output_type="Number" }, { name="Boxes with Physical Connectivity", column_type="Calc[Boxes with Physical Connectivity]", output_type="Number" }, { name="% All with Physical Conectivity", column_type="Calc[% All with Physical Conectivity]", output_type="Number" } }, calcs={ { name="Total TSS installs", definition="Total[Total TSS installs]", ts_flag="Not TS Calc" }, { name="TSS Valid Connectivity Records", definition="Total[PS Boxes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="% TSS Connectivity Record", definition="Total[PS Boxes w/ valid connectivity record (1=yes)] /Total[Total TSS installs]", ts_flag="Not TS Calc" }, { name="TSS Not Applicable", definition="Total[Bozes w/ valid connectivity record (1=yes)]-Total[Boxes Eligible (1=yes)]-Total[TSS Refusals]", ts_flag="Not TS Calc" }, { name="TSS Customer Refusals", definition="Total[TSS Refusals]", ts_flag="Not TS Calc" }, { name="% TSS Refusals", definition="Total[TSS Refusals]/Total[PS Boxes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="TSS Eligible for Physical Connectivity", definition="Total[TSS Eligible]-Total[Exception]", ts_flag="Not TS Calc" }, { name="TSS Boxes with Physical Connectivty", definition="Total[PS Physical Connectivity] - Total[PS Physical Connectivity, SymmConnect Enabled=\"Capable not enabled\"]", ts_flag="Not TS Calc" }, { name="% TSS Physical Connectivity", definition="Total[Boxes w/ phys conn]/Total[Boxes Eligible (1=yes)]", ts_flag="Not TS Calc" }, { name="Total Installs", definition="Total[Total Installs]", ts_flag="Not TS Calc" }, { name="All Boxes with Valid Connectivty Record", definition="Total[Bozes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="% All Connectivity Record", 
definition="Total[Bozes w/ valid connectivity record (1=yes)]/Total[Total Installs]", ts_flag="Not TS Calc" }, { name="Overall Refusals", definition="Total[Overall Refusals]", ts_flag="Not TS Calc" }, { name="Overall Refusals %", definition="Total[Overall Refusals]/Total[Bozes w/ valid connectivity record (1=yes)]", ts_flag="Not TS Calc" }, { name="All Eligible for Physical Connectivty", definition="Total[Boxes Eligible (1=yes)]-Total[Exception]", ts_flag="Not TS Calc" }, { name="Boxes with Physical Connectivity", definition="Total[Boxes w/ phys conn]-Total[Boxes w/ phys conn,SymmConnect Enabled=\"Capable not enabled\"]", ts_flag="Not TS Calc" }, { name="% All with Physical Conectivity", definition="Total[Boxes w/ phys conn]/Total[Boxes Eligible (1=yes)]", ts_flag="Not TS Calc" } }, merge_type="consolidate", merge_dbs={ { dbname="connectivityallproducts.mdl", diveline_dbname="/DI_PSREPORTING/connectivityallproducts.mdl" } }, skip_constant_columns="FALSE", categories={ { name="Geography", dimensions={ "Theater", "Division", "Region", "Install at Country Name" } }, { name="Mappings and Flags", dimensions={ "Connect Home Type", "Connect In Type", "SymmConnect Enabled", "Connect Home Refusal Reason", "Sales Order Channel Type", "Maintained By Group", "Customer Installable", "PS Flag", "Top Level Flag", "Avalanche Flag" } }, { name="Product Information", dimensions={ "Product Family", "Product Item Family", "Product Version", "Item Description" } }, { name="Sales Order Info", dimensions={ "Sales Order Deal Number", "Sales Order Number", "Sales Order Type" } }, { name="Dates", dimensions={ "Item Install Date", "Ship Date", "SYR Last Dial Home Date" } }, { name="Details", dimensions={ "Item Serial Number", "TLA Serial Number", "Part Model Key", "Model Number" } }, { name="Customer Infor", dimensions={ "CS Customer Name", "Install At Customer Number", "Customer Classification", "Cust Name" } }, { name="Other Dimensions", dimensions={ "Model" } } }, Maintain_Category_Order="FALSE", popup_info="false" } } };
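
    A minimal sketch of one way to do it (assuming, as in the snippet above, that a v_dims block never contains nested braces): match each v_dims={...} block with a non-greedy pattern and re.DOTALL so it spans newlines, then split on commas and strip the quotes.

    import re

    fname = r"C:\Users\XXXX\Test.mrk"  # path taken from the question
    with open(fname, "r") as fo:
        content = fo.read()

    # Capture the inside of every v_dims={ ... } block; DOTALL lets '.'
    # match the newlines that a pretty-printed block contains.
    blocks = re.findall(r'v_dims=\{(.*?)\}', content, re.DOTALL)

    for block in blocks:
        # Split on commas, then strip whitespace and the surrounding quotes.
        items = [s.strip().strip('"') for s in block.split(',') if s.strip()]
        print(len(items), items)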

    Read the article

  • Firefox does not redirect to the correct page

    - by Jaakko
    I'm using Ubuntu 10.04 and Firefox 3.6.3. If I type an incomplete URL such as just "youtube", it won't redirect me to the correct site. Instead, the address becomes jar:file:///usr/lib/firefox-3.6.3/chrome/en-US.jar!/locale/browser-region/region.propertiesyoutube and Firefox can't find any page.

    Read the article

  • .screenrc - multiple regions on launch

    - by Rob B
    I know it's possible, but I can't for the life of me figure out how to launch screen with one window in split-region mode. I.e., I have screen set to open multiple windows on launch, but I want window 0 to be split into two regions with an application running in each region. See the sketch below.
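
    A minimal ~/.screenrc sketch for that (the window titles and the top command are placeholders; swap in whatever you actually run in each region):

    # ~/.screenrc - a sketch; 'top' stands in for your own command
    screen -t main 0          # window 0: default shell
    screen -t monitor 1 top   # window 1: runs top
    split                     # split the display into two regions
    select 0                  # show window 0 in the upper region
    focus                     # move to the lower region
    select 1                  # show window 1 there
    focus                     # wrap back to the upper region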

    Read the article

  • Maven: how to include a specific folder or file when assembling a project, depending on whether it is a dev or production build?

    - by user563588
    Using maven-assembly-plugin:

    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <version>2.1</version>
      <configuration>
        <descriptors>
          <descriptor>descriptor.xml</descriptor>
        </descriptors>
        <finalName>xxx-impl-${pom.version}</finalName>
        <outputDirectory>target/assembly</outputDirectory>
        <workDirectory>target/assembly/work</workDirectory>
      </configuration>
    </plugin>

    in the descriptor.xml file we can specify:

    <fileSets>
      <fileSet>
        <directory>src/install</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
    </fileSets>

    Is it possible to include a specific file from this folder or a sub-folder depending on the profile? Or some other way... Like this:

    <profiles>
      <profile>
        <id>dev</id>
        <activation>
          <activeByDefault>false</activeByDefault>
        </activation>
        <build>
          <resources>
            <resource>
              <directory>src/install/dev</directory>
              <includes>
                <include>**/*</include>
              </includes>
            </resource>
          </resources>
        </build>
      </profile>
      <profile>
        <id>prod</id>
        <build>
          <resources>
            <resource>
              <directory>src/install/prod</directory>
              <includes>
                <include>**/*</include>
              </includes>
            </resource>
          </resources>
        </build>
      </profile>
    </profiles>

    But that puts the resources in the jar when packaging, whereas we need them in the zip when assembling, as I already mentioned above :( Thanks!
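
    One hedged way around that (assuming a reasonably recent maven-assembly-plugin, which interpolates ${...} expressions inside the descriptor): let each profile define only a property, and reference it from the descriptor, so the zip picks up a different folder per profile. The property name install.source is made up for this sketch.

    <!-- In the POM: profiles set a property instead of resources. -->
    <profiles>
      <profile>
        <id>dev</id>
        <properties>
          <install.source>src/install/dev</install.source>
        </properties>
      </profile>
      <profile>
        <id>prod</id>
        <properties>
          <install.source>src/install/prod</install.source>
        </properties>
      </profile>
    </profiles>

    <!-- In descriptor.xml: the fileSet follows whichever profile is active. -->
    <fileSets>
      <fileSet>
        <directory>${install.source}</directory>
        <outputDirectory>/</outputDirectory>
      </fileSet>
    </fileSets>

    This keeps the files out of the jar (no <resources> involved) and puts them only in the assembled zip.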

    Read the article

< Previous Page | 101 102 103 104 105 106 107 108 109 110 111 112  | Next Page >