Search Results

Search found 36081 results on 1444 pages for 'object expected'.


  • Pixel plot method errors out without error message.

    - by sonny5
    The following method blows up (big red X on screen) without generating error info. Any ideas why? The call MyPlot.PlotPixel(x, y, Color.BlueViolet, Grf) is the problem; the program runs if it is commented out. My goal is to draw a pixel on a form. Is there a way to increase the pixel size also?

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Collections;
    using System.ComponentModel;
    using System.Windows.Forms;
    using System.Data;

    public class Plot : System.Windows.Forms.Form
    {
        private Size _ClientArea; // keeps the pixels info
        private double _Xspan;
        private double _Yspan;

        public Plot()
        {
            InitializeComponent();
        }

        public Size ClientArea
        {
            set { _ClientArea = value; }
        }

        private void InitializeComponent()
        {
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(400, 300);
            this.Text = "World Plot (world_plot.cs)";
            this.Resize += new System.EventHandler(this.Form1_Resize);
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.doLine);
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.TransformPoints); // new
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.DrawRectangleFloat);
            this.Paint += new System.Windows.Forms.PaintEventHandler(this.DrawWindow_Paint);
        }

        private void DrawWindow_Paint(object sender, PaintEventArgs e)
        {
            Graphics Grf = e.Graphics;
            pixPlot(Grf);
        }

        static void Main()
        {
            Application.Run(new Plot());
        }

        private void doLine(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            // no transforms done yet!!!
            Graphics g = e.Graphics;
            g.FillRectangle(Brushes.White, this.ClientRectangle);
            Pen p = new Pen(Color.Black);
            g.DrawLine(p, 0, 0, 100, 100); // draw DOWN in y, which is positive since no matrix called
            p.Dispose();
        }

        public void PlotPixel(double X, double Y, Color C, Graphics G)
        {
            Bitmap bm = new Bitmap(1, 1);
            bm.SetPixel(0, 0, C);
            G.DrawImageUnscaled(bm, TX(X), TY(Y));
        }

        private int TX(double X) // transform real coordinates to pixels for the X-axis
        {
            double w;
            w = _ClientArea.Width / _Xspan * X + _ClientArea.Width / 2;
            return Convert.ToInt32(w);
        }

        private int TY(double Y) // transform real coordinates to pixels for the Y-axis
        {
            double w;
            w = _ClientArea.Height / _Yspan * Y + _ClientArea.Height / 2;
            return Convert.ToInt32(w);
        }

        private void pixPlot(Graphics Grf)
        {
            Plot MyPlot = new Plot();
            double x = 12.0;
            double y = 10.0;
            MyPlot.ClientArea = this.ClientSize;
            Console.WriteLine("x = {0}", x);
            Console.WriteLine("y = {0}", y);
            //MyPlot.PlotPixel(x, y, Color.BlueViolet, Grf); // blows up
        }

        private void DrawRectangleFloat(object sender, PaintEventArgs e)
        {
            // Create pen.
            Pen penBlu = new Pen(Color.Blue, 2);
            // Create location and size of rectangle.
            float x = 0.0F;
            float y = 0.0F;
            float width = 200.0F;
            float height = 200.0F; // translate DOWN by 200 pixels
            // Draw rectangle to screen.
            e.Graphics.DrawRectangle(penBlu, x, y, width, height);
        }

        private void TransformPoints(object sender, System.Windows.Forms.PaintEventArgs e)
        {
            // after transforms
            Graphics g = this.CreateGraphics();
            Pen penGrn = new Pen(Color.Green, 3);
            Matrix myMatrix2 = new Matrix(1, 0, 0, -1, 0, 0); // flip Y axis with -1
            g.Transform = myMatrix2;
            g.TranslateTransform(0, 200, MatrixOrder.Append); // translate DOWN the same distance as the rectangle...
            // ...so this will put it at lower left corner
            g.DrawLine(penGrn, 0, 0, 100, 90); // notice that y 90 is going UP
        }

        private void Form1_Resize(object sender, System.EventArgs e)
        {
            Invalidate();
        }
    }
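
    One likely cause, offered only as a guess from the posted code: _Xspan and _Yspan are never assigned, so TX() and TY() divide by zero; Convert.ToInt32 on the resulting infinity then throws inside the Paint handler, and WinForms surfaces that as the red X rather than an error dialog. Below is a minimal sketch of a PlotPixel variant that guards against this and also takes a size argument (the size parameter is an addition for illustration, not part of the original code), using FillRectangle so the plotted "pixel" can be drawn larger than one pixel:

    // Sketch only: assumes _Xspan/_Yspan are set to the logical width/height of the
    // plot before painting; "size" is a hypothetical parameter for a bigger "pixel".
    public void PlotPixel(double X, double Y, Color C, Graphics G, int size)
    {
        if (_Xspan == 0 || _Yspan == 0)
            throw new InvalidOperationException("Set _Xspan and _Yspan before plotting.");

        using (var brush = new SolidBrush(C))
        {
            // FillRectangle draws a size x size block at the transformed position.
            G.FillRectangle(brush, TX(X), TY(Y), size, size);
        }
    }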


  • Help needed on an SQL configuration problem.

    - by user321048
    I have been banging my head against this one for more than two weeks and still don't know what the problem is (I can't narrow it down). I have a solution with 3 projects in it, all written in C# with LINQ. One project is the main web site, another is the data layer (communication with the database), and the third is a small custom CMS. The problem is the following: when I publish the site on a hosting provider it all works perfectly, but the site needed to be hosted on the client's server, so I had to set that up. The catch is that I also had to configure the client's server myself, because they don't have an administrator employed (I know, I know ;) ). I somehow managed to set it up the first time, but then a problem appeared. My main web site works just as it is supposed to - it reads from (communicates with) the database - but my CMS does not. It shows the first log-in page, but after that, when I try to log in, it throws the following error:

    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace: [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections.
(provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)] System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4846887 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194 System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity, SqlConnection owningObject) +4860189 System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) +90 System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) +342 System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +221 System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +189 System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +185 System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +31 System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +433 System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66 System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +499 System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +65 System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +117 System.Data.SqlClient.SqlConnection.Open() +122 System.Data.Linq.SqlClient.SqlConnectionManager.UseConnection(IConnectionUser user) +44 System.Data.Linq.SqlClient.SqlProvider.get_IsSqlCe() +45 System.Data.Linq.SqlClient.SqlProvider.InitializeProviderMode() +20 System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) +57 System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute(Expression expression) +23 System.Linq.Queryable.Count(IQueryable`1 source) +240 CMS.Security.UserProfile.LoginUser() in C:\Documents and Settings\Dimitar\Desktop\New Mepso Final 08_04\CMS\Classes\UserProfile.cs:132 CMS.Default.Login1_Authenticate(Object sender, AuthenticateEventArgs e) in C:\Documents and Settings\Dimitar\Desktop\New Mepso Final 08_04\CMS\Default.aspx.cs:37 System.Web.UI.WebControls.Login.OnAuthenticate(AuthenticateEventArgs e) +108 System.Web.UI.WebControls.Login.AttemptLogin() +115 System.Web.UI.WebControls.Login.OnBubbleEvent(Object source, EventArgs e) +101 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +37 System.Web.UI.WebControls.Button.OnCommand(CommandEventArgs e) +118 System.Web.UI.WebControls.Button.RaisePostBackEvent(String 
eventArgument) +166 System.Web.UI.WebControls.Button.System.Web.UI.IPostBackEventHandler.RaisePostBackEvent(String eventArgument) +10 System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +13 System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData) +36 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1565

    Maybe this is a dumb question, but I cannot find the root of the problem, let alone the solution. So far I have tried the following:
    - setting the timeout in the connection string to a higher value
    - configuring, and after that turning off, the server firewall
    - checking the connection string over and over again (it is the same for all three projects and is saved in web.config)
    Important note: I have tried executing the project from VS2008 with a connection string to the same database and the results are the same. That's why I think the problem is SQL Server 2005 and not IIS7. Any bit of information is more than welcome.
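
    One way to isolate whether the problem is the application or the server's network configuration is a tiny console test run outside IIS; the server name, instance, database and credentials below are placeholders, not values taken from the question:

    // Minimal connectivity check (hypothetical connection string values).
    // If this also fails with the same error, the issue is SQL Server network
    // configuration (TCP/IP protocol enabled, SQL Browser running for named
    // instances, firewall allowing port 1433), not the web application itself.
    using System;
    using System.Data.SqlClient;

    class ConnectionTest
    {
        static void Main()
        {
            var connectionString =
                "Data Source=CLIENTSERVER\\SQLEXPRESS;Initial Catalog=MyDb;" +
                "User ID=appUser;Password=secret;Connect Timeout=15";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Connected to server version: " + connection.ServerVersion);
            }
        }
    }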


  • How to Plug a Small Hole in NetBeans JSF (Join Table) Code Generation

    - by MarkH
    I was asked recently to provide an assist with designing and building a small-but-vital application that had at its heart some basic CRUD (Create, Read, Update, & Delete) functionality, built upon an Oracle database, to be accessible from various locations. Working from the stated requirements, I fleshed out the basic application and database designs and, once validated, set out to complete the first iteration for review. Using SQL Developer, I created the requisite tables, indices, and sequences for our first run. One of the tables was a many-to-many join table with three fields: one a primary key for that table, the other two being primary keys for the other tables, represented as foreign keys in the join table. Here is a simplified example of the trio of tables: Once the database was in decent shape, I fired up NetBeans to let it have first shot at the code. NetBeans does a great job of generating a mountain of essential code, saving developers what must be millions of hours of effort each year by building a basic foundation with a few clicks and keystrokes. Lest you think it (or any tool) can do everything for you, however, occasionally something tosses a paper clip into the delicate machinery and makes you open things up to fix them. Join tables apparently qualify.  :-) In the case above, the entity class generated for the join table (New Entity Classes from Database) included an embedded object consisting solely of the two foreign key fields as attributes, in addition to an object referencing each one of the "component" tables. The Create page generated (New JSF Pages from Entity Classes) worked well to a point, but when trying to save, we were greeted with an error: Transaction aborted. Hmm. A quick debugger session later and I'd identified the issue: when trying to persist the new join-table object, the embedded "foreign-keys-only" object still had null values for its two (required value) attributes...even though the embedded table objects had populated key attributes. Here's the simple fix: In the join-table controller class, find the public String create() method. It will look something like this:     public String create() {        try {            getFacade().create(current);            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));            return prepareCreate();        } catch (Exception e) {            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));            return null;        }    } To restore balance to the force, modify the create() method as follows (changes in red):     public String create() {         try {            // Add the next two lines to resolve:            current.getJoinEntityPK().setTbl1id(current.getTbl1().getId().toBigInteger());            current.getJoinEntityPK().setTbl2id(current.getTbl2().getId().toBigInteger());            getFacade().create(current);            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("JoinEntityCreated"));            return prepareCreate();        } catch (Exception e) {            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));            return null;        }    } I'll be refactoring this code shortly, but for now, it works. Iteration one is complete and being reviewed, and we've met the milestone. Here's to happy endings (and customers)! All the best,Mark
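
    For readers who have not seen the generated classes, the join-table entity the post refers to typically has the shape sketched below; the class and column names are invented to match the simplified Tbl1/Tbl2 example, not taken from the actual project. The embedded ID holds only the two foreign-key columns, which is why it has to be populated by hand before persisting, as described above.

    import java.io.Serializable;
    import java.math.BigInteger;
    import javax.persistence.*;

    // JoinEntityPK.java (hypothetical generated embedded key)
    @Embeddable
    public class JoinEntityPK implements Serializable {
        @Column(name = "TBL1_ID") private BigInteger tbl1id;
        @Column(name = "TBL2_ID") private BigInteger tbl2id;
        // getters/setters, equals() and hashCode() omitted
    }

    // JoinEntity.java (hypothetical generated join-table entity)
    @Entity
    public class JoinEntity implements Serializable {
        @EmbeddedId
        private JoinEntityPK joinEntityPK;

        @ManyToOne @JoinColumn(name = "TBL1_ID", insertable = false, updatable = false)
        private Tbl1 tbl1;

        @ManyToOne @JoinColumn(name = "TBL2_ID", insertable = false, updatable = false)
        private Tbl2 tbl2;
        // getters/setters omitted
    }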


  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that these perform poorly and it is then when developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index has brought a radical improvement in query execution times.  The execution times went down from 4 seconds to 2 milliseconds! E-commerce solution based on Oracle ATG – Endeca In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800.000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive steps in the price calculation discount process were the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimise this query are described below; Starting point Initial query was: String filter = "levelId = :iLevelId AND  salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+ "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+ "AND areaId = :iAreaId AND endDate >=  :iEndDate AND startDate <= :iStartDate"; Map<String, Object> params = new HashMap<String, Object>(10); // Fill all parameters. params.put("iLevelId", xxxx); // Executing filter. Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter); With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds. First round of optimizations The first optimisation step was the creation of separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows: globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX" ) , false, null); After adding these indexes the query execution time was reduced to between 450 ms and 1s. However, these execution times were still not good enough.  Second round of optimizations In this optimisation phase a Coherence query explain plan was used to identify how many entires each index reduced the results set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal".  
Parameters associated with object attributes of high cardinality should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed first. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values it was more optimal to have a different query changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant case:

String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
    "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
    "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
    "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
    "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
    "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
    "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

Third and last round of optimizations

The third and final optimization was to introduce a composite index. However, this meant that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multiple extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used for the LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.
The composite index created was as follows:

ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
    new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
    new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
    new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
MultiExtractor me = new MultiExtractor(ve);
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
globalDiscountsCache.addIndex(me, false, null);

And the final query was:

ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
    new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
    new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
    new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
MultiExtractor me = new MultiExtractor(ve);
// Fill composite parameters.
String SalesCompanyId = xxxx;
...
AndFilter composite = new AndFilter(new EqualsFilter(me,
    Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
    new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
AndFilter finalFilter = new AndFilter(composite, new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

Using this composite index the query improved dramatically and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index the order of the attributes inside the ValueExtractor was not relevant.
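
The query explain plan mentioned above can be produced with Coherence's QueryRecorder aggregator (available in recent Coherence releases). A minimal sketch, reusing the cache and filter names from the post and assuming the necessary classes are on the classpath:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.aggregator.QueryRecorder;

// Sketch: run an EXPLAIN (or TRACE) of the filter against the cache and print the plan.
NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
QueryRecorder recorder = new QueryRecorder(QueryRecorder.RecordType.EXPLAIN);
Object plan = globalDiscountsCache.aggregate(finalFilter, recorder);
System.out.println(plan);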


  • Creating a new plugin for mpld3

    - by sjp14051
    Toward learning how to create a new mpld3 plugin, I took an existing example, LinkedDataPlugin (http://mpld3.github.io/examples/heart_path.html), and modified it slightly by deleting references to lines object. That is, I created the following: class DragPlugin(plugins.PluginBase): JAVASCRIPT = r""" mpld3.register_plugin("drag", DragPlugin); DragPlugin.prototype = Object.create(mpld3.Plugin.prototype); DragPlugin.prototype.constructor = DragPlugin; DragPlugin.prototype.requiredProps = ["idpts", "idpatch"]; DragPlugin.prototype.defaultProps = {} function DragPlugin(fig, props){ mpld3.Plugin.call(this, fig, props); }; DragPlugin.prototype.draw = function(){ var patchobj = mpld3.get_element(this.props.idpatch, this.fig); var ptsobj = mpld3.get_element(this.props.idpts, this.fig); var drag = d3.behavior.drag() .origin(function(d) { return {x:ptsobj.ax.x(d[0]), y:ptsobj.ax.y(d[1])}; }) .on("dragstart", dragstarted) .on("drag", dragged) .on("dragend", dragended); patchobj.path.attr("d", patchobj.datafunc(ptsobj.offsets, patchobj.pathcodes)); patchobj.data = ptsobj.offsets; ptsobj.elements() .data(ptsobj.offsets) .style("cursor", "default") .call(drag); function dragstarted(d) { d3.event.sourceEvent.stopPropagation(); d3.select(this).classed("dragging", true); } function dragged(d, i) { d[0] = ptsobj.ax.x.invert(d3.event.x); d[1] = ptsobj.ax.y.invert(d3.event.y); d3.select(this) .attr("transform", "translate(" + [d3.event.x,d3.event.y] + ")"); patchobj.path.attr("d", patchobj.datafunc(ptsobj.offsets, patchobj.pathcodes)); } function dragended(d, i) { d3.select(this).classed("dragging", false); } } mpld3.register_plugin("drag", DragPlugin); """ def __init__(self, points, patch): print "Points ID : ", utils.get_id(points) self.dict_ = {"type": "drag", "idpts": utils.get_id(points), "idpatch": utils.get_id(patch)} However, when I try to link the plugin to a figure, as in plugins.connect(fig, DragPlugin(points[0], patch)) I get an error, 'module' is not callable, pointing to this line. What does this mean and why doesn't it work? Thanks. I'm adding additional code to show that linking more than one Plugin might be problematic. But this may be entirely due to some silly mistake on my part, or there is a way around it. The following code based on LinkedViewPlugin generates three panels, in which the top and the bottom panel are supposed to be identical. Mouseover in the middle panel was expected to control the display in the top and bottom panels, but updates occur in the bottom panel only. It would be nice to be able to figure out how to reflect the changes in multiple panels. Thanks. 
import matplotlib import matplotlib.pyplot as plt import numpy as np import mpld3 from mpld3 import plugins, utils class LinkedView(plugins.PluginBase): """A simple plugin showing how multiple axes can be linked""" JAVASCRIPT = """ mpld3.register_plugin("linkedview", LinkedViewPlugin); LinkedViewPlugin.prototype = Object.create(mpld3.Plugin.prototype); LinkedViewPlugin.prototype.constructor = LinkedViewPlugin; LinkedViewPlugin.prototype.requiredProps = ["idpts", "idline", "data"]; LinkedViewPlugin.prototype.defaultProps = {} function LinkedViewPlugin(fig, props){ mpld3.Plugin.call(this, fig, props); }; LinkedViewPlugin.prototype.draw = function(){ var pts = mpld3.get_element(this.props.idpts); var line = mpld3.get_element(this.props.idline); var data = this.props.data; function mouseover(d, i){ line.data = data[i]; line.elements().transition() .attr("d", line.datafunc(line.data)) .style("stroke", this.style.fill); } pts.elements().on("mouseover", mouseover); }; """ def __init__(self, points, line, linedata): if isinstance(points, matplotlib.lines.Line2D): suffix = "pts" else: suffix = None self.dict_ = {"type": "linkedview", "idpts": utils.get_id(points, suffix), "idline": utils.get_id(line), "data": linedata} class LinkedView2(plugins.PluginBase): """A simple plugin showing how multiple axes can be linked""" JAVASCRIPT = """ mpld3.register_plugin("linkedview", LinkedViewPlugin2); LinkedViewPlugin2.prototype = Object.create(mpld3.Plugin.prototype); LinkedViewPlugin2.prototype.constructor = LinkedViewPlugin2; LinkedViewPlugin2.prototype.requiredProps = ["idpts", "idline", "data"]; LinkedViewPlugin2.prototype.defaultProps = {} function LinkedViewPlugin2(fig, props){ mpld3.Plugin.call(this, fig, props); }; LinkedViewPlugin2.prototype.draw = function(){ var pts = mpld3.get_element(this.props.idpts); var line = mpld3.get_element(this.props.idline); var data = this.props.data; function mouseover(d, i){ line.data = data[i]; line.elements().transition() .attr("d", line.datafunc(line.data)) .style("stroke", this.style.fill); } pts.elements().on("mouseover", mouseover); }; """ def __init__(self, points, line, linedata): if isinstance(points, matplotlib.lines.Line2D): suffix = "pts" else: suffix = None self.dict_ = {"type": "linkedview", "idpts": utils.get_id(points, suffix), "idline": utils.get_id(line), "data": linedata} fig, ax = plt.subplots(3) # scatter periods and amplitudes np.random.seed(0) P = 0.2 + np.random.random(size=20) A = np.random.random(size=20) x = np.linspace(0, 10, 100) data = np.array([[x, Ai * np.sin(x / Pi)] for (Ai, Pi) in zip(A, P)]) points = ax[1].scatter(P, A, c=P + A, s=200, alpha=0.5) ax[1].set_xlabel('Period') ax[1].set_ylabel('Amplitude') # create the line object lines = ax[0].plot(x, 0 * x, '-w', lw=3, alpha=0.5) ax[0].set_ylim(-1, 1) ax[0].set_title("Hover over points to see lines") linedata = data.transpose(0, 2, 1).tolist() plugins.connect(fig, LinkedView(points, lines[0], linedata)) # second set of lines exactly the same but in a different panel lines2 = ax[2].plot(x, 0 * x, '-w', lw=3, alpha=0.5) ax[2].set_ylim(-1, 1) ax[2].set_title("Hover over points to see lines #2") plugins.connect(fig, LinkedView2(points, lines2[0], linedata)) mpld3.show()
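
    A plausible explanation for the single-panel updates, offered as a guess rather than a verified diagnosis: both plugins attach a plain "mouseover" listener to the same scatter points, and d3's selection.on() replaces an existing listener of the same type, so only the last plugin's handler (the one driving the bottom panel) survives. Driving several lines from a single plugin avoids the clash; the sketch below is adapted from the question's own plugin code, with the plugin name "multilinkedview" and the "idlines" property invented for illustration:

    # Sketch: one plugin that updates every linked line on mouseover.
    class MultiLinkedView(plugins.PluginBase):
        JAVASCRIPT = """
        mpld3.register_plugin("multilinkedview", MultiLinkedViewPlugin);
        MultiLinkedViewPlugin.prototype = Object.create(mpld3.Plugin.prototype);
        MultiLinkedViewPlugin.prototype.constructor = MultiLinkedViewPlugin;
        MultiLinkedViewPlugin.prototype.requiredProps = ["idpts", "idlines", "data"];
        MultiLinkedViewPlugin.prototype.defaultProps = {}
        function MultiLinkedViewPlugin(fig, props){ mpld3.Plugin.call(this, fig, props); };
        MultiLinkedViewPlugin.prototype.draw = function(){
          var pts = mpld3.get_element(this.props.idpts);
          var data = this.props.data;
          var lines = this.props.idlines.map(function(id){ return mpld3.get_element(id); });
          function mouseover(d, i){
            lines.forEach(function(line){
              line.data = data[i];
              line.elements().transition().attr("d", line.datafunc(line.data));
            });
          }
          pts.elements().on("mouseover", mouseover);
        };
        """
        def __init__(self, points, lines, linedata):
            suffix = "pts" if isinstance(points, matplotlib.lines.Line2D) else None
            self.dict_ = {"type": "multilinkedview",
                          "idpts": utils.get_id(points, suffix),
                          "idlines": [utils.get_id(l) for l in lines],
                          "data": linedata}

    # usage (replacing the two separate connect calls):
    # plugins.connect(fig, MultiLinkedView(points, [lines[0], lines2[0]], linedata))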


  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions.] Here's the deal: I'm implementing a system that underlies user applications and protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this:

    public class MyAtomicObject
    {
        // These are just examples of fields you may want to have in your class.
        public virtual int x { get; set; }
        public virtual List<int> list { get; set; }
        public virtual MyClassA objA { get; set; }
        public virtual MyClassB objB { get; set; }
    }

    As you can see, they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in, extend their class, and implement each get and set myself in order to handle possible concurrent accesses, etc. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this:

    1. The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects.
    2. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data.
    3. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there.

    So basically the concept of a transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of the successful commit. (This is as opposed to its updates becoming visible as it makes them.) In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. This way, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible to everyone else at once. It would be something like this:

    myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do...

    myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so in the dictionary we'll still have a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other.
I can do this with reflection, but that's not very pretty. Is there any other way to do it? Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this: public class Wrapper<T> : T { // This would be the pointer to the global copy. The local data is contained in whatever fields the wrapper inherits from T. private T thisPtr; } I do need this wrapper for comparisons: if I have a dictionary that has an entry with the global copy as key, if I look it up with the clone, like this: dictionary[updatedCloneOfListInTheWriteLog] I need it to return the entry, that is, to think that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary. Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class (MyClass) would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper, anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone. I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
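
    For the specific list case in Problem #1, committing "in place" keeps every existing reference valid, so a dictionary holding the global list sees the committed data. Below is a minimal sketch, ignoring the synchronization the commit itself would need; the reflection-based helper is the "not very pretty" fallback the question mentions for arbitrary objects, with names invented for illustration:

    using System.Collections.Generic;
    using System.Reflection;

    // Sketch: copy the clone's contents into the global list without replacing the
    // reference, so anything already pointing at the global list observes the commit.
    private static void CommitList<T>(List<T> globalCopy, List<T> updatedClone)
    {
        globalCopy.Clear();
        globalCopy.AddRange(updatedClone);
    }

    // Sketch: generic field-by-field copy via reflection for non-list objects
    // (hypothetical helper, not part of the described system).
    private static void CopyFields(object target, object source)
    {
        var fields = source.GetType().GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        foreach (var field in fields)
            field.SetValue(target, field.GetValue(source));
    }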


  • Reading data from a database and binding it to a custom ListView

    - by N.K.
    I try to read data from a database i have made and to show some of the data in a row at a custom ListView. I can not understand what is my mistake. This is my code: public class EsodaMainActivity extends Activity { public static final String ROW_ID = "row_id"; //Intent extra key private ListView esodaListView; // the ListActivitys ListView private SimpleCursorAdapter esodaAdapter; // adapter for ListView DatabaseConnector databaseConnector = new DatabaseConnector(EsodaMainActivity.this); // called when the activity is first created @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_esoda_main); esodaListView = (ListView)findViewById(R.id.esodaList); esodaListView.setOnItemClickListener(viewEsodaListener); databaseConnector.open(); //Cursor cursor= databaseConnector.query("esoda", new String[] // {"name", "amount"}, null,null,null); Cursor cursor=databaseConnector.getAllEsoda(); startManagingCursor(cursor); // map each esoda to a TextView in the ListView layout // The desired columns to be bound String[] from = new String[] {"name","amount"}; // built an String array named "from" //The XML defined views which the data will be bound to int[] to = new int[] { R.id.esodaTextView, R.id.amountTextView}; // built an int array named "to" // EsodaMainActivity.this = The context in which the ListView is running // R.layout.esoda_list_item = Id of the layout that is used to display each item in ListView // null = // from = String array containing the column names to display // to = Int array containing the column names to display esodaAdapter = new SimpleCursorAdapter (this, R.layout.esoda_list_item, cursor, from, to); esodaListView.setAdapter(esodaAdapter); // set esodaView's adapter } // end of onCreate method @Override protected void onResume() { super.onResume(); // call super's onResume method // create new GetEsodaTask and execute it // GetEsodaTask is an AsyncTask object new GetEsodaTask().execute((Object[]) null); } // end of onResume method // onStop method is executed when the Activity is no longer visible to the user @Override protected void onStop() { Cursor cursor= esodaAdapter.getCursor(); // gets current cursor from esodaAdapter if (cursor != null) cursor.deactivate(); // deactivate cursor esodaAdapter.changeCursor(null); // adapter now has no cursor (removes the cursor from the CursorAdapter) super.onStop(); } // end of onStop method // this class performs db query outside the GUI private class GetEsodaTask extends AsyncTask<Object, Object, Cursor> { // we create a new DatabaseConnector obj // EsodaMainActivity.this = Context DatabaseConnector databaseConnector = new DatabaseConnector(EsodaMainActivity.this); // perform the db access @Override protected Cursor doInBackground(Object... 
params) { databaseConnector.open(); // get a cursor containing call esoda return databaseConnector.getAllEsoda(); // the cursor returned by getAllContacts() is passed to method onPostExecute() } // end of doInBackground method // here we use the cursor returned from the doInBackground() method @Override protected void onPostExecute(Cursor result) { esodaAdapter.changeCursor(result); // set the adapter's Cursor databaseConnector.close(); } // end of onPostExecute() method } // end of GetEsodaTask class // creates the Activity's menu from a menu resource XML file @Override public boolean onCreateOptionsMenu(Menu menu) { super.onCreateOptionsMenu(menu); MenuInflater inflater = getMenuInflater(); inflater.inflate(R.menu.esoda_menu, menu); // inflates(eµf?s?) esodamainactivity_menu.xml to the Options menu return true; } // end of onCreateOptionsMenu() method //handles choice from options menu - is executed when the user touches a MenuItem @Override public boolean onOptionsItemSelected(MenuItem item) { // creates a new Intent to launch the AddEditEsoda Activity // EsodaMainActivity.this = Context from which the Activity will be launched // AddEditEsoda.class = target Activity Intent addNewEsoda = new Intent(EsodaMainActivity.this, AddEditEsoda.class); startActivity(addNewEsoda); return super.onOptionsItemSelected(item); } // end of method onPtionsItemSelected() // event listener that responds to the user touching a esoda's name in the ListView OnItemClickListener viewEsodaListener = new OnItemClickListener() { @Override public void onItemClick(AdapterView<?> arg0, View arg1, int arg2, long arg3) { // create an intent to launch the ViewEsoda Activity Intent viewEsoda = new Intent(EsodaMainActivity.this, ViewEsoda.class); // pass the selected esoda's row ID as an extra with the Intent viewEsoda.putExtra(ROW_ID, arg3); startActivity(viewEsoda); // start viewEsoda.class Activity } // end of onItemClick() method }; // end of viewEsodaListener } // end of EsodaMainActivity class The statement: Cursor cursor=databaseConnector.getAllEsoda(); queries all data (columns) From the data I want to show at my custom ListView 2 of them: "name" and "amount". But I still get a debugger error. Please help.
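
    One detail worth checking, offered as a guess since the actual exception text is not shown: SimpleCursorAdapter, like every CursorAdapter, requires the cursor to contain a column named "_id". If the esoda table's primary key has a different name, the query can alias it. A hedged sketch of what getAllEsoda() might return; the table, column names and the "database" field are assumptions based on the question:

    // Inside DatabaseConnector (sketch): expose the primary key as "_id" so that
    // SimpleCursorAdapter can consume the cursor.
    public Cursor getAllEsoda() {
        return database.query(
                "esoda",                                   // table
                new String[] {"_id", "name", "amount"},    // must include "_id"
                null, null, null, null,
                "name");                                   // order by
    }

    // If the key column is not literally named "_id", alias it in a raw query instead:
    // return database.rawQuery("SELECT id AS _id, name, amount FROM esoda", null);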


  • Using methods on 2 input files - 2nd is printing multiple times - Java

    - by Aaa
    I have the following code to read in text, store in a hashmap as bigrams (with other methods to sort them by frequency and do v. v. basic additive smoothing. I had it working great for one language input file (english) and then I want to expand it for the second language input file (japanese - doens;t matter what it is I suppose) using the same methods but the Japanese bigram hashmap is printing out 3 times in a row with diff. values. I've tried using diff text in the input file, making sure there are no gaps in text etc. I've also put print statements at certain places in the Japanese part of the code to see if I can get any clues but all the print statements are printing each time so I can't work out if it is looping at a certain place. I have gone through it with a fine toothcomb but am obviously missing something and slowly going crazy here - any help would be appreciated. thanks in advance... package languagerecognition2; import java.lang.String; import java.io.InputStreamReader; import java.util.*; import java.util.Iterator; import java.util.List.*; import java.util.ArrayList; import java.util.AbstractMap.*; import java.lang.Object; import java.io.*; import java.util.Enumeration; import java.util.Arrays; import java.lang.Math; public class Main { /** public static void main(String[] args) { //training English ----------------------------------------------------------------- File file = new File("english1.txt"); StringBuffer contents = new StringBuffer(); BufferedReader reader = null; try { reader = new BufferedReader(new FileReader(file)); String test = null; //test = reader.readLine(); // repeat until all lines are read while ((test = reader.readLine()) != null) { test = test.toLowerCase(); char[] charArrayEng = test.toCharArray(); HashMap<String, Integer> hashMapEng = new HashMap<String, Integer>(bigrams(charArrayEng)); LinkedHashMap<String, Integer> sortedListEng = new LinkedHashMap<String, Integer>(sort(hashMapEng)); int sizeEng=sortedListEng.size(); System.out.println("Total count of English bigrams is " + sizeEng); LinkedHashMap<String, Integer> smoothedListEng = new LinkedHashMap<String, Integer>(smooth(sortedListEng, sizeEng)); //print linkedHashMap to check values Set set= smoothedListEng.entrySet(); Iterator iter = set.iterator ( ) ; System.out.println("Beginning English"); while ( iter.hasNext()) { Map.Entry entry = ( Map.Entry ) iter.next ( ) ; Object key = entry.getKey ( ) ; Object value = entry.getValue ( ) ; System.out.println( key+" : " + value); } System.out.println("End English"); }//end while }//end try catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } finally { try { if (reader != null) { reader.close(); } } catch (IOException e) { e.printStackTrace(); } } //End training English----------------------------------------------------------- //Training japanese-------------------------------------------------------------- File file2 = new File("japanese1.txt"); StringBuffer contents2 = new StringBuffer(); BufferedReader reader2 = null; try { reader2 = new BufferedReader(new FileReader(file2)); String test2 = null; //repeat until all lines are read while ((test2 = reader2.readLine()) != null) { test2 = test2.toLowerCase(); char[] charArrayJap = test2.toCharArray(); HashMap<String, Integer> hashMapJap = new HashMap<String, Integer>(bigrams(charArrayJap)); //System.out.println( "bigrams stage"); LinkedHashMap<String, Integer> sortedListJap = new LinkedHashMap<String, Integer>(sort(hashMapJap)); 
//System.out.println( "sort stage"); int sizeJap=sortedListJap.size(); //System.out.println("Total count of Japanese bigrams is " + sizeJap); LinkedHashMap<String, Integer> smoothedListJap = new LinkedHashMap<String, Integer>(smooth(sortedListJap, sizeJap)); System.out.println( "smooth stage"); //print linkedHashMap to check values Set set2= smoothedListJap.entrySet(); Iterator iter2 = set2.iterator(); System.out.println("Beginning Japanese"); while ( iter2.hasNext()) { Map.Entry entry2 = ( Map.Entry ) iter2.next ( ) ; Object key = entry2.getKey ( ) ; Object value = entry2.getValue ( ) ; System.out.println( key+" : " + value); }//end while System.out.println("End Japanese"); }//end while }//end try catch (FileNotFoundException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } finally { try { if (reader2 != null) { reader2.close(); } } catch (IOException e) { e.printStackTrace(); } } //end training Japanese--------------------------------------------------------- } //end main (inner)
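
    The triple printing most likely comes from the loop structure: everything, including the final printout of the map, sits inside the while loop that reads the file line by line, so a Japanese file with three lines produces three separate maps and three printouts. Accumulating counts into one map and printing once after the loop avoids that. A minimal sketch, reusing the bigrams(), sort() and smooth() methods from the question and assuming bigrams() returns a Map<String, Integer>:

    // Sketch: build a single frequency map for the whole file, then report once.
    HashMap<String, Integer> hashMapJap = new HashMap<String, Integer>();
    String line;
    while ((line = reader2.readLine()) != null) {
        line = line.toLowerCase();
        // merge this line's bigram counts into the running totals
        for (Map.Entry<String, Integer> e : bigrams(line.toCharArray()).entrySet()) {
            Integer old = hashMapJap.get(e.getKey());
            hashMapJap.put(e.getKey(), (old == null ? 0 : old) + e.getValue());
        }
    }
    // sort, smooth and print exactly once, after all lines have been read
    LinkedHashMap<String, Integer> sortedListJap = new LinkedHashMap<String, Integer>(sort(hashMapJap));
    LinkedHashMap<String, Integer> smoothedListJap =
            new LinkedHashMap<String, Integer>(smooth(sortedListJap, sortedListJap.size()));
    System.out.println(smoothedListJap);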


  • How to use a 3D map ActionScript class in an MXML file to display a map

    - by nemade-vipin
    hello friends, I have created the application in which I have to use 3D map Action Script class in mxml file to display a map in form. that is in tab navigator last tab. My ActionScript 3D map class is(FlyingDirections):- package src.SBTSCoreObject { import src.SBTSCoreObject.JSONDecoder; import com.google.maps.InfoWindowOptions; import com.google.maps.LatLng; import com.google.maps.LatLngBounds; import com.google.maps.Map3D; import com.google.maps.MapEvent; import com.google.maps.MapOptions; import com.google.maps.MapType; import com.google.maps.MapUtil; import com.google.maps.View; import com.google.maps.controls.NavigationControl; import com.google.maps.geom.Attitude; import com.google.maps.interfaces.IPolyline; import com.google.maps.overlays.Marker; import com.google.maps.overlays.MarkerOptions; import com.google.maps.services.Directions; import com.google.maps.services.DirectionsEvent; import com.google.maps.services.Route; import flash.display.Bitmap; import flash.display.DisplayObject; import flash.display.DisplayObjectContainer; import flash.display.Loader; import flash.display.LoaderInfo; import flash.display.Sprite; import flash.events.Event; import flash.events.IOErrorEvent; import flash.events.MouseEvent; import flash.events.TimerEvent; import flash.filters.DropShadowFilter; import flash.geom.Point; import flash.net.URLLoader; import flash.net.URLRequest; import flash.net.navigateToURL; import flash.text.TextField; import flash.text.TextFieldAutoSize; import flash.text.TextFormat; import flash.utils.Timer; import flash.utils.getTimer; public class FlyingDirections extends Map3D { /** * Panoramio home page. */ private static const PANORAMIO_HOME:String = "http://www.panoramio.com/"; /** * The icon for the car. */ [Embed("assets/car-icon-24px.png")] private static const Car:Class; /** * The Panoramio icon. */ [Embed("assets/iw_panoramio.png")] private static const PanoramioIcon:Class; /** * We animate a zoom in to the start the route before the car starts * to move. This constant sets the time in seconds over which this * zoom occurs. */ private static const LEAD_IN_DURATION:Number = 3; /** * Duration of the trip in seconds. */ private static const TRIP_DURATION:Number = 40; /** * Constants that define the geometry of the Panoramio image markers. */ private static const BORDER_T:Number = 3; private static const BORDER_L:Number = 10; private static const BORDER_R:Number = 10; private static const BORDER_B:Number = 3; private static const GAP_T:Number = 2; private static const GAP_B:Number = 1; private static const IMAGE_SCALE:Number = 1; /** * Trajectory that the camera follows over time. Each element is an object * containing properties used to generate parameter values for flyTo(..). * fraction = 0 corresponds to the start of the trip; fraction = 1 * correspondsto the end of the trip. */ private var FLY_TRAJECTORY:Array = [ { fraction: 0, zoom: 6, attitude: new Attitude(0, 0, 0) }, { fraction: 0.2, zoom: 8.5, attitude: new Attitude(30, 30, 0) }, { fraction: 0.5, zoom: 9, attitude: new Attitude(30, 40, 0) }, { fraction: 1, zoom: 8, attitude: new Attitude(50, 50, 0) }, { fraction: 1.1, zoom: 8, attitude: new Attitude(130, 50, 0) }, { fraction: 1.2, zoom: 8, attitude: new Attitude(220, 50, 0) }, ]; /** * Number of panaramio photos for which we load data. We&apos;ll select a * subset of these approximately evenly spaced along the route. */ private static const NUM_GEOTAGGED_PHOTOS:int = 50; /** * Number of panaramio photos that we actually show. 
*/ private static const NUM_SHOWN_PHOTOS:int = 7; /** * Scaling between real trip time and animation time. */ private static const SCALE_TIME:Number = 0.001; /** * getTimer() value at the instant that we start the trip. If this is 0 then * we have not yet started the car moving. */ private var startTimer:int = 0; /** * The current route. */ private var route:Route; /** * The polyline for the route. */ private var polyline:IPolyline; /** * The car marker. */ private var marker:Marker; /** * The cumulative duration in seconds over each step in the route. * cumulativeStepDuration[0] is 0; cumulativeStepDuration[1] adds the * duration of step 0; cumulativeStepDuration[2] adds the duration * of step 1; etc. */ private var cumulativeStepDuration:/*Number*/Array = []; /** * The cumulative distance in metres over each vertex in the route polyline. * cumulativeVertexDistance[0] is 0; cumulativeVertexDistance[1] adds the * distance to vertex 1; cumulativeVertexDistance[2] adds the distance to * vertex 2; etc. */ private var cumulativeVertexDistance:Array; /** * Array of photos loaded from Panoramio. This array has the same format as * the &apos;photos&apos; property within the JSON returned by the Panoramio API * (see http://www.panoramio.com/api/), with additional properties added to * individual photo elements to hold the loader structures that fetch * the actual images. */ private var photos:Array = []; /** * Array of polyline vertices, where each element is in world coordinates. * Several computations can be faster if we can use world coordinates * instead of LatLng coordinates. */ private var worldPoly:/*Point*/Array; /** * Whether the start button has been pressed. */ private var startButtonPressed:Boolean = false; /** * Saved event from onDirectionsSuccess call. */ private var directionsSuccessEvent:DirectionsEvent = null; /** * Start button. */ private var startButton:Sprite; /** * Alpha value used for the Panoramio image markers. */ private var markerAlpha:Number = 0; /** * Index of the current driving direction step. Used to update the * info window content each time we progress to a new step. */ private var currentStepIndex:int = -1; /** * The fly directions map constructor. * * @constructor */ public function FlyingDirections() { key="ABQIAAAA7QUChpcnvnmXxsjC7s1fCxQGj0PqsCtxKvarsoS-iqLdqZSKfxTd7Xf-2rEc_PC9o8IsJde80Wnj4g"; super(); addEventListener(MapEvent.MAP_PREINITIALIZE, onMapPreinitialize); addEventListener(MapEvent.MAP_READY, onMapReady); } /** * Handles map preintialize. Initializes the map center and zoom level. * * @param event The map event. */ private function onMapPreinitialize(event:MapEvent):void { setInitOptions(new MapOptions({ center: new LatLng(-26.1, 135.1), zoom: 4, viewMode: View.VIEWMODE_PERSPECTIVE, mapType:MapType.PHYSICAL_MAP_TYPE })); } /** * Handles map ready and looks up directions. * * @param event The map event. */ private function onMapReady(event:MapEvent):void { enableScrollWheelZoom(); enableContinuousZoom(); addControl(new NavigationControl()); // The driving animation will be updated on every frame. addEventListener(Event.ENTER_FRAME, enterFrame); addStartButton(); // We start the directions loading now, so that we&apos;re ready to go when // the user hits the start button. 
var directions:Directions = new Directions(); directions.addEventListener( DirectionsEvent.DIRECTIONS_SUCCESS, onDirectionsSuccess); directions.addEventListener( DirectionsEvent.DIRECTIONS_FAILURE, onDirectionsFailure); directions.load("48 Pirrama Rd, Pyrmont, NSW to Byron Bay, NSW"); } /** * Adds a big blue start button. */ private function addStartButton():void { startButton = new Sprite(); startButton.buttonMode = true; startButton.addEventListener(MouseEvent.CLICK, onStartClick); startButton.graphics.beginFill(0x1871ce); startButton.graphics.drawRoundRect(0, 0, 150, 100, 10, 10); startButton.graphics.endFill(); var startField:TextField = new TextField(); startField.autoSize = TextFieldAutoSize.LEFT; startField.defaultTextFormat = new TextFormat("_sans", 20, 0xffffff, true); startField.text = "Start!"; startButton.addChild(startField); startField.x = 0.5 * (startButton.width - startField.width); startField.y = 0.5 * (startButton.height - startField.height); startButton.filters = [ new DropShadowFilter() ]; var container:DisplayObjectContainer = getDisplayObject() as DisplayObjectContainer; container.addChild(startButton); startButton.x = 0.5 * (container.width - startButton.width); startButton.y = 0.5 * (container.height - startButton.height); var panoField:TextField = new TextField(); panoField.autoSize = TextFieldAutoSize.LEFT; panoField.defaultTextFormat = new TextFormat("_sans", 11, 0x000000, true); panoField.text = "Photos provided by Panoramio are under the copyright of their owners."; container.addChild(panoField); panoField.x = container.width - panoField.width - 5; panoField.y = 5; } /** * Handles directions success. Starts flying the route if everything * is ready. * * @param event The directions event. */ private function onDirectionsSuccess(event:DirectionsEvent):void { directionsSuccessEvent = event; flyRouteIfReady(); } /** * Handles click on the start button. Starts flying the route if everything * is ready. */ private function onStartClick(event:MouseEvent):void { startButton.removeEventListener(MouseEvent.CLICK, onStartClick); var container:DisplayObjectContainer = getDisplayObject() as DisplayObjectContainer; container.removeChild(startButton); startButtonPressed = true; flyRouteIfReady(); } /** * If we have loaded the directions and the start button has been pressed * start flying the directions route. */ private function flyRouteIfReady():void { if (!directionsSuccessEvent || !startButtonPressed) { return; } var directions:Directions = directionsSuccessEvent.directions; // Extract the route. route = directions.getRoute(0); // Draws the polyline showing the route. polyline = directions.createPolyline(); addOverlay(directions.createPolyline()); // Creates a car marker that is moved along the route. var car:DisplayObject = new Car(); marker = new Marker(route.startGeocode.point, new MarkerOptions({ icon: car, iconOffset: new Point(-car.width / 2, -car.height) })); addOverlay(marker); transformPolyToWorld(); createCumulativeArrays(); // Load Panoramio data for the region covered by the route. loadPanoramioData(directions.bounds); var duration:Number = route.duration; // Start a timer that will trigger the car moving after the lead in time. var leadInTimer:Timer = new Timer(LEAD_IN_DURATION * 1000, 1); leadInTimer.addEventListener(TimerEvent.TIMER, onLeadInDone); leadInTimer.start(); var flyTime:Number = -LEAD_IN_DURATION; // Set up the camera flight trajectory. 
for each (var flyStep:Object in FLY_TRAJECTORY) { var time:Number = flyStep.fraction * duration; var center:LatLng = latLngAt(time); var scaledTime:Number = time * SCALE_TIME; var zoom:Number = flyStep.zoom; var attitude:Attitude = flyStep.attitude; var elapsed:Number = scaledTime - flyTime; flyTime = scaledTime; flyTo(center, zoom, attitude, elapsed); } } /** * Loads Panoramio data for the route bounds. We load data about more photos * than we need, then select a subset lying along the route. * @param bounds Bounds within which to fetch images. */ private function loadPanoramioData(bounds:LatLngBounds):void { var params:Object = { order: "popularity", set: "full", from: "0", to: NUM_GEOTAGGED_PHOTOS.toString(10), size: "small", minx: bounds.getWest(), miny: bounds.getSouth(), maxx: bounds.getEast(), maxy: bounds.getNorth() }; var loader:URLLoader = new URLLoader(); var request:URLRequest = new URLRequest( "http://www.panoramio.com/map/get_panoramas.php?" + paramsToString(params)); loader.addEventListener(Event.COMPLETE, onPanoramioDataLoaded); loader.addEventListener(IOErrorEvent.IO_ERROR, onPanoramioDataFailed); loader.load(request); } /** * Transforms the route polyline to world coordinates. */ private function transformPolyToWorld():void { var numVertices:int = polyline.getVertexCount(); worldPoly = new Array(numVertices); for (var i:int = 0; i < numVertices; ++i) { var vertex:LatLng = polyline.getVertex(i); worldPoly[i] = fromLatLngToPoint(vertex, 0); } } /** * Returns the time at which the route approaches closest to the * given point. * @param world Point in world coordinates. * @return Route time at which the closest approach occurs. */ private function getTimeOfClosestApproach(world:Point):Number { var minDistSqr:Number = Number.MAX_VALUE; var numVertices:int = worldPoly.length; var x:Number = world.x; var y:Number = world.y; var minVertex:int = 0; for (var i:int = 0; i < numVertices; ++i) { var dx:Number = worldPoly[i].x - x; var dy:Number = worldPoly[i].y - y; var distSqr:Number = dx * dx + dy * dy; if (distSqr < minDistSqr) { minDistSqr = distSqr; minVertex = i; } } return cumulativeVertexDistance[minVertex]; } /** * Returns the array index of the first element that compares greater than * the given value. * @param ordered Ordered array of elements. * @param value Value to use for comparison. * @return Array index of the first element that compares greater than * the given value. */ private function upperBound(ordered:Array, value:Number, first:int=0, last:int=-1):int { if (last < 0) { last = ordered.length; } var count:int = last - first; var index:int; while (count > 0) { var step:int = count >> 1; index = first + step; if (value >= ordered[index]) { first = index + 1; count -= step - 1; } else { count = step; } } return first; } /** * Selects up to a given number of photos approximately evenly spaced along * the route. * @param ordered Array of photos, each of which is an object with * a property &apos;closestTime&apos;. * @param number Number of photos to select. 
*/ private function selectEvenlySpacedPhotos(ordered:Array, number:int):Array { var start:Number = cumulativeVertexDistance[0]; var end:Number = cumulativeVertexDistance[cumulativeVertexDistance.length - 2]; var closestTimes:Array = []; for each (var photo:Object in ordered) { closestTimes.push(photo.closestTime); } var selectedPhotos:Array = []; for (var i:int = 0; i < number; ++i) { var idealTime:Number = start + ((end - start) * (i + 0.5) / number); var index:int = upperBound(closestTimes, idealTime); if (index < 1) { index = 0; } else if (index >= ordered.length) { index = ordered.length - 1; } else { var errorToPrev:Number = Math.abs(idealTime - closestTimes[index - 1]); var errorToNext:Number = Math.abs(idealTime - closestTimes[index]); if (errorToPrev < errorToNext) { --index; } } selectedPhotos.push(ordered[index]); } return selectedPhotos; } /** * Handles completion of loading the Panoramio index data. Selects from the * returned photo indices a subset of those that lie along the route and * initiates load of each of these. * @param event Load completion event. */ private function onPanoramioDataLoaded(event:Event):void { var loader:URLLoader = event.target as URLLoader; var decoder:JSONDecoder = new JSONDecoder(loader.data as String); var allPhotos:Array = decoder.getValue().photos; for each (var photo:Object in allPhotos) { var latLng:LatLng = new LatLng(photo.latitude, photo.longitude); photo.closestTime = getTimeOfClosestApproach(fromLatLngToPoint(latLng, 0)); } allPhotos.sortOn("closestTime", Array.NUMERIC); photos = selectEvenlySpacedPhotos(allPhotos, NUM_SHOWN_PHOTOS); for each (photo in photos) { var photoLoader:Loader = new Loader(); // The images aren&apos;t on panoramio.com: we can&apos;t acquire pixel access // using "new LoaderContext(true)". photoLoader.load( new URLRequest(photo.photo_file_url)); photo.loader = photoLoader; // Save the loader info: we use this to find the original element when // the load completes. photo.loaderInfo = photoLoader.contentLoaderInfo; photoLoader.contentLoaderInfo.addEventListener( Event.COMPLETE, onPhotoLoaded); } } /** * Creates a MouseEvent listener function that will navigate to the given * URL in a new window. * @param url URL to which to navigate. */ private function createOnClickUrlOpener(url:String):Function { return function(event:MouseEvent):void { navigateToURL(new URLRequest(url)); }; } /** * Handles completion of loading an individual Panoramio image. * Adds a custom marker that displays the image. Initially this is made * invisible so that it can be faded in as needed. * @param event Load completion event. */ private function onPhotoLoaded(event:Event):void { var loaderInfo:LoaderInfo = event.target as LoaderInfo; // We need to find which photo element this image corresponds to. for each (var photo:Object in photos) { if (loaderInfo == photo.loaderInfo) { var imageMarker:Sprite = createImageMarker(photo.loader, photo.owner_name, photo.owner_url); var options:MarkerOptions = new MarkerOptions({ icon: imageMarker, hasShadow: true, iconAlignment: MarkerOptions.ALIGN_BOTTOM | MarkerOptions.ALIGN_LEFT }); var latLng:LatLng = new LatLng(photo.latitude, photo.longitude); var marker:Marker = new Marker(latLng, options); photo.marker = marker; addOverlay(marker); // A hack: we add the actual image after the overlay has been added, // which creates the shadow, so that the shadow is valid even if we // don&apos;t have security privileges to generate the shadow from the // image. 
marker.foreground.visible = false; marker.shadow.alpha = 0; var imageHolder:Sprite = new Sprite(); imageHolder.addChild(photo.loader); imageHolder.buttonMode = true; imageHolder.addEventListener( MouseEvent.CLICK, createOnClickUrlOpener(photo.photo_url)); imageMarker.addChild(imageHolder); return; } } trace("An image was loaded which could not be found in the photo array."); } /** * Creates a custom marker showing an image. */ private function createImageMarker(child:DisplayObject, ownerName:String, ownerUrl:String):Sprite { var content:Sprite = new Sprite(); var panoramioIcon:Bitmap = new PanoramioIcon(); var iconHolder:Sprite = new Sprite(); iconHolder.addChild(panoramioIcon); iconHolder.buttonMode = true; iconHolder.addEventListener(MouseEvent.CLICK, onPanoramioIconClick); panoramioIcon.x = BORDER_L; panoramioIcon.y = BORDER_T; content.addChild(iconHolder); // NOTE: we add the image as a child only after we&apos;ve added the marker // to the map. Currently the API requires this if it&apos;s to generate the // shadow for unprivileged content. // Shrink the image, so that it doesn&apos;t obcure too much screen space. // Ideally, we&apos;d subsample, but we don&apos;t have pixel level access. child.scaleX = IMAGE_SCALE; child.scaleY = IMAGE_SCALE; var imageW:Number = child.width; var imageH:Number = child.height; child.x = BORDER_L + 30; child.y = BORDER_T + iconHolder.height + GAP_T; var authorField:TextField = new TextField(); authorField.autoSize = TextFieldAutoSize.LEFT; authorField.defaultTextFormat = new TextFormat("_sans", 12); authorField.text = "author:"; content.addChild(authorField); authorField.x = BORDER_L; authorField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B; var ownerField:TextField = new TextField(); ownerField.autoSize = TextFieldAutoSize.LEFT; var textFormat:TextFormat = new TextFormat("_sans", 14, 0x0e5f9a); ownerField.defaultTextFormat = textFormat; ownerField.htmlText = "<a href=\"" + ownerUrl + "\" target=\"_blank\">" + ownerName + "</a>"; content.addChild(ownerField); ownerField.x = BORDER_L + authorField.width; ownerField.y = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B; var totalW:Number = BORDER_L + Math.max(imageW, ownerField.width + authorField.width) + BORDER_R; var totalH:Number = BORDER_T + iconHolder.height + GAP_T + imageH + GAP_B + ownerField.height + BORDER_B; content.graphics.beginFill(0xffffff); content.graphics.drawRoundRect(0, 0, totalW, totalH, 10, 10); content.graphics.endFill(); var marker:Sprite = new Sprite(); marker.addChild(content); content.x = 30; content.y = 0; marker.graphics.lineStyle(); marker.graphics.beginFill(0xff0000); marker.graphics.drawCircle(0, totalH + 30, 3); marker.graphics.endFill(); marker.graphics.lineStyle(2, 0xffffff); marker.graphics.moveTo(30 + 10, totalH - 10); marker.graphics.lineTo(0, totalH + 30); return marker; } /** * Handles click on the Panoramio icon. */ private function onPanoramioIconClick(event:MouseEvent):void { navigateToURL(new URLRequest(PANORAMIO_HOME)); } /** * Handles failure of a Panoramio image load. */ private function onPanoramioDataFailed(event:IOErrorEvent):void { trace("Load of image failed: " + event); } /** * Returns a string containing cgi query parameters. * @param Associative array mapping query parameter key to value. * @return String containing cgi query parameters. 
*/ private static function paramsToString(params:Object):String { var result:String = ""; var separator:String = ""; for (var key:String in params) { result += separator + encodeURIComponent(key) + "=" + encodeURIComponent(params[key]); separator = "&"; } return result; } /** * Called once the lead-in flight is done. Starts the car driving along * the route and starts a timer to begin fade in of the Panoramio * images in 1.5 seconds. */ private function onLeadInDone(event:Event):void { // Set startTimer non-zero so that the car starts to move. startTimer = getTimer(); // Start a timer that will fade in the Panoramio images. var fadeInTimer:Timer = new Timer(1500, 1); fadeInTimer.addEventListener(TimerEvent.TIMER, onFadeInTimer); fadeInTimer.start(); } /** * Handles the fade in timer&apos;s TIMER event. Sets markerAlpha above zero * which causes the frame enter handler to fade in the markers. */ private function onFadeInTimer(event:Event):void { markerAlpha = 0.01; } /** * The end time of the flight. */ private function get endTime():Number { if (!cumulativeStepDuration || cumulativeStepDuration.length == 0) { return startTimer; } return startTimer + cumulativeStepDuration[cumulativeStepDuration.length - 1]; } /** * Creates the cumulative arrays, cumulativeStepDuration and * cumulativeVertexDistance. */ private function createCumulativeArrays():void { cumulativeStepDuration = new Array(route.numSteps + 1); cumulativeVertexDistance = new Array(polyline.getVertexCount() + 1); var polylineTotal:Number = 0; var total:Number = 0; var numVertices:int = polyline.getVertexCount(); for (var stepIndex:int = 0; stepIndex < route.numSteps; ++stepIndex) { cumulativeStepDuration[stepIndex] = total; total += route.getStep(stepIndex).duration; var startVertex:int = stepIndex >= 0 ? route.getStep(stepIndex).polylineIndex : 0; var endVertex:int = stepIndex < (route.numSteps - 1) ? route.getStep(stepIndex + 1).polylineIndex : numVertices; var duration:Number = route.getStep(stepIndex).duration; var stepVertices:int = endVertex - startVertex; var latLng:LatLng = polyline.getVertex(startVertex); for (var vertex:int = startVertex; vertex < endVertex; ++vertex) { cumulativeVertexDistance[vertex] = polylineTotal; if (vertex < numVertices - 1) { var nextLatLng:LatLng = polyline.getVertex(vertex + 1); polylineTotal += nextLatLng.distanceFrom(latLng); } latLng = nextLatLng; } } cumulativeStepDuration[stepIndex] = total; } /** * Opens the info window above the car icon that details the given * step of the driving directions. * @param stepIndex Index of the current step. */ private function openInfoForStep(stepIndex:int):void { // Sets the content of the info window. var content:String; if (stepIndex >= route.numSteps) { content = "<b>" + route.endGeocode.address + "</b>" + "<br /><br />" + route.summaryHtml; } else { content = "<b>" + stepIndex + ".</b> " + route.getStep(stepIndex).descriptionHtml; } marker.openInfoWindow(new InfoWindowOptions({ contentHTML: content })); } /** * Displays the driving directions step appropriate for the given time. * Opens the info window showing the step instructions each time we * progress to a new step. * @param time Time for which to display the step. 
*/ private function displayStepAt(time:Number):void { var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1; var minStepIndex:int = 0; var maxStepIndex:int = route.numSteps - 1; if (stepIndex >= 0 && stepIndex <= maxStepIndex && currentStepIndex != stepIndex) { openInfoForStep(stepIndex); currentStepIndex = stepIndex; } } /** * Returns the LatLng at which the car should be positioned at the given * time. * @param time Time for which LatLng should be found. * @return LatLng. */ private function latLngAt(time:Number):LatLng { var stepIndex:int = upperBound(cumulativeStepDuration, time) - 1; var minStepIndex:int = 0; var maxStepIndex:int = route.numSteps - 1; if (stepIndex < minStepIndex) { return route.startGeocode.point; } else if (stepIndex > maxStepIndex) { return route.endGeocode.point; } var stepStart:Number = cumulativeStepDuration[stepIndex]; var stepEnd:Number = cumulativeStepDuration[stepIndex + 1]; var stepFraction:Number = (time - stepStart) / (stepEnd - stepStart); var startVertex:int = route.getStep(stepIndex).polylineIndex; var endVertex:int = (stepIndex + 1) < route.numSteps ? route.getStep(stepIndex + 1).polylineIndex : polyline.getVertexCount(); var stepVertices:int = endVertex - startVertex; var stepLeng

    Read the article

  • Problems installing Memcache (PECL extension)

    - by Petrus
    I have installed memcached fine, and now I will need to install PECL extension memcache. Im running RedHat x86_64 es5. The installation gives me this: downloading memcache-2.2.6.tgz ... Starting to download memcache-2.2.6.tgz (35,957 bytes) ..........done: 35,957 bytes 11 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 Enable memcache session handler support? [yes] : Notice: Use of undefined constant STDIN - assumed 'STDIN' in PEAR/Frontend/CLI.php on line 304 Warning: fgets() expects parameter 1 to be resource, string given in PEAR/Frontend/CLI.php on line 304 Warning: fgets() expects parameter 1 to be resource, string given in /usr/lib/php/PEAR/Frontend/CLI.php on line 304 building in /root/tmp/pear-build-root/memcache-2.2.6 running: /root/tmp/pear/memcache/configure --enable-memcache-session=yes checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... invalid configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking whether to enable memcache support... yes, shared checking whether to enable memcache session handler support... yes checking for the location of ZLIB... no checking for the location of zlib... /usr checking for session includes... /usr/include/php checking for memcache session support... enabled checking for ld used by cc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... yes checking how to recognize dependent libraries... pass_all checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking the maximum length of command line arguments... 
98304 checking command to parse /usr/bin/nm -B output from cc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if cc supports -fno-rtti -fno-exceptions... no checking for cc option to produce PIC... -fPIC checking if cc PIC flag -fPIC works... yes checking if cc static flag -static works... yes checking if cc supports -c -o file.o... yes checking whether the cc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no creating libtool appending configuration tag "CXX" to libtool configure: creating ./config.status config.status: creating config.h running: make /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache.c -o memcache.lo mkdir .libs cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache.c -fPIC -DPIC -o .libs/memcache.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_queue.c -o memcache_queue.lo cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_queue.c -fPIC -DPIC -o .libs/memcache_queue.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_standard_hash.c -o memcache_standard_hash.lo cc -I/usr/include/php -I. 
-I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_standard_hash.c -fPIC -DPIC -o .libs/memcache_standard_hash.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_consistent_hash.c -o memcache_consistent_hash.lo cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_consistent_hash.c -fPIC -DPIC -o .libs/memcache_consistent_hash.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=compile cc -I/usr/include/php -I. -I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_session.c -o memcache_session.lo cc -I/usr/include/php -I. 
-I/root/tmp/pear/memcache -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/tmp/pear/memcache/memcache_session.c -fPIC -DPIC -o .libs/memcache_session.o /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=link cc -DPHP_ATOM_INC -I/root/tmp/pear-build-root/memcache-2.2.6/include -I/root/tmp/pear-build-root/memcache-2.2.6/main -I/root/tmp/pear/memcache -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -o memcache.la -export-dynamic -avoid-version -prefer-pic -module -rpath /root/tmp/pear-build-root/memcache-2.2.6/modules memcache.lo memcache_queue.lo memcache_standard_hash.lo memcache_consistent_hash.lo memcache_session.lo cc -shared .libs/memcache.o .libs/memcache_queue.o .libs/memcache_standard_hash.o .libs/memcache_consistent_hash.o .libs/memcache_session.o -Wl,-soname -Wl,memcache.so -o .libs/memcache.so creating memcache.la (cd .libs && rm -f memcache.la && ln -s ../memcache.la memcache.la) /bin/sh /root/tmp/pear-build-root/memcache-2.2.6/libtool --mode=install cp ./memcache.la /root/tmp/pear-build-root/memcache-2.2.6/modules cp ./.libs/memcache.so /root/tmp/pear-build-root/memcache-2.2.6/modules/memcache.so cp ./.libs/memcache.lai /root/tmp/pear-build-root/memcache-2.2.6/modules/memcache.la PATH="$PATH:/sbin" ldconfig -n /root/tmp/pear-build-root/memcache-2.2.6/modules ---------------------------------------------------------------------- Libraries have been installed in: /root/tmp/pear-build-root/memcache-2.2.6/modules If you ever happen to want to link against installed libraries in a given directory, LIBDIR, you must either use libtool, and specify the full pathname of the library, or use the `-LLIBDIR' flag during linking and do at least one of the following: - add LIBDIR to the `LD_LIBRARY_PATH' environment variable during execution - add LIBDIR to the `LD_RUN_PATH' environment variable during linking - use the `-Wl,--rpath -Wl,LIBDIR' linker flag - have your system administrator add LIBDIR to `/etc/ld.so.conf' See any operating system documentation about shared libraries for more information, such as the ld(1) and ld.so(8) manual pages. ---------------------------------------------------------------------- Build complete. Don't forget to run 'make test'. 
running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-memcache-2.2.6" install Installing shared extensions: /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626/ running: find "/root/tmp/pear-build-root/install-memcache-2.2.6" | xargs ls -dils 361232 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6 361263 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr 361264 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib 361265 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php 361266 4 drwxr-xr-x 3 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions 361267 4 drwxr-xr-x 2 root root 4096 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626 361262 236 -rwxr-xr-x 1 root root 235575 Jan 28 10:47 /root/tmp/pear-build-root/install-memcache-2.2.6/usr/lib/php/extensions/no-debug-non-zts-20090626/memcache.so Build process completed successfully Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/memcache.so' install ok: channel://pecl.php.net/memcache-2.2.6 Extension memcache enabled in php.ini The memcache.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 I tried as well to install this extension "memcached 1.0.2 (PHP extension for interfacing with memcached via libmemcached library)" but it failed: downloading memcached-1.0.2.tgz ... Starting to download memcached-1.0.2.tgz (22,724 bytes) ........done: 22,724 bytes 4 source files, building running: phpize Configuring for: PHP Api Version: 20090626 Zend Module Api No: 20090626 Zend Extension Api No: 220090626 building in /root/tmp/pear-build-root/memcached-1.0.2 running: /root/tmp/pear/memcached/configure checking for egrep... grep -E checking for a sed that does not truncate output... /bin/sed checking for cc... cc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether cc accepts -g... yes checking for cc option to accept ANSI C... none needed checking how to run the C preprocessor... cc -E checking for icc... no checking for suncc... no checking whether cc understands -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib checking for PHP extension directory... /usr/lib/php/extensions/no-debug-non-zts-20090626 checking for PHP installed headers prefix... /usr/include/php checking if debug is enabled... no checking if zts is enabled... no checking for re2c... re2c checking for re2c version... invalid configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP parsers. checking for gawk... gawk checking whether to enable memcached support... 
yes, shared checking for libmemcached... yes, shared checking whether to enable memcached session handler support... yes checking whether to enable memcached igbinary serializer support... no checking for ZLIB... yes, shared checking for zlib location... /usr checking for session includes... /usr/include/php checking for memcached session support... enabled checking for memcached igbinary support... disabled checking for libmemcached location... configure: error: memcached support requires libmemcached. Use --with-libmemcached-dir= to specify the prefix where libmemcached headers and library are located ERROR: `/root/tmp/pear/memcached/configure' failed The memcached.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 Is there a kind soul out there that can solve this puzzle?

    Read the article

  • Removing the XML Formatter from ASP.NET Web API Applications

    - by Rick Strahl
    ASP.NET Web API's default output format is supposed to be JSON, but when I access my Web APIs using the browser address bar I'm always seeing an XML result instead. When working on AJAX applications I like to test many of my AJAX APIs with the browser while working on them. While I can't debug all requests this way, GET requests are easy to test in the browser, especially if you have JSON viewing options set up in your various browsers. If I preview a Web API request in most browsers I get an XML response like this: Why is that? Web API checks the HTTP Accept headers of a request to determine what type of output it should return by looking for content types that it has formatters registered for. This automatic negotiation is one of the great features of Web API because it makes it easy and transparent to request different kinds of output from the server. In the case of browsers it turns out that most send Accept headers that look like this (Chrome in this case): Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Web API inspects the entire list of headers from left to right (plus the quality/priority flag q=) and tries to find a media type that matches its list of supported media types in the list of formatters registered. In this case it matches application/xml to the Xml formatter and so that's what gets returned and displayed. To verify that Web API indeed defaults to JSON output you can open the request in Fiddler and pop it into the Request Composer, remove the application/xml header and see that the output returned comes back in JSON instead. An Accept header like this: Accept: text/html,application/xhtml+xml,*/*;q=0.9 or leaving the Accept header out altogether should give you a JSON response. Interestingly enough Internet Explorer 9 also displays JSON because it doesn't include an application/xml Accept header: Accept: text/html, application/xhtml+xml, */* which for once actually seems more sensible. Removing the XML Formatter We can't easily change the browser Accept headers (actually you can by delving into the config but it's a bit of a hassle), so can we change the behavior on the server? When working on AJAX applications I tend not to be interested in XML results and I always want to see JSON results at least during development. Web API uses a collection of formatters and you can go through this list and remove the ones you don't want to use - in this case the XmlMediaTypeFormatter. To do this you can work with the HttpConfiguration object and the static GlobalConfiguration object used to configure it: protected void Application_Start(object sender, EventArgs e) { // Action based routing (used for RPC calls) RouteTable.Routes.MapHttpRoute( name: "StockApi", routeTemplate: "stocks/{action}/{symbol}", defaults: new { symbol = RouteParameter.Optional, controller = "StockApi" } ); // WebApi Configuration to hook up formatters and message handlers RegisterApis(GlobalConfiguration.Configuration); } public static void RegisterApis(HttpConfiguration config) { // remove default Xml handler var matches = config.Formatters .Where(f => f.SupportedMediaTypes .Where(m => m.MediaType.ToString() == "application/xml" || m.MediaType.ToString() == "text/xml") .Count() > 0) .ToList(); foreach (var match in matches) config.Formatters.Remove(match); } That LINQ code is quite a mouthful of nested collections, but it does the trick to remove the formatter based on the content type. 
    You can also look for the specific formatter (XmlMediaTypeFormatter) by its type name, which is simpler, but it's better to search for the supported types as this will work even if other custom formatters have been added. Once removed, the browser request now results in a JSON response: It's a simple solution to a small debugging task that's made my life easier. Maybe you find it useful too… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api, ASP.NET.
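    The type-based removal mentioned above can be sketched as follows. This is not the post's own code: it relies on the Formatters.XmlFormatter property exposed by ASP.NET Web API's HttpConfiguration, and the class and method names are placeholders.

        using System.Net.Http.Formatting;
        using System.Web.Http;

        public static class WebApiFormatterConfig
        {
            // Remove the built-in XML formatter so JSON comes back even when the
            // browser's Accept header advertises application/xml.
            public static void RemoveXmlFormatter(HttpConfiguration config)
            {
                XmlMediaTypeFormatter xmlFormatter = config.Formatters.XmlFormatter;
                if (xmlFormatter != null)
                {
                    config.Formatters.Remove(xmlFormatter);
                }
            }
        }

    Calling RemoveXmlFormatter(GlobalConfiguration.Configuration) from Application_Start would have the same effect as the LINQ-based removal above, at the cost of not catching other custom XML formatters.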

    Read the article

  • JavaOne Tutorial Report - JavaFX 2 – A Java Developer’s Guide

    - by Janice J. Heiss
    Oracle Java Technology Evangelist Stephen Chin and Independent Consultant Peter Pilgrim presented a tutorial session intended to help developers get a handle on JavaFX 2. Stephen Chin, a Java Champion, is co-author of the Pro JavaFX Platform 2, while Java Champion Peter Pilgrim is an independent consultant who works out of London.NightHacking with Stephen ChinBefore discussing the tutorial, a note about Chin’s “NightHacking Tour,” wherein from 10/29/12 to 11/11/12, he will be traveling across Europe via motorcycle stopping at JUGs and interviewing Java developers and offering live video streaming of the journey. As he says, “Along the way, I will visit user groups, interviewing interesting folks, and hack on open source projects. The last stop will be the Devoxx conference in Belgium.”It’s a dirty job but someone’s got to do it. His trip will take him from the UK through the Netherlands, Germany, Switzerland, Italy, France, and finally to Devoxx in Belgium. He has interviews lined up with Ben Evans, Trisha Gee, Stephen Coulebourne, Martijn Verburg, Simon Ritter, Bert Ertman, Tony Epple, Adam Bien, Michael Hutterman, Sven Reimers, Andres Almiray, Gerrit Grunewald, Bertrand Boetzmann, Luc Duponcheel, Stephen Janssen, Cheryl Miller, and Andrew Phillips. If you expect to be in Chin’s vicinity at the end of October and in early November, by all means get in touch with him at his site and add your perspective. The more the merrier! Taking the JavaFX PlungeNow to the business at hand. The “JavaFX 2 – A Java Developer’s Guide” tutorial introduced Java developers to the JavaFX 2 platform from the perspective of seasoned Java developers. It demonstrated the breadth of the JavaFX APIs through examples that are built out in the course of the session in an effort to present the basic requirements in using JavaFX to build rich internet applications. Chin began with a quote from Oracle’s Christopher Oliver, the creator of F3, the original version of JavaFX, on the importance of GUIs:“At the end of the day, on the one hand we have computer systems, and on the other, people. Connecting them together, and allowing people to interact with computer systems in a compelling way, requires graphical user interfaces.”Chin explained that JavaFX is about producing an immersive application experience that involves cross-platform animation, video and charting. It can integrate Java, JavaScript and HTML in the same application. The new graphics stack takes advantage of hardware acceleration for 2D and 3D applications. In addition, we can integrate Swing applications using JFXPanel.He reminded attendees that they were building JavaFX apps using pure Java APIs that included builders for declarative construction; in addition, alternative languages can be used for simpler UI creation. In addition, developers can call upon alternative languages such as GroovyFX, ScalaFX and Visage, if they want simpler UI creation. He presented the fundamentals of JavaFX 2.0: properties, lists and binding and then explored primitive, object and FX list collection properties. Properties in JavaFX are observable, lazy and type safe. He then provided an example of property declaration in code.  Pilgrim and Chin explained the architectural structure of JavaFX 2 and its basic properties:JavaFX 2.0 properties – Primitive, Object, and FX List Collection properties. 
    Primitive properties, Object properties, and FX List Collection properties, which are observable, lazy, and type safe. Chin and Pilgrim then took attendees through several participatory demos and got deep into the weeds of the code for the two-hour session. At the end, everyone knew a lot more about the inner workings of JavaFX 2.0.

    Read the article

  • How to configure Visual Studio 2010 code coverage for ASP.NET MVC unit tests

    - by DigiMortal
    I just got Visual Studio 2010 code coverage to work with ASP.NET MVC application unit tests. Everything is simple after you have spent some time with forums, blogs and Google. To save your valuable time I wrote this posting to guide you through the process of making code coverage work with ASP.NET MVC application unit tests. After some fighting with Visual Studio I got everything to work as expected. I am still not very sure why users must deal with this mess, but okay – I survived it. Before you start configuring Visual Studio I expect your solution to meet the following needs: there is at least one library that will be tested, there is at least one library that contains the tests to be run, there are some classes and some tests for them, and, of course, you are using a version of Visual Studio 2010 that supports tests (I have Visual Studio 2010 Ultimate). Now open the following screenshot in a separate window and follow the steps given below. Visual Studio 2010 Test Settings window.  Double click on Local.testsettings under Solution Items. The Test Settings window will be opened. Select “Data and Diagnostics” from the left pane. Mark the checkboxes “ASP.NET Profiler” and “Code Coverage”. Move the cursor to the “Code Coverage” line and press the Configure button, or double click the line. The assemblies selection window will be opened. Mark the checkboxes next to the assemblies you want code coverage reports for and apply the settings. Save your project and close Visual Studio. Run Visual Studio as Administrator and run the tests. NB! Select Test => Run => Tests in Current Context from the menu. When the tests have run you can open the code coverage results by selecting Test => Windows => Code Coverage Results from the menu. Here you can see my example test results. Visual Studio 2010 Test Results window. All my tests passed this time. :)  And here are the code coverage results. Visual Studio 2010 Code Coverage Results. I need a lot more tests for sure.  As you can see everything was pretty simple. But it took me some time to figure out how to get everything to work as expected. Problems? You may face some problems when making code coverage work. Here is my short list of possible problems. Make sure you have all assemblies available for code coverage. In some cases more libraries need to be referenced than you currently have. For example, I had to add some more Enterprise Library assemblies to my project. You can use Event Viewer to discover errors that were raised during testing. Make sure you selected all testable assemblies in the Code Coverage settings as shown above. Otherwise you may get empty results. Tests with code coverage are slower because we need the ASP.NET profiler. If your machine slows down then try to free more resources.
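    For readers who do not yet have the tests mentioned in the prerequisites, below is a minimal MSTest sketch of the kind of ASP.NET MVC unit test the coverage run would exercise. HomeController and its Index action are hypothetical stand-ins for your own controller code.

        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Web.Mvc;

        [TestClass]
        public class HomeControllerTests
        {
            [TestMethod]
            public void Index_Returns_Default_View()
            {
                // Arrange: the controller under test (hypothetical).
                var controller = new HomeController();

                // Act: invoke the action directly.
                var result = controller.Index() as ViewResult;

                // Assert: a view is returned and the default view name is used.
                Assert.IsNotNull(result);
                Assert.AreEqual(string.Empty, result.ViewName);
            }
        }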

    Read the article

  • Ubuntu 12.04 patched b43 driver compilation error

    - by Zed
    I tried this How do I install this patched b43 driver? guide to install patched b43 driver on Ubuntu 12.04 with 3.2.0-31-generic kernel but I can't pass compilation phase.Here is what I did: wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.1/compat-wireless-3.1.1-1.tar.bz2 cd compat-wireless-3.1.1-1/ scripts/driver-select b43 make make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic' CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.29.h:5:0, from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.h:24, from <command-line>:0: include/linux/netdevice.h:1153:5: warning: "IS_ENABLED" is not defined [-Wundef] include/linux/netdevice.h:1153:15: error: missing binary operator before token "(" include/linux/netdevice.h: In function ‘netdev_uses_dsa_tags’: include/linux/netdevice.h:1421:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’ include/linux/netdevice.h:1422:31: error: ‘struct net_device’ has no member named ‘dsa_ptr’ include/linux/netdevice.h: In function ‘netdev_uses_trailer_tags’: include/linux/netdevice.h:1431:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’ include/linux/netdevice.h:1432:35: error: ‘struct net_device’ has no member named ‘dsa_ptr’ make[3]: *** [/home/marco/compat-wireless-3.1.1-1/compat/main.o] Error 1 make[2]: *** [/home/marco/compat-wireless-3.1.1-1/compat] Error 2 make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic' make: *** [modules] Error 2 To fix that error I added #include <linux/kconfig.h> to /usr/src/linux-headers-3.2.0-31-generic/include/linux/netdevice.h but now I'm getting something else make make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic' CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o LD [M] /home/marco/compat-wireless-3.1.1-1/compat/compat.o CC [M] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:9:0, from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8, from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8: /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h: In function ‘ssb_driver_register’: /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: error: ‘THIS_MODULE’ undeclared (first use in this function) /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: note: each undeclared identifier is reported only once for each function it appears in In file included from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8:0, from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8: /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h: In function ‘bcma_driver_register’: /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:170:37: error: ‘THIS_MODULE’ undeclared (first use in this function) /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c: At top level: /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:12:20: error: expected declaration specifiers or ‘...’ before string constant /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:13:16: error: expected declaration specifiers or ‘...’ before string constant 
/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: data definition has no type or storage class [enabled by default] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: parameter names (without types) in function declaration [enabled by default] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: data definition has no type or storage class [enabled by default] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: parameter names (without types) in function declaration [enabled by default] make[3]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o] Error 1 make[2]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma] Error 2 make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2 make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic' make: *** [modules] Error 2 Any suggestion what to try next?

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Installation

    - by StuartBrierley
    As previously detailed, I have completed a single server installation of BizTalk Server 2009 standard on my development laptop; a MacBook Pro Core2Duo running at 2.16GHz with 2GB of RAM.  Following this I also posted on my use of the BizTalk Server Best Practices Analyser and how to configure the BizTalk SQL Server Jobs.  All of which means that I should have some confidence that I have a decent working BizTalk Server 2009 environment. Next I thought that it would be a good idea to try and get some idea of how this setup performs by carrying out some baseline tests that can then be replicated on the test and live servers. The aim of this would be to allow confident predictions to be made of how any solutions developed on a single "server" installation may be expected to perform when deployed to these multi-server BizTalk Server 2009 standard installations. The BizTalk Benchmark Wizard would seem to be the perfect tool for the job. The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. This utility should be used after BizTalk Server has been installed and before any solutions are deployed to the environment.  This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment: "The executed scenarios may or may not be relative to any realistic scenario, and is only intended for testing. The BizTalk Benchmark Wizard has been developed in relation to the BizTalk Server 2009 Scale Out Testing Study. More information about the study can be found here: http://msdn.microsoft.com/en-us/library/ee377068(BTS.10).aspx" After downloading and installing the wizard you will need to set up the Hosts, Instances and Adapter handlers.  This is done by running a script file using "cscript" as detailed below.  To do this you will need to open a command prompt window and navigate to the script folder; assuming the default installation location this would be C:\Program Files\Blogical\BizTalk Benchmark Wizard\Artefacts\BizTalk. In this folder you should find an InstallHosts.vbs file which can be executed using the following parameters: NTGroupName - The name of the Windows NT group. UserName – The name of the user account running the service instances. Password – The password of the user account running the service instances. Receive Host – The name of the server where you want to run the receive host instance.  Send Host – The name of the server where you want to run the send host instance. Processing Host – The name of the server where you want to run the process host instance. 
By default the script is set up for 64 bit hosts, so if you are running in 32 bit environment make sure that you change the following line in the script before continuing: from:   objHS.IsHost32BitOnly = False to:    objHS.IsHost32BitOnly = True If you have a single box installation, your script command might look like this: cscript InstallHosts.vbs "BizTalk Application Users" “\MyUser” “MyPassword” “BtsServer1” “BtsServer1” “BtsServer1” If you have a multi server installation, your script command might look like this: cscript InstallHosts.vbs "MyDomain\BizTalk Application Users" “MyDomain\MyUser” “MyPassword” “BtsServer1” “BtsServer2” “BtsServer2” Running this script will create: Three hosts (BBW_RxHost, BBW_TxHost and BBW_PxHost) Three host instances One send and one receive adapter handler for the WCF NetTcp adapter. You will then need to import the BizTalk MSI via the BizTalk Administration Console.  Open the BizTalk Administration Console, point to the “Applications” node and import the BizTalk Benchmark Wizard.msi found in the same folder as the script above. This will create a “BizTalk Benchmark Wizard” application along with all ports and orchestrations needed. To finish the installation you will need to run the BizTalk Benchmark Wizard.msi on all BizTalk servers to add the assemblies to the Global Assembly Cache (GAC). Next I will look at running the BizTalk Benchmark Wizard.

    Read the article

  • Framework 4 Features: Support for Timed Jobs

    - by Anthony Shorten
    One of the new features of the Oracle Utilities Application Framework V4 is the ability for the batch framework to support Timed Batch. Traditionally batch is associated with set processing in the background in a fixed time frame. For example, billing customers. Over the last few versions there has been functionality required by the products that calls for a more monitoring-style batch process. The monitor is a batch process that looks for specific business events based upon record status or other pieces of data. For example, the framework contains a fact monitor (F1-FCTRN) that can be configured to look for specific statuses or other conditions. The batch process then uses the instructions on the object to determine what to do. To support monitor style processing, you need to run the process regularly a number of times a day (for example, every ten minutes). Traditional batch could support this but it was not as optimal as expected (if you are a site using the old Workflow subsystem, you understand what I mean). The batch framework was extended to add additional facilities to support timed batch (and continuous batch, which is another new feature for another blog entry). The new facilities include: The batch control now defines the job as Timed or Not Timed. Non-Timed batch jobs are traditional batch jobs. The timer interval (the interval between executions) can be specified. The timer can be made active or inactive. Only active timers are executed. Setting the Timer Active to inactive will stop the job at the next time interval. Setting the Timer Active to Active will start the execution of the timed job. You can specify the credentials, the language to view the messages in, and an email address to send a summary of the execution to. The email address is optional and requires an email server to be specified in the relevant feature configuration. You can specify the thread limits and commit intervals to be used for the multiple executions. Once a timer job is defined it will be executed automatically by the Business Application Server process if the DEFAULT threadpool is active. This threadpool can be started using the online batch daemon (for non-production) or externally using the threadpoolworker utility. At that time any batch process with the Timer Active set to Active and a Batch Control Type of Timed will begin executing. As Timed jobs are executed automatically, they do not appear in any external schedule and are not managed by an external scheduler (except via the DEFAULT threadpool itself, of course). Now, if the job has no work to do as the timer interval is being reached then that instance of the job is stopped and the next instance started at the timer interval. If there is still work to complete when the timer interval is reached, the instance will continue processing till the work is complete, then the instance will be stopped and the next instance scheduled for the next timer interval. One of the key ways of optimizing this processing is to set the timer interval correctly for the expected workload. This is an interesting new feature of the batch framework and we anticipate it will come in handy for specific business situations with the monitor processes.

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn’t yet cover the SQL Server 2014 features which would require us to make major changes to SQL Compare to support. In this post I’m going to talk about the SQL Server 2014 features we’ve already begun supporting, and which ones we’re working on for the next release of SQL Compare (v11). From SQL Compare’s perspective, the new memory-optimized table functionality (some might know it as ‘Hekaton’) has been the most important change. It can’t be described as its own object type, but the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it’s not possible to alter the memory optimized table’s structure, and so indexes can’t be added after the fact without dropping the table. Books Online  shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it’s necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release with full support later in the year, and in a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, which are being adapted to handle this new kind of table. Because it’s one of the largest new database engine features, there’s an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported and the new syntax found in the T-SQL reference pages. We are treating SQL Compare’s support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they’re essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and with the detailed limitations. They should be easier to handle than memory-optimized tables simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications which mean we’ll need to temporarily disable these during the specific deployment operations that involve them – don’t worry, we’ll supply a warning if this is the case so that you can check your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory optimized tables. 
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).

    Read the article

  • Tuple in C# 4.0

    - by Jalpesh P. Vadgama
    The C# 4.0 language includes a new feature called Tuple. A Tuple provides us a way of grouping elements of different data types. That enables us to use it in lots of practical situations; for example, we can store the coordinates of a graph. In C# 4.0 we can create a Tuple with the Create method. This Create method offers 8 overloads, as listed below, so you can group a maximum of 8 data types with a Tuple. Following are the overloads: Create(T1) – which represents a tuple of size 1. Create(T1,T2) – which represents a tuple of size 2. Create(T1,T2,T3) – which represents a tuple of size 3. Create(T1,T2,T3,T4) – which represents a tuple of size 4. Create(T1,T2,T3,T4,T5) – which represents a tuple of size 5. Create(T1,T2,T3,T4,T5,T6) – which represents a tuple of size 6. Create(T1,T2,T3,T4,T5,T6,T7) – which represents a tuple of size 7. Create(T1,T2,T3,T4,T5,T6,T7,T8) – which represents a tuple of size 8. Following is some example code for a tuple. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TupleExample { class Program { static void Main(string[] args) { var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama"); Console.WriteLine(tuple); var t = System.Tuple.Create<int, string>(1, "Jalpesh"); Console.WriteLine(t); } } } Following is the output of the above, as expected. You can also access the values inside a Tuple with the ItemN property, where N represents the position of the item in the tuple. Following is an example of it. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TupleExample { class Program { static void Main(string[] args) { var tuple = System.Tuple.Create<string, string, string>("Jalpesh", "P", "Vadgama"); Console.WriteLine(tuple.Item1); Console.WriteLine(tuple.Item2); Console.WriteLine(tuple.Item3); } } } Here you can see I have printed the items with Item1, Item2 and Item3. Following is the output of the above code.   We can even create a nested tuple; following is the code for a nested tuple. using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace TupleExample { class Program { static void Main(string[] args) { var tuple = System.Tuple.Create(1,"Jalpesh",new Tuple<string,string>("P","Vadgama")); Console.WriteLine(tuple.Item1); Console.WriteLine(tuple.Item2); Console.WriteLine(tuple.Item3); } } } Following is the output of the above code, as expected. As you can see, the possibilities are endless; we can do lots of things with a Tuple. Hope you liked it. Stay tuned for more. Till then, Happy Programming!!
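    One detail worth adding, not covered in the post but part of the documented Tuple API, is what happens at the 8-element limit: the eighth value is wrapped in a nested tuple and is reached through the Rest property rather than an Item8 property. A small sketch:

        using System;

        namespace TupleExample
        {
            class RestExample
            {
                static void Main(string[] args)
                {
                    // Tuple.Create with 8 arguments returns
                    // Tuple<T1,...,T7,Tuple<T8>>, so the last value sits in Rest.
                    var tuple = Tuple.Create(1, 2, 3, 4, 5, 6, 7, 8);
                    Console.WriteLine(tuple.Item7);      // 7
                    Console.WriteLine(tuple.Rest.Item1); // 8, via the nested tuple
                }
            }
        }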

    Read the article

  • A SharePoint Developer’s Toolchest

    - by Sahil Malik
    When we develop for SharePoint, we end up using many tools, third party or Microsoft, to facilitate our development. What are some of your favorite tools? Mine are as below - 1. Reflector: When I saw Reflector, I was pretty convinced that a tool better and more useful than it doesn’t exist. Well, I was wrong! Redgate took over Reflector and they still offer it as a free version, but they have a paid version called Reflector Pro. It lets you debug third party source code, as if you had the source code. Brilliant! Who needs documentation anymore when you have real code? 2. ULS Viewer: It is no secret, reading ULS logs is a pain in the rear. Well, not so with ULS Viewer, which does work with SharePoint 2007 as well. But it’s just way cooler with SharePoint 2010. You know when you get an error in SharePoint 2010 it shows you an error like the one below: Well, the ULS Viewer will allow you to set filtering criteria, allowing you to immediately zero in on an error, across multiple WFEs even. Also there are numerous other facilities built into the tool, such as advanced filtering, critical error notifications, etc. A must have! You can read the documentation of the ULS Viewer here. 3. SPDisposeCheck: Did you know that the MySite object is strange? What is strange about it? That you have to dispose it even if you didn’t create it!? Well, who the hell remembers all that! Honestly I do! And you should too. But there is a tool to help you sanitize your code. And that is SPDisposeCheck. You run it against your DLL or EXE, and it will give you suggestions on where you might have missed calling dispose on an object (see the disposal sketch below). You still have to use your head, but having this tool helps. 4. DebugView: Debugging for SharePoint can be difficult sometimes. Sometimes your breakpoints don’t get hit. And while you can try and make them hit, it is sometimes easier to just write a bunch of Debug.WriteLines, and catch them from an external application such as DebugView. You simply use your code, and DebugView will catch all the Debug.WriteLines in your code like this - 5. BGInfo: One annoying thing about SharePoint projects, it causes the number of servers to multiply like bunnies. As I’m RDP’ing into many computers trying to diagnose a crazy issue, sometimes it becomes hard to remember which machine is which. BGInfo puts all that on the wallpaper, along with a bunch of other useful info. A bit like this - 6. WSPBuilder: SharePoint 2007 only, but I think there may be a version for SP2010 coming later. I think the VS2010 tools for SP2010 development are quite nice, so WSPBuilder, well so far I don’t miss it. But let’s see what WSPBuilder for 2010 brings – I haven’t seen it yet. However, I want to confidently assert that WSPBuilder for SP2007 is simply awesome. 7. SharePoint Manager: The SharePoint Manager 2010 is a SharePoint object model explorer. It enables you to browse every site on the local farm and view every property. It also enables you to change the properties. The VS2010 dev tools now include a server explorer, which shows you a subset of properties in read-only mode. I would LOVE to see SharePoint Manager-like functionality built into VS2010. SharePoint Manager, a total must-have.
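    As a companion to the SPDisposeCheck entry above, here is a minimal sketch of the disposal pattern the tool checks for. It is my own illustration rather than anything from the original post; it assumes the standard Microsoft.SharePoint object model, and the site URL is hypothetical.
    using System;
    using Microsoft.SharePoint;

    class DisposeDemo
    {
        static void Run()
        {
            // Objects this code creates (SPSite, SPWeb) must be disposed.
            using (SPSite site = new SPSite("http://intranet.example.com"))
            using (SPWeb web = site.OpenWeb())
            {
                Console.WriteLine(web.Title);
            }   // both objects are disposed here because this code created them

            // Objects handed to you by the framework must NOT be disposed:
            // SPWeb contextWeb = SPContext.Current.Web;  // owned by SharePoint - never dispose
        }
    }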

    Read the article

  • Calling XAI Inbound Services from Oracle BI Publisher

    - by ACShorten
    Note: This technique requires Oracle BI Publisher 1.1.3.4.1 which supports Service Complex Types. Web Services require credentials for authentication. Note: The defaults for the product installation are used in this article. If your site uses alternative values then substitute those alternatives where applicable. Note: Examples shown in this article are for illustrative purposes only. When building a report in Oracle BI Publisher it may be necessary to call an XAI Inbound Service to get information via the object rather than directly calling the database tables, for various reasons: The CLOB fields used in the Object are accessible for a report. Note: CLOB fields cannot be used as criteria in the current release. Objects can take advantage of algorithms to format or calculate additional data that is not stored in the database directly. For example, Information format strings can be automatically generated by the object, which gives consistent information between a report and the online screens. To use this facility the following process must be performed: Ensure that the product group, cisusers by default, is enabled for the SPLServiceBean in the console. This allows BI Publisher access to call Web Services directly. To ensure this, follow the instructions below: Log on to the Oracle WebLogic server console using an appropriate administrator account. By default the user system or weblogic is provided for this purpose. Navigate to the Security Realms section and select your configured realm. This is set to myrealm by default. In the Roles and Policies section, expand the SPLService section of the Deployments option to reveal the SPLServiceBean roles. If there is no role associated with the SPLServiceBean, create a new EJB role and specify the cisusers role, by default. For example: Add a Role Condition to the role just created, with a Predicate List of Group, and specify cisusers as the Group Argument Name. For example: Save all your changes. The XAI Inbound Services to be used by BI Publisher must be defined prior to using the interface. Refer to the XAI Best Practices (Doc Id: 942074.1) from My Oracle Support or via the online help for more information about this process. Inside BI Publisher, create your report according to the BI Publisher documentation. When specifying the dataset, under the Data Model Report option, specify the following to use an XAI Inbound Service as a data source: Parameter Comment Type Web Service Complex Type true Username Any valid user name within the product. This user MUST have security access to the objects referenced in the XAI Inbound Service Password Authentication password for Username Timeout Timeout, in seconds, set for the Web Service call. For example 60 seconds. WSDL URL Use the WSDL URL on the XAI Inbound Service definition as your WSDL URL. It will be in the following format by default: http://<host>:<port>/<server>/XAIApp/xaiserver/<service>?WSDL where: <host> - Host Name of Web Application Server <port> - Port allocated to Web Application Server for product access <server> - Server context for server <service> - XAI Inbound Service Name Note: Customers using secure transmission should substitute https instead of http and use the HTTPS port allocated to the product at installation time. Web Service Select the name of the service that shows in the drop-down menu. If no service name shows up, it means that Publisher could not establish a connection with the server or WSDL name provided in the above URL in order to get the service name.
See the BI Publisher server log for more information. Method Select the name of the Method that shows in the drop-down menu. A method name should show in the Method drop-down menu once the Web Service name is selected. For example: Additionally, filters generated from the WSDL (required or optional) can be used from the Web Service in the Parameter List. For example:

    Read the article

  • Review: ComponentOne Studio for Entity Framework

    - by Tim Murphy
    While I have always been a fan of libraries that improve coding efficiency and reduce code redundancy, I have mostly been using ones that were in the public domain.  As part of the Geeks With Blogs Influencers program I got my hands on ComponentOne’s Studio for Entity Framework.  Below are my thoughts after working with the product for several weeks. My coding preference has always been maintainable code that is reusable across an enterprise’s portfolio.  Because of this, my focus in reviewing this product is less on the RAD components and more on its benefits for layered applications using code-first Entity Framework. Before we get into the pros and cons, here is a summary of the main features listed for SEF. Unified Data Context Virtual Data Access More Powerful Data Binding Pros The first thing that I found to my liking is the C1DataSource. It basically manages a cache for your Entity Model context.  Under RAD conditions this is set up automatically when you drop the object on your design surface.  If you are like me and want to abstract your data management into a library it takes a little more work, but it is still acceptable and gains the same benefits. The second feature that I found beneficial is the definition of views with improved sorting and filtering.  Again, the ease of use of these features is greater on the RAD side, but no capabilities are missing when manipulating objects in code. Linq has become my friend over the last couple of years and it was great to see that ComponentOne had ensured that it remained a first class citizen in their design.  When you look into this product yourself I would suggest taking a dive into LiveLinq, which allows the joining of different data source types. As I went through discovering the features of this framework I appreciated the number of examples that they supplied for different uses.  Besides showing how to use SEF with WinForms, WPF and Silverlight, they also showed how to accomplish tasks using RAD, code-only and MVVM approaches. Cons The only area where I would really like to see improvement is in the level of detail in their documentation.  Specifically, I would like to have seen some of the supporting code explained in the examples, such as what some supporting objects did, instead of having to go to the programmer’s reference. I did find some cases where existing projects had trouble determining the scope that the RAD controls were allowed, but I expect this is something that is in part end-user related. Summary Overall I found the Studio for Entity Framework capable and well thought out.  If you are already using the Entity Framework, this product will fit into your environment with little effort in return for greater flexibility and greater robustness in your solutions. Whether the $895 list price for a standard version works for you will depend on your return on investment. Smaller companies with only a small number of projects may not be able to stomach it, but you get a full-featured product that is supported by a well-established company.  The more projects and the more code you have, the greater your return on investment will be. Personally I intend to apply this product to some production systems and will probably have some tips and tricks in the future.

    Read the article

  • BizTalk 2009 - BizTalk Benchmark Wizard: Running a Test

    - by StuartBrierley
    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected.  It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment.  This will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment. For details on installing the Benchmark Wizard see my previous post. The BizTalk Benchmark Wizard provides two simple test scenarios, one for messaging and one for Orchestrations, which can be used to test your BizTalk implementation. Messaging Loadgen generates a new XML message and sends it over NetTCP A WCF-NetTCP Receive Location receives the XML document from Loadgen. The PassThruReceive pipeline performs no processing and the message is published by the EPM to the MessageBox. The WCF One-Way Send Port, which is the only subscriber to the message, retrieves the message from the MessageBox The PassThruTransmit pipeline provides no additional processing The message is delivered to the back end WCF service by the WCF NetTCP adapter Orchestrations Loadgen generates a new XML message and sends it over NetTCP A WCF-NetTCP Receive Location receives the XML document from Loadgen. The XMLReceive pipeline performs no processing and the message is published by the EPM to the MessageBox. The message is delivered to a simple Orchestration which consists of a receive location and a send port The WCF One-Way Send Port, which is the only subscriber to the Orchestration message, retrieves the message from the MessageBox The PassThruTransmit pipeline provides no additional processing The message is delivered to the back end WCF service by the WCF NetTCP adapter Below is a quick outline of how to run the BizTalk Benchmark Wizard on a single server, although it should be noted that this is not ideal as this server is then both generating and processing the load.  In order to separate this load out you should run the "Indigo" service on a separate server. To start the BizTalk Benchmark Wizard click Start > All Programs > BizTalk Benchmark Wizard > BizTalk Benchmark Wizard. On this screen click next, and you will then get the following pop up window. Check the server and database names and check the "check prerequisites" check-box before pressing OK.  The wizard will then check that the appropriate test scenarios are installed. You should then choose the test scenario that you wish to run (messaging or orchestration) and the architecture that most closely matches your environment. You will then be asked to confirm the host server for each of the host instances. Next you will be presented with the prepare screen.  You will need to start the Indigo service before pressing the Test Indigo Service button. If you are running the Indigo service on a separate server you can enter the server name here.  To start the Indigo service click Start > All Programs > BizTalk Benchmark Wizard > Start Indigo Service.   While the test is running you will be presented with two speed dial type displays - one for the received messages per second and one for the processed messages per second.
The green dial shows the current rate and the red dial shows the overall average rate.  Optionally you can view the CPU usage of the various servers involved in processing the tests. For my development environment I expected low results, and this is what I got.  Although, looking at the online high scores table and comparing with the quad-core system listed, the results are perhaps not really that bad. At some point I may look at what improvements I can make to this score, but if you are interested in that now take a look at Benchmark your BizTalk Server (Part 3).

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is defined as reducing costs to improve profitability, and may be implemented when a company is having financial problems or to prevent problems. ROI is defined as the amount of value received relative to the amount of money invested, according to PayperclickList.com.  With the ever-increasing demands on businesses to compete in today’s market, companies are constantly striving to reduce the time it takes for a concept to become a product and be sold within the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes it in fact saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems. In my personal opinion, these drivers would not really differ for a profit-based organization compared to a non-profit organization. Both corporate entities strive to reduce cost and to keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit businesses are more interested in increasing their reach within communities, whether it is to increase annual donations or to invest in the lives of others. Success or failure of a project can be determined by one or more of these drivers based on the scope of a project and the company’s priorities associated with each of the drivers. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, then the project might still be considered a success due to how close the project came to meeting each of the priorities. Continuous evaluation of the project could lead to a decision to abort the project because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that software development process stages include: Requirements Analysis and Definition System Design Program Design Program Implementation Unit Testing Integration Testing System Delivery Maintenance Each evaluation at every stage should consider all the business drivers included in the scope of the project and how close they are expected to come to meeting expectations. In addition, minimum requirements of acceptance should also be included within the scope of the project and should be reevaluated as the project progresses to ensure that the project makes good economic sense to continue. If the project falls below these benchmarks then the project should be put on hold until it does make more sense, or the project should be aborted because it does not meet the business driver requirements.   References Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. 
Retrieved July 19, 2009, from Answers.com Web site: http://www.answers.com/topic/cost-reduction-program Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov Web site: http://www.ssa.gov/gix/definitions.html PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com Web site: http://www.payperclicklist.com/glossary/termr.html Pfleeger, S. & Atlee, J. (2009). Software Engineering: Theory and Practice. Boston: Prentice Hall. Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy’s Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com Web site: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • Does my use of the strategy pattern violate the fundamental MVC pattern in iOS?

    - by Goodsquirrel
    I'm about to use the 'strategy' pattern in my iOS app, but feel like my approach somehow violates the fundamental MVC pattern. My app is displaying visual "stories", and a Story consists (i.e. has @properties) of one Photo and one or more VisualEvent objects to represent e.g. animated circles or moving arrows on the photo. Each VisualEvent object therefore has an eventType @property, that might be e.g. kEventTypeCircle or kEventTypeArrow. All events have things in common, like a startTime @property, but differ in the way they are being drawn on the StoryPlayerView. Currently I'm trying to follow the MVC pattern and have a StoryPlayer object (my controller) that knows about both the model objects (like Story and all kinds of visual events) and the view object StoryPlayerView. To choose the right drawing code for each of the different visual event types, my StoryPlayer is using a switch statement. @implementation StoryPlayer // (...) - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView { switch (event.eventType) { case kEventTypeCircle: [self showCircleEvent:event onStoryPlayerView:storyPlayerView]; break; case kEventTypeArrow: [self showArrowDrawingEvent:event onStoryPlayerView:storyPlayerView]; break; // (...) } But switch statements for type checking are bad design, aren't they? According to Uncle Bob they lead to tight coupling and can and should almost always be replaced by polymorphism. Having read about the "Strategy"-Pattern in Head First Design Patterns, I felt this was a great way to get rid of my switch statement. So I changed the design like this: All specialized visual event types are now subclasses of an abstract VisualEvent class that has a showOnStoryPlayerView: method. @interface VisualEvent : NSObject - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView; // abstract Each and every concrete subclass implements a concrete specialized version of this drawing behavior method. @implementation CircleVisualEvent - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView { [storyPlayerView drawCircleAtPoint:self.position color:self.color lineWidth:self.lineWidth radius:self.radius]; } The StoryPlayer now simply calls the same method on all types of events. @implementation StoryPlayer - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView { [event showOnStoryPlayerView:storyPlayerView]; } The result seems to be great: I got rid of the switch statement, and if I ever have to add new types of VisualEvents in the future, I simply create new subclasses of VisualEvent. And I won't have to change anything in StoryPlayer. But of course this approach violates the MVC pattern, since now my model has to know about and depend on my view! Now my controller talks to my model and my model talks to the view, calling methods on StoryPlayerView like drawCircleAtPoint:color:lineWidth:radius:. But these kinds of calls should be controller code, not model code, right? Seems to me like I made things worse. I'm confused! Am I completely missing the point of the strategy pattern? Is there a better way to get rid of the switch statement without breaking model-view separation?

    Read the article

  • Doing your first mock with JustMock

    - by mehfuzh
    In this post, I will start with a more traditional mocking example that includes a fund transfer scenario between two different currency accounts using JustMock. Our target interface that we will be mocking looks similar to: public interface ICurrencyService { float GetConversionRate(string fromCurrency, string toCurrency); } Moving forward, the SUT, or the class that will be consuming the service and will be invoked by the user (provided that the ICurrencyService is passed in DI style), looks like: public class AccountService : IAccountService { private readonly ICurrencyService currencyService; public AccountService(ICurrencyService currencyService) { this.currencyService = currencyService; } #region IAccountService Members public void TransferFunds(Account from, Account to, float amount) { from.Withdraw(amount); float conversionRate = currencyService.GetConversionRate(from.Currency, to.Currency); float convertedAmount = amount * conversionRate; to.Deposit(convertedAmount); } #endregion } As we can see, there is a TransferFunds action implemented from IAccountService that takes in a source account from which it withdraws some money and a target account to which the transfer takes place using the provided conversion rate. Our first step is to create the mock. The syntax for creating your instance mocks is pretty much the same and is valid for all interfaces and for non-sealed/sealed concrete instance classes. You can pass in additional options such as whether it is a strict mock or not; by default all the mocks in JustMock are loose, and you can use them as default-valued objects or stubs as well. ICurrencyService currencyService = Mock.Create<ICurrencyService>(); Using JustMock, setting up your expectations and asserting them always goes through Mock.Arrange|Assert, and the syntax is pretty much the same no matter what type of mocking you are doing. Therefore, in the above scenario we want to make sure that the conversion rate always returns 2.20F when converting from GBP to CAD. To do so we need to arrange in the following way: Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f).MustBeCalled(); Here, I have additionally marked the mock call as a must. That means it should be invoked somewhere in the code before we do Mock.Assert. We can also assert mocks directly through lambda expressions, but the more general Mock.Assert(mocked) will assert only the setups that are marked with "MustBeCalled()". Now, coming back to the main topic: as we have set up the mock, it is now time to act on it. Therefore, first we create our account service class and create our from and to accounts respectively.
var accountService = new AccountService(currencyService); var canadianAccount = new Account(0, "CAD"); var britishAccount = new Account(0, "GBP"); Next, we add some money to the GBP account: britishAccount.Deposit(100); Finally, we do our transfer with the following: accountService.TransferFunds(britishAccount, canadianAccount, 100); Once everything is completed, we need to make sure that things are as we expected, so it's time for assertions. Here, we first do the general assertions: Assert.Equal(0, britishAccount.Balance); Assert.Equal(220, canadianAccount.Balance); Following that, we do our mock assertion; as we have marked the call as “MustBeCalled”, it will make sure that our mock was actually invoked. Moreover, we can add filters, such as how many times our expected mock call has occurred; that will be covered in coming posts. Mock.Assert(currencyService); That concludes our first mock with JustMock; do stay tuned for more. Enjoy!!
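    As a small taste of the occurrence filters mentioned above, here is a hedged sketch of my own. It assumes the Occurs helpers that ship with Telerik JustMock and reuses the ICurrencyService, AccountService and Account types from the walkthrough; it is an illustration, not part of the original post.
    using Telerik.JustMock;

    public class CurrencyServiceOccurrenceTest
    {
        // Reuses ICurrencyService, AccountService and Account from the example above.
        public void ConversionRateIsRequestedExactlyOnce()
        {
            var currencyService = Mock.Create<ICurrencyService>();
            Mock.Arrange(() => currencyService.GetConversionRate("GBP", "CAD")).Returns(2.20f);

            var accountService = new AccountService(currencyService);
            var canadianAccount = new Account(0, "CAD");
            var britishAccount = new Account(0, "GBP");
            britishAccount.Deposit(100);

            accountService.TransferFunds(britishAccount, canadianAccount, 100);

            // Assert that the arranged call happened exactly once during the transfer.
            Mock.Assert(() => currencyService.GetConversionRate("GBP", "CAD"), Occurs.Once());
        }
    }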

    Read the article

< Previous Page | 430 431 432 433 434 435 436 437 438 439 440 441  | Next Page >