Search Results

Search found 6651 results on 267 pages for 'description'.


  • Using SSIS to send a HTML E-Mail Message with built-in table of Counts.

    - by Kevin Shyr
    For the record, this can just as easily be done with a .NET class and a DLL call. The two major reasons it ended up as an SSIS package are:

    1. There are a lot of SQL resources for maintenance, but not as many .NET developers.
    2. There is an existing automated process that links up SQL Jobs (more on that in the next post), and this is part of that process.

    To start, this is what the SSIS package looks like. The first part of the control flow is just for the override scenario.

    In the Execute SQL Task, it calls a stored procedure, which already formats the result into XML by using "FOR XML PATH('Row'), ROOT(N'FieldingCounts')". The result XML string looks like this:

        <FieldingCounts>
          <Row>
            <CellId>M COD</CellId>
            <Mailed>64</Mailed>
            <ReMailed>210</ReMailed>
            <TotalMail>274</TotalMail>
            <EMailed>233</EMailed>
            <TotalSent>297</TotalSent>
          </Row>
          <Row>
            <CellId>M National</CellId>
            <Mailed>11</Mailed>
            <ReMailed>59</ReMailed>
            <TotalMail>70</TotalMail>
            <EMailed>90</EMailed>
            <TotalSent>101</TotalSent>
          </Row>
          <Row>
            <CellId>U COD</CellId>
            <Mailed>91</Mailed>
            <ReMailed>238</ReMailed>
            <TotalMail>329</TotalMail>
            <EMailed>291</EMailed>
            <TotalSent>382</TotalSent>
          </Row>
          <Row>
            <CellId>U National</CellId>
            <Mailed>63</Mailed>
            <ReMailed>286</ReMailed>
            <TotalMail>349</TotalMail>
            <EMailed>374</EMailed>
            <TotalSent>437</TotalSent>
          </Row>
        </FieldingCounts>

    This result is saved into an internal SSIS variable with the appropriate settings on the General tab and the Result Set tab.

    Now comes the trickier part. We need to use the XML Task to format the XML string result into an HTML table; I used Direct input XSLT. Here is the XSLT:

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="html" indent="yes"/>
          <xsl:template match="/ROOT">
            <table border="1" cellpadding="6">
              <tr>
                <td></td>
                <td>Mailed</td>
                <td>Re-mailed</td>
                <td>Total Mail (Mailed, Re-mailed)</td>
                <td>E-mailed</td>
                <td>Total Sent (Mailed, E-mailed)</td>
              </tr>
              <xsl:for-each select="FieldingCounts/Row">
                <tr>
                  <xsl:for-each select="./*">
                    <td>
                      <xsl:value-of select="." />
                    </td>
                  </xsl:for-each>
                </tr>
              </xsl:for-each>
            </table>
          </xsl:template>
        </xsl:stylesheet>

    Then a Script Task is used to send out an HTML e-mail (as we are all painfully aware, the SSIS Send Mail Task only sends plain text):

        using System;
        using System.Data;
        using Microsoft.SqlServer.Dts.Runtime;
        using System.Windows.Forms;
        using System.Net.Mail;
        using System.Net;

        namespace ST_b829a2615e714bcfb55db0ce97be3901.csproj
        {
            [System.AddIn.AddIn("ScriptMain", Version = "1.0", Publisher = "", Description = "")]
            public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
            {
                #region VSTA generated code
                enum ScriptResults
                {
                    Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
                    Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
                };
                #endregion

                public void Main()
                {
                    String EmailMsgBody = String.Format("<HTML><BODY><P>{0}</P><P>{1}</P></BODY></HTML>"
                                                        , Dts.Variables["Config_SMTP_MessageSourceText"].Value.ToString()
                                                        , Dts.Variables["InternalStr_CountResultAfterXSLT"].Value.ToString());
                    MailMessage EmailCountMsg = new MailMessage(Dts.Variables["Config_SMTP_From"].Value.ToString().Replace(";", ",")
                                                                , Dts.Variables["Config_SMTP_Success_To"].Value.ToString().Replace(";", ",")
                                                                , Dts.Variables["Config_SMTP_SubjectLinePrefix"].Value.ToString() + " " + Dts.Variables["InternalStr_FieldingDate"].Value.ToString()
                                                                , EmailMsgBody);
                    EmailCountMsg.CC.Add(Dts.Variables["Config_SMTP_Success_CC"].Value.ToString().Replace(";", ","));
                    EmailCountMsg.IsBodyHtml = true;

                    SmtpClient SMTPForCount = new SmtpClient(Dts.Variables["Config_SMTP_ServerAddress"].Value.ToString());
                    SMTPForCount.Credentials = CredentialCache.DefaultNetworkCredentials;

                    SMTPForCount.Send(EmailCountMsg);

                    Dts.TaskResult = (int)ScriptResults.Success;
                }
            }
        }

    Note on this code: the e-mail address lists go through Replace(";", ","). This is only there because the lists are configurable in the SQL Job step's Set Values, which does not react well to commas as the address separator, while System.Net.Mail only accepts commas as the separator, hence the extra Replace. The result is a nicely formatted e-mail message with the count information.
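    As an aside, the post's opening point (that this could just as easily be a plain .NET class) can be sketched in a few lines: apply the same XSLT to the saved XML result with XslCompiledTransform and send the output as an HTML e-mail. The file names, addresses and SMTP host below are placeholders for illustration only, not part of the original package.

        using System;
        using System.IO;
        using System.Net.Mail;
        using System.Xml.Xsl;

        class CountMailer
        {
            static void Main()
            {
                // Transform the saved XML count result (wrapped the way the XSLT expects)
                // into an HTML table using the stylesheet shown above.
                var xslt = new XslCompiledTransform();
                xslt.Load("FieldingCounts.xslt");

                string htmlTable;
                using (var writer = new StringWriter())
                {
                    xslt.Transform("FieldingCounts.xml", null, writer);
                    htmlTable = writer.ToString();
                }

                // Send the table as an HTML e-mail.
                var msg = new MailMessage("reports@example.com", "team@example.com")
                {
                    Subject = "Fielding counts",
                    Body = "<HTML><BODY>" + htmlTable + "</BODY></HTML>",
                    IsBodyHtml = true
                };

                var smtp = new SmtpClient("smtp.example.com");
                smtp.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
                smtp.Send(msg);
            }
        }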

    Read the article

  • Persisting Session Between Different Browser Instances

    - by imran_ku07
    Introduction:

    By default, the in-proc session's identifier cookie is saved in browser memory. This cookie is known as a non-persistent cookie identifier, which simply means that if the user closes the browser, the cookie is immediately removed. Cookies which are stored on the user's hard drive and can be reused on later visits, on the other hand, are called persistent cookies. Persistent cookies are used less than non-persistent ones because of security: non-persistent cookies make session hijacking attacks more difficult and more limited, and if you are using a shared computer there is a good chance your persistent session will be reused by the other people sharing it. Still, many users want their session to persist even when they open two instances of the same browser, or when they close and reopen the browser. So in this article I will show a very simple way to persist your session even after the browser is closed.

    Description:

    Let's create a simple ASP.NET Web Application. In this article I will use Web Forms, but it also works in MVC. Open Default.aspx.cs and add the following code to Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (Session["Message"] != null)
                Response.Write(Session["Message"].ToString());
            Session["Message"] = "Hello, Imran";
        }

    This page simply shows a message if a session existed previously, and then sets the session value.

    Now run the application; you will see an empty page on the first try. After refreshing the page you will see the message "Hello, Imran". Now close the browser and reopen it, or open another browser instance: you will get exactly the same behavior as when you ran the application the first time. Why is the session not persisted between browser instances? The simple reason is the non-persistent session cookie identifier: the session cookie is not shared between browser instances. Now let's make it persistent.

    To make your application share the session between different browser instances, just add the following code to global.asax:

        protected void Application_PostMapRequestHandler(object sender, EventArgs e)
        {
            if (Request.Cookies["ASP.NET_SessionIdTemp"] != null)
            {
                if (Request.Cookies["ASP.NET_SessionId"] == null)
                    Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", Request.Cookies["ASP.NET_SessionIdTemp"].Value));
                else
                    Request.Cookies["ASP.NET_SessionId"].Value = Request.Cookies["ASP.NET_SessionIdTemp"].Value;
            }
        }

        protected void Application_PostRequestHandlerExecute(object sender, EventArgs e)
        {
            HttpCookie cookie = new HttpCookie("ASP.NET_SessionIdTemp", Session.SessionID);
            cookie.Expires = DateTime.Now.AddMinutes(Session.Timeout);
            Response.Cookies.Add(cookie);
        }

    In Application_PostRequestHandlerExecute (which runs after the HttpHandler), this code adds a persistent cookie, ASP.NET_SessionIdTemp, which contains the current user's SessionID and expires after the current session timeout.

    In Application_PostMapRequestHandler (which runs just before the session is restored), we check whether the request contains the ASP.NET_SessionIdTemp cookie. If it does, we add or update the ASP.NET_SessionId cookie with the value of ASP.NET_SessionIdTemp. So when a new browser instance is opened, a check is made: if ASP.NET_SessionIdTemp exists, the ASP.NET_SessionId cookie is added or updated from it.

    Run your application again and you will get the session from the last closed browser (if it has not expired).

    Summary:

    Session persistence is a great way to improve usability, but always consider the security implications before doing this. Still, there are cases in which you might need a persistent session; in this article I just went through a simple way to do it. Hopefully you will enjoy this simple article too.
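    As a variation on the global.asax approach, the same two hooks could be packaged in a reusable IHttpModule and registered in web.config. The sketch below is only an illustration of that idea (the module name is made up); the logic is identical to the handlers above.

        using System;
        using System.Web;

        public class PersistentSessionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.PostMapRequestHandler += OnPostMapRequestHandler;
                app.PostRequestHandlerExecute += OnPostRequestHandlerExecute;
            }

            private void OnPostMapRequestHandler(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                HttpCookie temp = ctx.Request.Cookies["ASP.NET_SessionIdTemp"];
                if (temp == null)
                    return;

                // Restore the session identifier cookie from the persistent copy.
                if (ctx.Request.Cookies["ASP.NET_SessionId"] == null)
                    ctx.Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", temp.Value));
                else
                    ctx.Request.Cookies["ASP.NET_SessionId"].Value = temp.Value;
            }

            private void OnPostRequestHandlerExecute(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                if (ctx.Session == null)
                    return; // the current handler may not use session state

                // Write a persistent copy of the session id, expiring with the session timeout.
                var cookie = new HttpCookie("ASP.NET_SessionIdTemp", ctx.Session.SessionID);
                cookie.Expires = DateTime.Now.AddMinutes(ctx.Session.Timeout);
                ctx.Response.Cookies.Add(cookie);
            }

            public void Dispose() { }
        }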

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background.

    1. Schemas and Databases

    The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases contain no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' to the running Oracle process that reads and writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.

    2. Connecting to Oracle

    The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated.

    SIDs, Service Names, and TNS

    A database (the files on disk) must have a unique identifier among the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:

        SC_11GDB1 =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SID = 11gR1db1)
            )
          )

    This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
    Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they can run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.

    3. Running synchronization scripts

    The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place...

    To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.

    PL/SQL is not T-SQL

    In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation gives them at the end of the syntax diagrams).
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so: BEGIN INSERT INTO table1 VALUES (1); INSERT INTO table1 VALUES (2); END; / In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
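    To make that last paragraph concrete, here is a small C# sketch (a simplification for illustration, not Red Gate's actual implementation) of wrapping a batch of DDL statements in a single anonymous PL/SQL block of EXECUTE IMMEDIATE calls, so the whole batch goes to the server in one round-trip:

        using System.Collections.Generic;
        using System.Text;

        static class DdlBatcher
        {
            // Wrap each DDL statement in EXECUTE IMMEDIATE inside one anonymous PL/SQL block,
            // so a whole batch can be sent to the server in a single round-trip.
            public static string WrapInPlSqlBlock(IEnumerable<string> ddlStatements)
            {
                var sb = new StringBuilder();
                sb.AppendLine("BEGIN");
                foreach (var statement in ddlStatements)
                {
                    // Strip the trailing semicolon (invalid in Oracle SQL) and double up quotes.
                    var text = statement.TrimEnd().TrimEnd(';').Replace("'", "''");
                    sb.AppendLine("  EXECUTE IMMEDIATE '" + text + "';");
                }
                sb.AppendLine("END;");
                return sb.ToString();
            }
        }

    The trade-off, as noted above, is that per-statement progress feedback is lost once everything runs inside one block.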

    Read the article

  • CodePlex Daily Summary for Sunday, August 03, 2014

    CodePlex Daily Summary for Sunday, August 03, 2014Popular ReleasesBoxStarter: Boxstarter 2.4.76: Running the Setup.bat file will install Chocolatey if not present and then install the Boxstarter modules.GMare: GMare Beta 1.2: Features Added: - Instance painting by holding the alt key down while pressing the left mouse button - Functionality to the binary exporter so that backgrounds from image files can be used - On the binary exporter background information can be edited manually now - Update to the GMare binary read GML script - Game Maker Studio export - Import from GMare project. Multiple options to import desired properties of a .gmpx - 10 undo/redo levels instead of 5 is now the default - New preferences dia...Json.NET: Json.NET 6.0 Release 4: New feature - Added Merge to LINQ to JSON New feature - Added JValue.CreateNull and JValue.CreateUndefined New feature - Added Windows Phone 8.1 support to .NET 4.0 portable assembly New feature - Added OverrideCreator to JsonObjectContract New feature - Added support for overriding the creation of interfaces and abstract types New feature - Added support for reading UUID BSON binary values as a Guid New feature - Added MetadataPropertyHandling.Ignore New feature - Improv...SQL Server Dialog: SQL Server Dialog: Input server, user and password Show folder and file in treeview Customize icon Filter file extension Skip system generate folder and fileAitso-a platform for spatial optimization and based on artificial immune systems: Aitso_0.14.08.01: Aitso0.14.08.01Installer.zipVidCoder: 1.5.24 Beta: Added NL-Means denoiser. Updated HandBrake core to SVN 6254. Added extra error handling to DVD player code to avoid a crash when the player was moved.AutoUpdater.NET : Auto update library for VB.NET and C# Developer: AutoUpdater.NET 1.3: Fixed problem in DownloadUpdateDialog where download continues even if you close the dialog. Added support for new url field for 64 bit application setup. AutoUpdater.NET will decide which download url to use by looking at the value of IntPtr.Size. Added German translation provided by Rene Kannegiesser. Now developer can handle update logic herself using event suggested by ricorx7. Added italian translation provided by Gianluca Mariani. Fixed bug that prevents Application from exiti...SEToolbox: SEToolbox 01.041.012 Release 1: Added voxel material textures to read in with mods. Fixed missing texture replacements for mods. Fixed rounding issue in raytrace code. Fixed repair issue with corrupt checkpoint file. Fixed issue with updated SE binaries 01.041.012 using new container configuration.Magick.NET: Magick.NET 6.8.9.601: Magick.NET linked with ImageMagick 6.8.9.6 Breaking changes: - Changed arguments for the Map method of MagickImage. 
- QuantizeSettings uses Riemersma by default.Multiple Threads TCP Server: Project: this Project is based on VS 2013, .net freamwork 4.0, you can open it by vs 2010 or laterAricie Shared: Aricie.Shared Version 1.8.00: Version 1.8.0 - Release Notes New: Expression Builder to design Flee Expressions New: Cryptographic helpers and configuration classes Improvement: Many fixes and improvements with property editor Improvement: Token Replace Property explorer now has a restricted mode for additional security Improvement: Better variables, types and object manipulation Fixed: smart file and flee bugs Fixed: Removed Exception while trying to read unsuported files Improvement: several performance twe...Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Agregado los módulos de DivXTotal, el módulo de búsqueda depende del de alojamiento para bajar las series Utiliza el rss: http://www.divxtotal.com/rss.php DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access compnent for .Net 4.0+. It has clearly and easily programing interface for ORM and sql directly, and supoorted Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provide a Ruby On Rails style MVC framework. Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip include the setup package. DbEntry.Net.v4.2.Src.zip include source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New?Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files was missing into the latest release of WPP which lead to crash when trying to make a custom update. Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers).VG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' linksAsp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4,Entity Framework and JQGrid Demo with simple Todo List WebApplication, Overview TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises of following fields to the user (Task Name, Task Description, Severity, Target Date, Task Status). 
TodoList web application is created using MVC - 4 architecture, code-first Entity Framework (ORM) and Jqgrid for displaying the data.Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0:New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collect a server's status but it has been stopped fixed a bug that can cause an exception in case of sending data when the connection dropped already fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart projectYnote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved StabilityNew ProjectsCreek: Creek is a Collection of many C# Frameworks and my ownSpeaking Speedometer (android): Simple speaking speedometerT125Protocol { Alpha version }: implement T125 Protocol for communicate with a mainframe.Unix Time: This library provides a System.UnixTime as a new Type providing conversion between Unix Time and .NET DateTime.

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received: the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver.

    However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. Dealing with multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need some way of adjusting how much data we send depending on how quickly we receive acknowledgement: if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also.

    Slow Start and Congestion Avoidance

    From RFC 2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission."

    Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold (ssthresh) value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements: we send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments, etc.

    When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.

    Congestion control algorithms differ most in how they handle the other indication of congestion: duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since they often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out of order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would therefore be excessive, so fast recovery is used instead.

    Observing slow start and congestion avoidance

    The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd > ssthresh):

        #!/usr/sbin/dtrace -s

        #pragma D option quiet

        tcp:::connect-request
        / start[args[1]->cs_cid] == 0 /
        {
                start[args[1]->cs_cid] = 1;
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh /
        {
                @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

        tcp:::send
        / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd > args[3]->tcps_cwnd_ssthresh /
        {
                @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count();
        }

    As we can see, the script only works on connections initiated after it is started (using the start[] associative array, indexed by connection ID, to record that we have seen the connection start: start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd > ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system.

        # dtrace -s tcp_slow_start.d
        ^C
        ALGORITHM             RADDR            RPORT   #SEG
        Slow start            10.153.125.222      20      6
        Slow start            138.3.237.7         80     14
        Slow start            10.153.125.222      21     18
        Congestion avoidance  10.153.125.222      20   1164

    We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start.

    For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example, "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS.

    And to finish, here's a neat one-liner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.

        # dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }'
        dtrace: description 'tcp:::send ' matched 10 probes
        ^C

               80
                  value  ------------- Distribution ------------- count
                     -1 |                                         0
                      0 |@@@@@@                                   5
                      1 |                                         0
                      2 |                                         0
                      4 |                                         0
                      8 |                                         0
                     16 |                                         0
                     32 |                                         0
                     64 |                                         0
                    128 |                                         0
                    256 |                                         0
                    512 |                                         0
                   1024 |                                         0
                   2048 |@@@@@@@@@                                8
                   4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@              23
                   8192 |                                         0

    Read the article

  • Oracle SOA Suite - Highlighted Travel and Transportation Customer References

    - by Bruce Tierney
    Next in this series on industry-specific highlights of Oracle SOA Suite customers is the Travel and Transportation industry. If you are in the travel or transportation industry, take a look at how these Oracle SOA Suite integration customers have addressed common business requirements to enable better customer service, lower costs, and deliver new business services. For example, All Nippon Airways (ANA) has significantly lowered management costs associated with their hybrid on-premise/cloud ticketing system deployments for domestic and international flights. Their lead-time for changes or new applications has been greatly reduced compared to their old mainframe-based systems, enabling ANA to rapidly develop new services in response to changing market needs. Another example is Schneider National, a leading provider of truckload logistics, which has integrated Oracle E-Business Suite, Siebel CRM, Oracle Transportation Management and customer applications using Oracle SOA Suite. Schneider National has 400 BPEL processes that generate over 60 million composite instances over five SOA clusters.

    Take a deeper look into any of these case studies, videos, and Oracle Magazine articles that closely align with your industry:

    Customers fly and airline succeeds with an IT transformation.
    Company: All Nippon Airways | Customer Oracle or Profit Magazine Article | Travel and Transportation | Published on January 06, 2014
    Any successful business must ensure ongoing customer satisfaction, respond to increased competition, and minimize costs. Running a successful airline in today's economic climate requires all of those things, as well a...

    Openmatics Revolutionizes Fleet Management with Standards-Based Vehicle Telematics Platform
    Company: Openmatics s.r.o. | Customer Snapshot | Automotive | Published on May 20, 2014
    Openmatics uses Oracle WebCenter Portal and Oracle Application Development Framework as a foundation for Openmatics, a vehicle telematics service for next-generation fleet management. It integrated its own app shop wi...

    Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate
    Company: SFpark | Customer Oracle or Profit Magazine Article | Professional Services | Published on August 01, 2012
    Oracle Fusion Middleware is at the heart of a recently completed and very ambitious project to change how people handle the challenge of finding a parking space in San Francisco, California. "Parking is a universal is...

    Globalia Corporación Empresarial Accelerates Hotel Bookings, Boosts Sales by 40% with In-Memory Data Grid Solution
    Company: Globalia Corporación Empresarial S.A. | Customer Snapshot | Travel and Transportation | Published on April 29, 2013
    Globalia Corporación Empresarial S.A.
deployed Oracle Coherence to reengineer the group’s core system for hotel bookings, now serving booking requests involving 80 hotels within an average response time of 100 millise... Choice Hotels Uses Oracle SOA Suite and Oracle BPM Suite to Modernize Global IT Architecture Company:  Choice Hotels  Press Release   |   Travel and Transportation   |   Published on August 07, 2012 Choice Hotels International, one of the largest and most successful hotel franchises in the world, has implemented Oracle SOA Suite and Oracle BPM Suite. Sascar Consolidates Fleet Management Infrastructure and Accelerates Customers’ Data Access Company:  Sascar  Customer Case Study   |   Travel and Transportation   |   Published on February 07, 2014 Description – Sascar used Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud and Oracle WebLogic Suite 11g to consolidate fleet management and perform real-time vehicle tracking 4x faster. Directorate General of Civil Aviation Streamlines Key Aviation Applications Access, Improves Productivity and Reduces Maintenance Costs Company:  Directorate General of Civil Aviation (DGAC)  Customer Snapshot   |   Travel and Transportation   |   Published on May 24, 2013 With Oracle Fusion Middleware, the Directorate General of Civil Aviation (DGAC) provided its 12,500 employees a virtual office environment that integrates team workspaces, business applications, and e-mails within a n... Schneider National Implements Next-Generation IT Infrastructure to Continue Leadership in Transportation and Logistics Industry Company:  Schneider National, Inc.  Customer Snapshot   |   Travel and Transportation   |   Published on February 26, 2013 Schneider National, Inc. deployed Oracle applications, Oracle Fusion Middleware, and Oracle development tools as the foundation for its next-generation IT environment, which is driving new levels of efficiency, profit... DGAC Cuts Subscription Costs with Oracle Company:  DGAC  Video   |   Travel and Transportation   |   Published on October 31, 2012 Using Oracle WebCenter Portal, Oracle SOA Suite, and Oracle Exalogic, DGAC reduces the cost of subscriptions to newsletters and provide to its 12,500 employees a collaborative workspace portal. Asiana Airlines Builds PIP System with Oracle Solutions Company:  Asiana Airlines  Video   |   Travel and Transportation   |   Published on July 26, 2012 With Oracle Exalogic and the Oracle SOA Suite, Asiana Airlines builds a passenger service integrated platform providing various services such as integration between its interface and internal systems and a data wareho... Choice Hotels Reduces Time to Market with Oracle WebCenter Company:  Choice Hotels  Video   |   Travel and Transportation   |   Published on April 11, 2014 Using Oracle WebCenter and Oracle SOA standardization, Choice Hotels consolidated multiple platforms, reduced IT dependency and realized tremendous benefits in total cost of ownership and faster time to market support... An Interview with Schneider National's Judy Lemke Company:  Schneider National  Video   |   Travel and Transportation   |   Published on December 17, 2013 Judy Lemke talks with Mark Sunday about the challenges Schneider National faced and how they overcame them through a companywide transformational change. For more details on these case studies, you can use this pre-filtered search on “Travel and Transportation” / “Middleware” / “Service Oriented Architecture” or browse on your own at www.oracle.com/customers

    Read the article

  • SpriteFont Exception, no such character?

    - by Michal Bozydar Pawlowski
    I have such spriteFont: <?xml version="1.0" encoding="utf-8"?> <!-- This file contains an xml description of a font, and will be read by the XNA Framework Content Pipeline. Follow the comments to customize the appearance of the font in your game, and to change the characters which are available to draw with. --> <XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics"> <Asset Type="Graphics:FontDescription"> <!-- Modify this string to change the font that will be imported. --> <FontName>Segoe UI</FontName> <!-- Size is a float value, measured in points. Modify this value to change the size of the font. --> <Size>20</Size> <!-- Spacing is a float value, measured in pixels. Modify this value to change the amount of spacing in between characters. --> <Spacing>0</Spacing> <!-- UseKerning controls the layout of the font. If this value is true, kerning information will be used when placing characters. --> <UseKerning>true</UseKerning> <!-- Style controls the style of the font. Valid entries are "Regular", "Bold", "Italic", and "Bold, Italic", and are case sensitive. --> <Style>Regular</Style> <!-- If you uncomment this line, the default character will be substituted if you draw or measure text that contains characters which were not included in the font. --> <!-- <DefaultCharacter>*</DefaultCharacter> --> <!-- CharacterRegions control what letters are available in the font. Every character from Start to End will be built and made available for drawing. The default range is from 32, (ASCII space), to 126, ('~'), covering the basic Latin character set. The characters are ordered according to the Unicode standard. See the documentation for more information. --> <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#32;</Start> <End>&#1200;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> It has the character regions (32-1200) And I get this exception: A first chance exception of type 'System.ArgumentException' occurred in Microsoft.Xna.Framework.Graphics.ni.dll The character '?' (0x0441) is not available in this SpriteFont. If applicable, adjust the font's start and end CharacterRegions to include this character. Parameter name: character Why? I'm drawing the string like this: spriteBatch.DrawString(font24, zasadyText, zasadyTextPos, kolorCzcionki1, -0.05f, Vector2.Zero, 1.0f, SpriteEffects.None, 0.5f) I even changed the spriteFont to cyrillic: <CharacterRegions> <CharacterRegion> <Start>&#09;</Start> <End>&#09;</End> </CharacterRegion> <CharacterRegion> <Start>&#0032;</Start> <End>&#0383;</End> </CharacterRegion> <CharacterRegion> <Start>&#1040;</Start> <End>&#1111;</End> </CharacterRegion> </CharacterRegions> </Asset> </XnaContent> and it still doesn't work. I got the (0x441 = char) exception -- EDIT -- Ok, I got the solution. It was a letter mistake in language. 
    I had this:

        if (jezyk == "ru_RU")
        {
            font14 = Content.Load<SpriteFont>("ru_font14");
            font24 = Content.Load<SpriteFont>("ru_font24");
            font12 = Content.Load<SpriteFont>("ru_czcionkaFloty");
            font10 = Content.Load<SpriteFont>("ru_font10");
            font28 = Content.Load<SpriteFont>("ru_font28");
            font20 = Content.Load<SpriteFont>("ru_font20");
        }
        else
        {
            font14 = Content.Load<SpriteFont>("font14");
            font24 = Content.Load<SpriteFont>("font24");
            font12 = Content.Load<SpriteFont>("czcionkaFloty");
            font10 = Content.Load<SpriteFont>("font10");
            font28 = Content.Load<SpriteFont>("font28");
            font20 = Content.Load<SpriteFont>("font20");
        }

    and the comparison should have used "ru-RU", not "ru_RU"; because the check never matched, the non-Cyrillic fonts were loaded and the Russian text hit the missing-character exception.
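    A hedged suggestion that falls out of this: derive the asset prefix from CultureInfo instead of comparing against a hand-typed culture string, so a typo like "ru_RU" cannot silently select the wrong fonts. The snippet below is only an illustration reusing the variable names from the code above:

        using System.Globalization;

        // Culture names use a hyphen (e.g. "ru-RU"), so let CultureInfo drive the choice.
        string culture = CultureInfo.CurrentUICulture.Name;
        string prefix = culture.StartsWith("ru") ? "ru_" : "";

        font14 = Content.Load<SpriteFont>(prefix + "font14");
        font24 = Content.Load<SpriteFont>(prefix + "font24");
        font12 = Content.Load<SpriteFont>(prefix + "czcionkaFloty");
        font10 = Content.Load<SpriteFont>(prefix + "font10");
        font28 = Content.Load<SpriteFont>(prefix + "font28");
        font20 = Content.Load<SpriteFont>(prefix + "font20");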

    Read the article

  • Data-driven animation states

    - by user8363
    I'm trying to handle animations in a 2D game engine hobby project, without hard-coding them. Hard coding animation states seems like a common but very strange phenomenon, to me. A little background: I'm working with an entity system where components are bags of data and subsystems act upon them. I chose to use a polling system to update animation states. With animation states I mean: "walking_left", "running_left", "walking_right", "shooting", ... My idea to handle animations was to design it as a data driven model. Data could be stored in an xml file, a rdbms, ... And could be loaded at the start of a game / level/ ... This way you can easily edit animations and transitions without having to go change the code everywhere in your game. As an example I made an xml draft of the data definitions I had in mind. One very important piece of data would simply be the description of an animation. An animation would have a unique id (a descriptive name). It would hold a reference id to an image (the sprite sheet it uses, because different animations may use different sprite sheets). The frames per second to run the animation on. The "replay" here defines if an animation should be run once or infinitely. Then I defined a list of rectangles as frames. <animation id='WIZARD_WALK_LEFT'> <image id='WIZARD_WALKING' /> <fps>50</fps> <replay>true</replay> <frames> <rectangle> <x>0</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> <rectangle> <x>45</x> <y>0</y> <width>45</width> <height>45</height> </rectangle> </frames> </animation> Animation data would be loaded and held in an animation resource pool and referenced by game entities that are using it. It would be treated as a resource like an image, a sound, a texture, ... The second piece of data to define would be a state machine to handle animation states and transitions. This defines each state a game entity can be in, which states it can transition to and what triggers that state change. This state machine would differ from entity to entity. Because a bird might have states "walking" and "flying" while a human would only have the state "walking". However it could be shared by different entities because multiple humans will probably have the same states (especially when you define some common NPCs like monsters, etc). Additionally an orc might have the same states as a human. Just to demonstrate that this state definition might be shared but only by a select group of game entities. <state id='IDLE'> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_LEFT'> <event trigger='LEFT_UP' goto='IDLE' /> <event trigger='RIGHT_DOWN' goto='MOVING_RIGHT' /> </state> <state id='MOVING_RIGHT'> <event trigger='RIGHT_UP' goto='IDLE' /> <event trigger='LEFT_DOWN' goto='MOVING_LEFT' /> </state> These states can be handled by a polling system. Each game tick it grabs the current state of a game entity and checks all triggers. If a condition is met it changes the entity's state to the "goto" state. The last part I was struggling with was how to bind animation data and animation states to an entity. The most logical approach seemed to me to add a pointer to the state machine data an entity uses and to define for each state in that machine what animation it uses. Here is an xml example how I would define the animation behavior and graphical representation of some common entities in a game, by addressing animation state and animation data id. 
    Note that both "wizard" and "orc" have the same animation states but a different animation. Also, a different animation could mean a different sprite sheet, or even a different sequence of animations (an animation could be longer or shorter).

        <entity name="wizard">
            <state id="IDLE" animation="WIZARD_IDLE" />
            <state id="MOVING_LEFT" animation="WIZARD_WALK_LEFT" />
        </entity>
        <entity name="orc">
            <state id="IDLE" animation="ORC_IDLE" />
            <state id="MOVING_LEFT" animation="ORC_WALK_LEFT" />
        </entity>

    When the entity is created it would get a list of states with state machine data and an animation data reference. In the future I would use the entity system to build whole entities by defining components in a similar XML format.

    This is what I have come up with after some research. However, I had some trouble getting my head around it, so I was hoping for some feedback. Is there something here that doesn't make sense, or is there a better way to handle these things? I grasped the idea of iterating through frames, but I'm having trouble taking it a step further, and this is my attempt to do that.
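    For what it's worth, here is a short C# sketch of the polling update this data could drive each tick (all names are made up for illustration): look up the current state's transitions, and when one of its triggers fired this frame, switch state and swap the referenced animation.

        using System.Collections.Generic;

        // state id -> (trigger -> next state id), e.g. "IDLE" -> { "LEFT_DOWN" -> "MOVING_LEFT" }
        class AnimationStateMachine
        {
            public Dictionary<string, Dictionary<string, string>> Transitions =
                new Dictionary<string, Dictionary<string, string>>();
        }

        class AnimationComponent
        {
            public AnimationStateMachine Machine;                // shared per entity type
            public Dictionary<string, string> StateToAnimation;  // e.g. "MOVING_LEFT" -> "WIZARD_WALK_LEFT"
            public string CurrentState = "IDLE";
            public string CurrentAnimation;
        }

        static class AnimationSystem
        {
            // Called by the polling system once per tick for each entity with an AnimationComponent.
            public static void Update(AnimationComponent c, HashSet<string> firedTriggers)
            {
                Dictionary<string, string> edges;
                if (!c.Machine.Transitions.TryGetValue(c.CurrentState, out edges))
                    return;

                foreach (var edge in edges)
                {
                    if (!firedTriggers.Contains(edge.Key))       // e.g. "LEFT_DOWN"
                        continue;
                    c.CurrentState = edge.Value;                 // e.g. "MOVING_LEFT"
                    c.CurrentAnimation = c.StateToAnimation[c.CurrentState];
                    break;
                }
            }
        }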

    Read the article

  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas Tool( D3DXUVAtlasCreate() ). I've succeeded in generating an atlas, however, when I try to render the mesh object using the atlas the seams are visible on the mesh. Below are images of a lightmap generated for a cube. Here is the code I use to generate a uv atlas for a cube: struct sVertexPosNormTex { D3DXVECTOR3 vPos, vNorm; D3DXVECTOR2 vUV; sVertexPosNormTex(){} sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv ) { vPos = v; vNorm = n; vUV = uv; } ~sVertexPosNormTex() { } }; // create a light map texture to fill programatically hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &pLightmap ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr ); return hr; } // get the zero level surface from the texture IDirect3DSurface9 *pS = NULL; pLightmap->GetSurfaceLevel( 0, &pS ); // clear surface pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) ); // load a sample mesh DWORD dwcMaterials = 0; LPD3DXBUFFER pMaterialBuffer = NULL; V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice, &pAdjacency, &pMaterialBuffer, NULL, &dwcMaterials, &g_pMesh ) ); // generate adjacency DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ]; g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency ); // create light map coordinates LPD3DXMESH pMesh = NULL; LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL; FLOAT resultStretch = 0; UINT numCharts = 0; hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency, NULL, NULL, NULL, NULL, NULL, 0, &pMesh, &pFacePartitioning, &pVertexRemapArray, &resultStretch, &numCharts ); if( SUCCEEDED( hr ) ) { // release and set mesh SAFE_RELEASE( g_pMesh ); g_pMesh = pMesh; // write mesh to file hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0, ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(), NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr ); } // fill the the light map hr = BuildLightmap( pS, g_pMesh ); if( FAILED( hr ) ) { DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr ); } } else { DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr ); } SAFE_RELEASE( pS ); SAFE_DELETE_ARRAY( pdwAdjacency ); SAFE_RELEASE( pFacePartitioning ); SAFE_RELEASE( pVertexRemapArray ); SAFE_RELEASE( pMaterialBuffer ); Here is code to fill lightmap texture: HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh ) { HRESULT hr = S_OK; // validate lightmap texture surface and mesh if( !pS || !pMesh ) return E_POINTER; // lock the mesh vertex buffer sVertexPosNormTex *pV = NULL; pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV ); // lock the mesh index buffer WORD *pI = NULL; pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI ); // get the lightmap texture surface description D3DSURFACE_DESC desc; pS->GetDesc( &desc ); // lock the surface rect to fill with color data D3DLOCKED_RECT rct; hr = pS->LockRect( &rct, NULL, 0 ); if( FAILED( hr ) ) { DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr ); return hr; } // iterate the pixels of the lightmap texture // check each pixel to see if it lies between the uv coordinates of a cube face BYTE *pBuffer = ( BYTE* )rct.pBits; for( UINT y = 0; y < desc.Height; ++y ) { BYTE* pBufferRow = ( BYTE* )pBuffer; for( UINT x = 0; x 
< desc.Width * 4; x+=4 ) { // determine the pixel's uv coordinate D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f, y / ( float )desc.Height + 0.5f / 128.0f ); // for each face of the mesh // check to see if the pixel lies within the face's uv coordinates for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i +=3 ) { sVertexPosNormTex v[ 3 ]; v[ 0 ] = pV[ pI[ i + 0 ] ]; v[ 1 ] = pV[ pI[ i + 1 ] ]; v[ 2 ] = pV[ pI[ i + 2 ] ]; if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) ) { // the pixel lies b/t the uv coordinates of a cube face // light contribution functions aren't needed yet //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos, v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ); //D3DXVECTOR3 vNormal = v[ 0 ].vNorm; // set the color of this pixel red( for demo ) BYTE ba[] = { 0, 0, 255, 255, }; //ComputeContribution( vPos, vNormal, g_sLight, ba ); // copy the byte array into the light map texture memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) ); } } } // go to next line of the texture pBuffer += rct.Pitch; } // unlock the surface rect pS->UnlockRect(); // unlock mesh vertex and index buffers pMesh->UnlockIndexBuffer(); pMesh->UnlockVertexBuffer(); // write the surface to file hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL ); if( FAILED( hr ) ) DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr ); return hr; } bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1, const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p ) { // compute vectors D3DXVECTOR2 v0 = t1 - t0, v1 = t2 - t0, v2 = p - t0; float f00 = D3DXVec2Dot( &v0, &v0 ); float f01 = D3DXVec2Dot( &v0, &v1 ); float f02 = D3DXVec2Dot( &v0, &v2 ); float f11 = D3DXVec2Dot( &v1, &v1 ); float f12 = D3DXVec2Dot( &v1, &v2 ); // Compute barycentric coordinates float invDenom = 1 / ( f00 * f11 - f01 * f01 ); float fU = ( f11 * f02 - f01 * f12 ) * invDenom; float fV = ( f00 * f12 - f01 * f02 ) * invDenom; // Check if point is in triangle if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) ) return true; return false; } Screenshot Lightmap I believe the problem comes from the difference between the lightmap uv coordinates and the pixel center coordinates...for example, here are the lightmap uv coordinates( generated by D3DXUVAtlasCreate() ) for a specific face( tri ) within the mesh, keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture: v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 ); v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 ); v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 ); the lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are: float halfPixel = 0.5 / 128 = 0.00390625; D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel ); will the mapping and sampling of the lightmap texture will require that an offset be taken into account or that the uv coordinates are snapped to the pixel centers..? ...Any ideas on the best way to approach this situation would be appreciated...What are the common practices?
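    One common practice for seams like these (offered as a suggestion, not something verified against this exact code) is to rely on the gutter parameter of D3DXUVAtlasCreate (the 3.5f above) and then dilate the finished lightmap, so that texels just outside each chart copy the colour of their nearest filled neighbour; bilinear sampling at chart borders then never blends in the cleared background colour. The half-pixel offset already used above is the right idea for sampling at texel centres. Below is a minimal sketch of one dilation pass, written in C# for brevity (the translation to the C++ above is mechanical), assuming "empty" texels are still the cleared black:

        // Hypothetical post-process: one dilation pass that bleeds filled lightmap texels
        // into empty neighbours, building a gutter around each chart.
        // Assumes a 32-bit BGRA buffer where "empty" means R = G = B = 0 (the cleared colour).
        static void DilateOnce(byte[] bgra, int width, int height, int pitch)
        {
            byte[] src = (byte[])bgra.Clone();
            for (int y = 0; y < height; ++y)
            {
                for (int x = 0; x < width; ++x)
                {
                    int i = y * pitch + x * 4;
                    bool empty = src[i] == 0 && src[i + 1] == 0 && src[i + 2] == 0;
                    if (!empty)
                        continue;

                    int[,] offsets = { { -1, 0 }, { 1, 0 }, { 0, -1 }, { 0, 1 } };
                    for (int n = 0; n < 4; ++n)
                    {
                        int nx = x + offsets[n, 0], ny = y + offsets[n, 1];
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                            continue;
                        int j = ny * pitch + nx * 4;
                        if (src[j] == 0 && src[j + 1] == 0 && src[j + 2] == 0)
                            continue;                                   // neighbour is empty too
                        System.Buffer.BlockCopy(src, j, bgra, i, 4);    // copy the filled neighbour in
                        break;
                    }
                }
            }
        }

    Running the pass two or three times widens the gutter accordingly.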

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machine Running SQL Server With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine Readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine you level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys. Now, let’s walk through the MAP Toolkit task for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following: Perform an inventory View the Windows Azure VM Readiness results and report Collect performance data for determine VM sizing View the Windows Azure Capacity results and report Perform an inventory: 1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below: 2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don’t care about completely inventorying a machine, just select the SQL Server scenario. Click Next to Continue. 3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of Discovery Methods: Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS. Windows networking protocols --  This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers. System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers. Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly, if many IP addresses aren’t being used within the range. Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers. Import computer names from a files -- Using this method, you can create a text file with a list of computer names that will be inventoried. 4. 
    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

    7. On the Summary page confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below:

    Windows Azure Readiness results and report

    After the inventory progress has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve the issue if a component is not supported.

    Collect Performance Data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested Virtual Machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful References:
    - Windows Azure Homepage
    - How to guides for Windows Azure Virtual Machines
    - Provisioning a SQL Server Virtual Machine on Windows Azure
    - Windows Azure Pricing

    Peter Saddow
    Senior Program Manager – MAP Toolkit Team

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports / dashboards / a model that can answer their business questions as part of the sales cycle
    - Demonstrate technical feasibility of a BI project and significantly lower risk and improve success
    - Build vs. buy benefit – you don't have to start with a blank sheet of paper
    - Help consolidate disparate systems
    - Data integration in M&A situations
    - Insulate BI consumers from changes in the OLTP
    - Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy

    Prebuilt Integrations
    - BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP
    - Co-developed with inputs from functional experts in BI and Applications teams
    - Out-of-the-box dimensional model to source model mappings
    - Multi-source and multi-instance support

    Rich Data Model
    BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities.
    - Conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer / supplier 360
    - Over 360 fact tables across 7 product areas: CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56
    - Supported by 300 physical dimensions
    - Support for extensive calendars: Gregorian, enterprise and ledger based
    - Conformed data model and metrics for real-time vs. warehouse-based reporting
    - Multi-tenant enabled

    Extensive BI-related transformations
    BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules.
    - Slowly Changing Dimension support
    - Hierarchy flattening support
    - Row / column hybrid hierarchy flattening
    - As Is vs. As Was hierarchy support
    - Currency conversion: support for 3 corporate, CRM, ledger and transaction currencies
    - UOM conversion
    - Internationalization / localization
    - Dynamic data translations
    - Code standardization (Domains)
    - Historical snapshots
    - Cycle and process lifecycle computations
    - Balance facts
    - Equalization of GL accounting chartfields/segments
    - Standardized values for categorizing GL accounts
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS / Fusion Accruals
    - Complex event interpretation of source data, e.g.:
      o What constitutes a transfer
      o Deriving supervisors via position hierarchy
      o Deriving primary assignment in PSFT
      o Categorizing and transposing Payroll Balances into specific metrics to support side-by-side comparison of measures such as Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments
      o Counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside the total transfers and terminations
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, performance, to allow side-by-side comparison
    - Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher-level analytical reporting and data discovery
    - Calculation of complex measures, for example:
      o COGS, DSO, DPO, inventory turns, etc.
      o Transfers within a hierarchy or out of / into a hierarchy relative to the viewpoint in the hierarchy
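
    To make one of the transformation patterns above concrete, here is a small, purely illustrative C# sketch of the Type 2 slowly changing dimension pattern (expire the current row and insert a new version when a tracked attribute changes). The class and field names are invented for the example; in the BI apps this logic lives in the prebuilt ETL, not in application code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical dimension row, used only for illustration.
    class CustomerDim
    {
        public int CustomerKey;        // surrogate key
        public string CustomerId;      // natural key from the source system
        public string Segment;         // tracked attribute (Type 2)
        public DateTime EffectiveFrom;
        public DateTime? EffectiveTo;  // null = current row
    }

    static class ScdType2
    {
        // Applies a source change to the dimension: expire the current row and add a new version.
        public static void Apply(List<CustomerDim> dim, string customerId, string newSegment, DateTime asOf)
        {
            var current = dim.SingleOrDefault(r => r.CustomerId == customerId && r.EffectiveTo == null);

            if (current != null && current.Segment == newSegment)
                return;                       // tracked attribute unchanged, nothing to do

            if (current != null)
                current.EffectiveTo = asOf;   // close out the old version

            dim.Add(new CustomerDim
            {
                CustomerKey = dim.Count == 0 ? 1 : dim.Max(r => r.CustomerKey) + 1,
                CustomerId = customerId,
                Segment = newSegment,
                EffectiveFrom = asOf,
                EffectiveTo = null
            });
        }
    }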
    Configurability and Extensibility support
    BI apps offer support for extensibility for various entities, either as automated extensibility or as part of the extension methodology.
    - Key flex fields and descriptive flex support
    - Extensible attribute support (JDE)
    - Conformed domains

    ETL Architecture
    BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model.
    - Multi-source, multi-technology
    - Orchestration – creates a load plan taking into account task dependencies and the customer's deployment, to generate a plan for the customer's set of multiple complex ETL tasks
    - Plan optimization allowing parallel ETL tasks
    - Oracle: bitmap indexes and partition management
    - High availability support
    - Follow-the-sun support

    TCO
    BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation.
    - Improved cost of ownership – lower cost to deploy
    - On-going support for new versions of the source application
    - Task-based setup flows
    - Data lineage
    - Functional setup performed in a Web UI by a functional person
    - Configuration Test to Production support

    Security
    BI apps support both data and object security, enabling implementations to quickly configure the application as per the reporting security needs.
    - Fine-grained object security at report / dashboard and presentation catalog level
    - Data security integration with source systems
    - Extensible to support external data security rules

    Extensive Set of KPIs
    - Over 7000 base and derived metrics across all modules
    - Time series calculations (YoY, % growth, etc.)
    - Common currency and UOM reporting
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drill from GL to AP/AR, etc.)

    Prebuilt reports and dashboards
    - 3000+ prebuilt reports supporting a large number of industries
    - Hundreds of role-based dashboards
    - Dynamic currency conversion at dashboard level

    Highly tuned Performance
    The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance.
    - Optimized data model for BI and analytic queries
    - Prebuilt aggregates, and the ability for customers to create their own aggregates easily on warehouse facts, allows for scalable end-user performance
    - Incremental extracts and loads
    - Incremental aggregate build
    - Automatic table index and statistics management
    - Parallel ETL loads
    - Source system deletes handling
    - Low-latency extract with GoldenGate
    - Micro ETL support
    - Bitmap indexes
    - Partitioning support
    - Modularized deployment: start small and add other subject areas seamlessly

    Source Specific Staging and Real Time Schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel and JDE

    Application Integrations
    The BI apps also allow for integration with source systems as well as other applications that provide value-add through BI and enable BI consumption during operational decision making.
    - Embedded dashboards for Fusion, EBS and Siebel applications
    - Action Link support
    - Marketing Segmentation
    - Sales Predictor Dashboard
    - Territory Management

    External Integrations
    The BI apps data integration choices include support for loading external data.
    - External data enrichment choices: UNSPSC, Item class, etc.
    - Extensible Spend Classification

    Broad Deployment Choices
    - Exalytics support
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL
    - ETL tool of choice: ODI (coming), Informatica

    Extensible and Customizable
    - Extensible architecture and methodology to add custom and external content
    - Upgradable across releases

    Thanks for reading a long post, and be on the lookout for future posts. We will look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.

    Read the article

  • How To: Using SimpleMembershipProvider with MySql Connector/Net.

    - by Francisco Tirado
    Now, on Connector/Net 6.9, users have the ability to use the SimpleMembership provider with the MVC 4 templates. The configuration is very simple and it is also compatible with OAuth. In this post we'll explain step by step how to configure it in an MVC 4 Web Application.

    Requirements

    The requirements to use SimpleMembership with Connector/Net are:
    - Install Connector/Net 6.9, or download the No Install version.
    - .NET Framework 4.0 or greater.
    - MVC 4
    - Visual Studio 2012 or a newer version

    Creating and configuring a new project

    In this example we'll use VS2012 to create the project based on the Internet Application template, using Entity Framework to manage the User model. Open VS 2012 and create a new project; we'll create a new MVC 4 Web Application and configure the project to use .NET Framework 4.5. Type a name for the project and then click “Ok”. In the next dialog we'll choose the “Internet Application” template and use Razor as the view engine, without creating a test project. Click “Ok” to continue.

    Now we have a new project with the templates necessary to run a Web Application with the default values. We'll use the current files to continue working. If you have installed Connector/Net you can skip this step; if you haven't installed it but you're planning to do so, please install it and continue with the next step. If you're using the No Install version of Connector/Net, we'll need to add the references to our project; the assemblies needed are: MySql.Data, MySql.Data.Entities and MySql.Web. Be sure that the assemblies chosen match the .NET Framework version used in our project and that MySql.Data.Entities is compatible with EF5 (EF5 is the default added by the project).

    Now open the “web.config” file, and under the <connectionStrings> node add a connection string that points to a MySql instance. We'll use the following connection configuration:

    <add name="MyConnection" connectionString="server=localhost;UserId=root;password=pass;database=MySqlSimpleMembership;" providerName="MySql.Data.MySqlClient"/>

    Under the node <system.web> we'll add the following configuration:

    <membership defaultProvider="MySqlSimpleMembershipProvider"><providers><clear/><add name="MySqlSimpleMembershipProvider" type="MySql.Web.Security.MySqlSimpleMembershipProvider,MySql.Web,Version=6.9.3.0,Culture=neutral,PublicKeyToken=c5687fc88969c44d" applicationName="MySqlSimpleMembershipTest" description="MySQLdefaultapplication" connectionStringName="MyConnection"  userTableName="UserProfile" userIdColumn="UserId" userNameColumn="UserName" autoGenerateTables="True"/></providers></membership>

    In the previous configuration the mandatory properties are: connectionStringName, userTableName, userIdColumn, userNameColumn and autoGenerateTables. If the other properties are not provided, a default value is set for them, but if the mandatory properties are not set a ProviderException will be thrown. The valid properties for MySqlSimpleMembership are the same as those used for MySqlMembership, plus the mandatory fields.
    - UserTableName: name of the table where the users will be stored. This table is independent from the schema generated by the provider and can be edited later by the user.
    - UserId: name of the column that will store the id for the records in the userTableName.
    - UserName: name of the column that will store the name/user for the records in the userTableName.

    The connectionStringName property must match a connection string defined in the web.config file.
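
    For reference, the UserProfile entity and context that this configuration maps to look roughly like the classes below. This mirrors the default MVC 4 Internet Application template; the property names must line up with the userIdColumn and userNameColumn values above:

    using System.ComponentModel.DataAnnotations;
    using System.ComponentModel.DataAnnotations.Schema;
    using System.Data.Entity;

    // Matches userTableName="UserProfile", userIdColumn="UserId", userNameColumn="UserName".
    [Table("UserProfile")]
    public class UserProfile
    {
        [Key]
        [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
        public int UserId { get; set; }

        public string UserName { get; set; }
    }

    // The context must use the connection string name registered in web.config.
    public class UsersContext : DbContext
    {
        public UsersContext() : base("MyConnection") { }

        public DbSet<UserProfile> UserProfiles { get; set; }
    }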
    Once the configuration is done in web.config, we need to be sure that our database context for the Users table points to the right connection string. In our case we just need to update the UsersContext class in the file AccountModels.cs in the Models folder. The file also contains the UserProfile class, which matches the configuration for our user table. Another class that needs to be updated is SimpleMembershipInitializer, in the file InitializeSimpleMembershipAttribute.cs in the Filters folder. In that class we'll see a call to the method “WebSecurity.InitializeDatabaseConnection”; that call is where we need to update the parameters to match our configuration. If the database that you configured in your connection string doesn't exist, you need to create it empty.

    Now we're ready to run our web application; press F5 or the Run button in the toolbar. You'll see the following screen:

    If you go to the database used by the application you'll see some tables created; we are now using SimpleMembership. Now create a user: click on “Register” at the top right of the web page. Type your user name and password, then click on “Register”. You'll be redirected to the home page and you'll see the name of your user at the top right of the page. If you take a look at the tables just created in your database you will find the data about the user you just registered. In our case the tables that contain the information are UserProfile and Webpages_Membership.

    Configuring OAuth

    Another option for accessing your website is OAuth, so you can validate a user using an external account like Facebook, Twitter, Google, etc. In this post we'll enable authentication with a Google account in our application. Go to the class AuthConfig.cs in the App_Start folder. In the method “RegisterAuth”, uncomment the last line, which contains the call to the method “OAuthWebSecurity.RegisterGoogleClient”. Run the application.

    Once the application is running, click on “Login”. You will see on the right side the option to log in using a Google account; click on “Google”. You will be asked for your Google credentials. If your login is successful you'll see a message asking for your approval to give your site permission to access your information. Click on “Accept”. Now a page to register your user will be shown; click on “Register”. Now your new user is logged in to your application. You can take a look at the user information created in the tables UserProfile and Webpages_OauthMembership.

    If you want to use another external option to authenticate users you must enable the client in the same class where we enabled the Google authentication, but for other providers it is mandatory to first register your application on their site. Once you have registered your application they will give you a token/key and the id for your application, which you will use to register the client. Thanks for reading.
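
    For convenience, here is roughly what those two calls look like once they are wired to the configuration used in this walkthrough; the values simply mirror the web.config settings above and may need adjusting for your own project:

    // using WebMatrix.WebData;
    // using Microsoft.Web.WebPages.OAuth;

    // In SimpleMembershipInitializer (Filters/InitializeSimpleMembershipAttribute.cs):
    WebSecurity.InitializeDatabaseConnection(
        "MyConnection",            // connection string name from web.config
        "UserProfile",             // userTableName
        "UserId",                  // userIdColumn
        "UserName",                // userNameColumn
        autoCreateTables: true);

    // In RegisterAuth (App_Start/AuthConfig.cs), enabling the Google client:
    OAuthWebSecurity.RegisterGoogleClient();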

    Read the article

  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency. EFFECTIVENESS The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate or am I still operating at the previous level)? Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc, change of direction of team/org/etc, re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those at each annual review anyway. If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment, don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way! That's my view on effectiveness. You have to fix that before moving to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it. EFFICIENCY The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and it can change with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... 
All you need to really be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at. If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take so that you can have repeatable success. That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: You spend more time than necessary dealing with email, and you still end up with dropped email threads and with slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you. Parting Thoughts Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess what things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of “Good Enough”. My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work it is because of two reasons: They are investing energy in stuff that they enjoy doing which the business regards as having a lower priority than a lot of other things on their plate. They are completing tasks to a level of higher quality than what is required (due to personal pride) missing the big picture which almost always mandates completing three tasks at good enough quality than knocking only one of them out of the park while the other two come in late or not at all. There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project you see a set of screen templates that you can choose from. These templates lay out all the related data you choose to put on a screen automatically for you. And don’t under estimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen. However, you’re not limited to taking the layout as is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you’re new in Visual Studio LightSwitch, you can see this tutorial. OK, Let’s start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database. I also have added a Search Data  Screen to search my Products already. Now I will add a new Details Screen for my Products and make it the default screen via the “Add New Screen” dialog: The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application, select a Product on the search screen to open the Product Details Screen. Notice that it’s pretty simple because my entity is simple. Click the “Customize” button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty. You can envision the screen like this: Each container will display a group of data that you select. For instance in the above screen, the top row is set to a vertical stack control and the group of data to display is coming from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container: You can select the “New Group” item in order to create more containers (or controls) within the current container. For instance to totally control the layout, select the Product in the top row and hit the delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns) then select “New Group” for the data item and then change the vertical stack control to “Two Columns” for both of the rows as shown here: You can keep going on and on by selecting new groups and choosing between rows or columns. 
Here’s a layout with 8 containers, 4 rows and 2 columns: And here is a layout with 7 content areas; one row across the top of the screen and three rows with two columns below that: When you select Choose Content and select a data item like Product it will populate all the controls within the container (row or column in a vertical stack) however you have complete control on what to display within each group. You can delete fields you don’t want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here) you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance you can’t drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above. Let’s take a more complex example that deals with more than just product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However I’m going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns below. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you’re debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how it maps to the content tree (click to enlarge): I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points – you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the “Add New Screen” dialog. Enjoy! The tutorial that might be interested: Adding Custom Control In LightSwitch

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today’s CIO job description:  ‘’Align IT infrastructure and solutions with business goals and objectives ; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow.”   Sound like an unachievable request? The fact is, reality dictates that CIO’s are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large Insurance company. This is a role that we see all too infrequently in many of our customers, and it’s a shame.  The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken and more importantly that objectives were commonly understood - hence service and projects were delivered to time, to budget and actually solved the business problems. In reality IT and the business are already married, but the relationship is most often defined as ‘supplier’ of IT rather than a ‘trusted partner’. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The Business cannot compete if they do not get a new product to market ahead of the competition, or for example act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the Application or Service fails and the Business takes a hit by bad publicity, being trending topics on social media and losing direct revenue from online channels. For this reason alone Business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester’s recent study that found ‘many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations’.  IT Meet the Business; Business Meet IT So what is going on? We talk about aligning the business with IT but the reality is it’s difficult to do. Like any relationship each side has different goals and needs and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way?  Well now we can with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2! Enterprise Manager’s Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria.  No longer does IT need to be focused solely on speeds and feeds, performance and throughput – now IT can understand IT’s impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar.  Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There are a good deal of resources on Oracle Technology Network that describe the functionality of these products, so I won’t’ rehash them here.  What I want to talk about is what you do with these products. What’s next after we meet? Where do you start? Step 1:  Identify the Signature Experience. 
This is THE business process (or set of processes) that is core to the business, the one that drives the economic engine, the process that a customer recognises the company brand for, reputation, the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this it gets easy as Enterprise Manager 12c makes it easy. Step 2:  Map the Signature Experience to underlying IT.  Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end, think of it like mapping out a critical path; the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end to end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPI’s) and thresholds to enable you to report and action upon. Step 3:  Observe the data over time.  You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT.  Watch if for a few months, see what happens and reconvene with your business stakeholders and set clear and measurable targets which can re-define service levels.  Step 4:  Change the information about which you and the business communicate.  It’s amazing what happens when you and the business speak the same language.  You’ll be able to make more informed business and IT decisions. From here IT can identify where/how budget is spent whether on the level of support, performance, capacity, HA, DR, certification etc. IT SLA’s no longer need be focused on metrics such as %availability but structured around business process requirements. The power of this way of thinking doesn’t end here. IT staff get to see and understand how their own role contributes to the business making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or show back becomes easier to justify as the ‘cost of day per outage’ can be more easily calculated; the business will be able to translate the cost to the business to the cost/value of the underlying IT that supports it. Used this way, Oracle Enterprise Manager 12c is a key enabler to a harmonious relationship between the end customer the business and IT to deliver ultimate service and satisfaction. Just engage with the business upfront, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned to enable you to implement this new way of working.  

    Read the article

  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I’d like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was “BI From A-Z” and its main objective was to introduce people to the BI world not only from a technical point of view, but also to insist a lot on a methodological and “engineered” approach.

    The will to put more engineering into IT (and especially into the BI field) is something that has been growing stronger and stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines “just” by putting people together and giving them some rules to follow (of course this is an oversimplification, but I think you get what I mean). The key point of engineering is that, after having defined the project blueprint, you have the possibility to give a huge number of people the rules to follow, the correct tools to implement the rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? Very big question, so my scope is for now limited to BI.

    So that’s the main point of my workshop: an entry-level approach to BI (the level was 200) in order to allow attendees to learn the basics, to understand what tools they should use for which purpose and, above all, a set of rules and tools to make a BI solution scalable in terms of the people working on it, while still maintaining a very good quality. All done not by focusing only on the practice but by explaining the theory behind it, to see how it can help *a lot* to build a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if technologies change. This is a very demanding challenge nowadays with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to getting the feedback, since you may have noticed that there’s a lot of “architectural” stuff in IT but really nothing on “engineering”. So how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more “engineering” is something only I feel, or something more broad. I’m very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200 sessions)! Here are the detailed evaluations (rating, followed by the number of responses for each answer):

    - How would you rate the usefulness of the information presented in your day-to-day environment? 4.75 (3: 1, 4: 12, 5: 42)
    - How would you rate the Speaker's presentation skills? 4.80 (3: 1, 4: 9, 5: 45)
    - How would you rate the Speaker's knowledge of the subject? 4.95 (4: 3, 5: 52)
    - How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75 (3: 2, 4: 10, 5: 43)
    - How would you rate the amount of time allocated to cover the topic/session? 4.44 (3: 7, 4: 17, 5: 31)
    - How would you rate the quality of the presentation materials? 4.62 (4: 21, 5: 34)

    The comments were all very positive. Many of them asked for more time on the subject (or to shorten the very last topics). I’ll treasure these comments and will review the content accordingly. We’ll organize a two-day class on this topic, where more examples will be shown and some topics will be explained more deeply.

    I’d just like to answer a comment that asks how much of what I showed is “universally applicable”. I can tell you that all of our BI projects follow these rules and they’ve been applied to different markets (Insurance, Fashion, GDO) with different people and different teams, and they allowed us to be “adaptive” towards the customer. The more the rules are well defined, and the more there are tools that support their implementation, the easier it is to add new people to the project and to add or change solution features.

    Think of a car. How come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades/improvements.

    Last but not least: thanks a lot to everyone for coming!

    Read the article

  • How to begin? Windows 8 Development

    - by Dennis Vroegop
    Ok. I convinced you in my last post to do some Win8 development. You want a piece of that cake, or whatever your reasons may be. Good! Welcome to the club! Now let me ask you a question: what are you going to write? Ah. That’s the big one, isn’t it? What indeed?

    If you have been creating applications for computers before, you’re in for quite a shock. The way people perceive apps on a tablet is quite different from what we know as applications. There’s a reason we call them apps instead of applications! Yes, technically they are applications, but we don’t call them apps only because it sounds cool. The abbreviated form of the word applications itself is a pointer. Apps are small. Apps are focused. Apps are more lightweight. Apps do one thing, but they do that one thing extremely well.

    In the ‘old’ days we wrote huge systems. We built ecosystems of services, screens, databases and more to create a system that provides value for the user. Think about it: what application do you use most at work? Can you, in one sentence, describe what it is or what it does and yet still distinctively describe its purpose? I doubt you can. Let’s have a look at Outlook. We all know it and we all love or hate it. But what is it? A mail program? No, there’s so much more there: calendar, contacts, RSS feeds and so on. Some call it a ‘collaboration’ application, but that’s not really true either. After all, why should a collaboration application give me my schedule for the day? I think the best way to describe Outlook is “client for Exchange”, although that isn’t accurate either. Anyway: Outlook is a great application, but it’s not an ‘app’ and therefore not very suitable for WinRT.

    Ok. Disclaimer here: yes, you can write big applications for WinRT. Some will. But that’s not what 99.9% of the developers will do. So I am stating here that big applications are not meant for WinRT. If 0.01% of the developers think that this is nonsense then they are welcome to go ahead, but for the majority here this is not what we’re talking about. So: apps are small, lightweight and good at what they do, but only at that. If you’re a Phone developer you already know that: Phone apps on any platform fit the description I have above.

    If you’ve ever worked in a large corporation before you might have seen one of these: the Mission Statement. It’s supposed to be a one-liner that sums up what the company is supposed to do. Funny enough, although this doesn’t work for large companies, it does work for defining your app. A mission statement for an app describes what it does. If it doesn’t fit in the mission statement then your app is going to get too big and will fail. A statement like this should be in the following style: “<your app name> is the best app to <describe single task>”. Fill in the blanks, write it and go! Mmm.. not really. There are some things there we need to think about. But the statement is a very, very important one. If you cannot fit your app in that line you’re preparing to fail. Your app will become too big, its purpose will be unclear and it will be hard to use. People won’t download it, and those who do will give it a bad rating, therefore preventing that huge success you’ve been dreaming about. Stick to the statement!

    Ok, let’s give it a try: “PlanesAreCool” is the best app to do planespotting in the field. You might have seen these people along the runways of airports: taking photographs of airplanes and noting down their numbers and arrival and departure times. We are going to help them out with our great app!
If you look at the statement, can you guess what it does? I bet you can. If you find out it isn’t clear enough of if it’s too broad, refine it. This is probably the most important step in the development of your app so give it enough time! So. We’ve got the statement. Print it out, stick it to the wall and look at it. What does it tell you? If you see this, what do you think the app does? Write that down. Sit down with some friends and talk about it. What do they expect from an app like this? Write that down as well. Brainstorm. Make a list of features. This is mine: Note planes Look up aircraft carriers Add pictures of that plane Look up airfields Notify friends of new spots Look up details of a type of plane Plot a graph with arrival and departure times Share new spots on social media Look up history of a particular aircraft Compare your spots with friends Write down arrival times Write down departure times Write down wind conditions Write down the runway they take Look up weather conditions for next spotting day Invite friends to join you for a day of spotting. Now, I must make it clear that I am not a planespotter nor do I know what one does. So if the above list makes no sense, I apologize. There is a lesson: write apps for stuff you know about…. First of all, let’s look at our statement and then go through the list of features. Remove everything that has nothing to do with that statement! If you end up with an empty list, try again with both steps. Note planes Look up aircraft carriers Add pictures of that plane Look up airfields Notify friends of new spots Look up details of a type of plane Plot a graph with arrival and departure times Share new spots on social media Look up history of a particular aircraft Compare your spots with friends Write down arrival times Write down departure times Write down wind conditions Write down the runway they take Look up weather conditions for next spotting day Invite friends to join you for a day of spotting. That's better. The things I removed could be pretty useful to a plane spotter and could be fun to write. But do they match the statement? I said that the app is for spotting in the field, so “look up airfields” doesn’t belong there: I know where I am so why look it up? And the same goes for inviting friends or looking up the weather conditions for tomorrow. I am at the airfield right now, looking through my binoculars at the planes. I know the weather now and I don’t care about tomorrow. If you feel the items you’ve crossed out are valuable, then why not write another app? One that says “SpotNoter” is the best app for preparing a day of spotting with my friends. That’s a different app! Remember: Win8 apps are small and very good at doing ONE thing, and one thing only! If you have made that list, it’s time to prepare the navigation of your app. The navigation is how users see your app and how they use it. We’ll do that next time!

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6.   The alpha 2 package is available for download from NuGet. Since this is a pre-release package make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it: PM> Install-Package EntityFramework -Pre This week’s alpha release includes a bunch of great improvements in the following areas: Async language support is now available for queries and updates when running on .NET 4.5. Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database. Multi-tenant migrations allow the same database to be used by multiple contexts with full Code First Migrations support for independently evolving the model backing each context. Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider resulting greatly improved performance. All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5. Start-up time for many large models has been dramatically improved thanks to improved view generation performance. Below are some additional details about a few of the improvements above: Async Support .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:     public class HomeController : Controller     {         LocationContext db = new LocationContext();           public async Task<ActionResult> Index()         {             var locations = await db.Locations.ToListAsync();               return View(locations);         }     } Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application. Custom Conventions When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”. 
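    As a rough sketch of that “Key” example using the lightweight convention API (this is written against the conventions surface described for EF6, so the exact shape may differ slightly in the alpha builds and should be treated as illustrative rather than definitive):

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Treat any property whose name ends in "Key" as the primary key of its entity.
            modelBuilder.Properties()
                .Where(p => p.Name.EndsWith("Key"))
                .Configure(p => p.IsKey());
        }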
Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method which is overridden on your derived DbContext class:         protected override void OnModelCreating(DbModelBuilder modelBuilder)         {             modelBuilder.Properties<decimal>()                 .Configure(x => x.HasPrecision(5));           } But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom code first conventions is available here. Community Involvement I blogged a while ago about EF being released under an open source license.  Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are: AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use of pre-generated views. UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following: protected override void OnModelCreating(DbModelBuilder modelBuilder) {        modelBuilder.Configurations            .AddFromAssembly(typeof(LocationContext).Assembly); } This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models. Other upcoming features coming in EF 6 Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of of the nice upcoming features is connection resiliency, which will automate the process of retying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First. Summary EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Restructuring a large Chrome Extension/WebApp

    - by A.M.K
    I have a very complex Chrome Extension that has gotten too large to maintain in its current format. I'd like to restructure it, but I'm 15 and this is the first webapp or extension of it's type I've built so I have no idea how to do it. TL;DR: I have a large/complex webapp I'd like to restructure and I don't know how to do it. Should I follow my current restructure plan (below)? Does that sound like a good starting point, or is there a different approach that I'm missing? Should I not do any of the things I listed? While it isn't relevant to the question, the actual code is on Github and the extension is on the webstore. The basic structure is as follows: index.html <html> <head> <link href="css/style.css" rel="stylesheet" /> <!-- This holds the main app styles --> <link href="css/widgets.css" rel="stylesheet" /> <!-- And this one holds widget styles --> </head> <body class="unloaded"> <!-- Low-level base elements are "hardcoded" here, the unloaded class is used for transitions and is removed on load. i.e: --> <div class="tab-container" tabindex="-1"> <!-- Tab nav --> </div> <!-- Templates for all parts of the application and widgets are stored as elements here. I plan on changing these to <script> elements during the restructure since <template>'s need valid HTML. --> <template id="template.toolbar"> <!-- Template content --> </template> <!-- Templates end --> <!-- Plugins --> <script type="text/javascript" src="js/plugins.js"></script> <!-- This contains the code for all widgets, I plan on moving this online and downloading as necessary soon. --> <script type="text/javascript" src="js/widgets.js"></script> <!-- This contains the main application JS. --> <script type="text/javascript" src="js/script.js"></script> </body> </html> widgets.js (initLog || (window.initLog = [])).push([new Date().getTime(), "A log is kept during page load so performance can be analyzed and errors pinpointed"]); // Widgets are stored in an object and extended (with jQuery, but I'll probably switch to underscore if using Backbone) as necessary var Widgets = { 1: { // Widget ID, this is set here so widgets can be retreived by ID id: 1, // Widget ID again, this is used after the widget object is duplicated and detached size: 3, // Default size, medium in this case order: 1, // Order shown in "store" name: "Weather", // Widget name interval: 300000, // Refresh interval nicename: "weather", // HTML and JS safe widget name sizes: ["tiny", "small", "medium"], // Available widget sizes desc: "Short widget description", settings: [ { // Widget setting specifications stored as an array of objects. These are used to dynamically generate widget setting popups. type: "list", nicename: "location", label: "Location(s)", placeholder: "Enter a location and press Enter" } ], config: { // Widget settings as stored in the tabs object (see script.js for storage information) size: "medium", location: ["San Francisco, CA"] }, data: {}, // Cached widget data stored locally, this lets it work offline customFunc: function(cb) {}, // Widgets can optionally define custom functions in any part of their object refresh: function() {}, // This fetches data from the web and caches it locally in data, then calls render. It gets called after the page is loaded for faster loads render: function() {} // This renders the widget only using information from data, it's called on page load. 
} }; script.js (initLog || (window.initLog = [])).push([new Date().getTime(), "These are also at the end of every file"]); // Plugins, extends and globals go here. i.e. Number.prototype.pad = .... var iChrome = function(refresh) { // The main iChrome init, called with refresh when refreshing to not re-run libs iChrome.Status.log("Starting page generation"); // From now on iChrome.Status.log is defined, it's used in place of the initLog iChrome.CSS(); // Dynamically generate CSS based on settings iChrome.Tabs(); // This takes the tabs stored in the storage (see fetching below) and renders all columns and widgets as necessary iChrome.Status.log("Tabs rendered"); // These will be omitted further along in this excerpt, but they're used everywhere // Checks for justInstalled => show getting started are run here /* The main init runs the bare minimum required to display the page, this sets all non-visible or instantly need things (such as widget dragging) on a timeout */ iChrome.deferredTimeout = setTimeout(function() { iChrome.deferred(refresh); // Pass refresh along, see above }, 200); }; iChrome.deferred = function(refresh) {}; // This calls modules one after the next in the appropriate order to finish rendering the page iChrome.Search = function() {}; // Modules have a base init function and are camel-cased and capitalized iChrome.Search.submit = function(val) {}; // Methods within modules are camel-cased and not capitalized /* Extension storage is async and fetched at the beginning of plugins.js, it's then stored in a variable that iChrome.Storage processes. The fetcher checks to see if processStorage is defined, if it is it gets called, otherwise settings are left in iChromeConfig */ var processStorage = function() { iChrome.Storage(function() { iChrome.Templates(); // Templates are read from their elements and held in a cache iChrome(); // Init is called }); }; if (typeof iChromeConfig == "object") { processStorage(); } Objectives of the restructure Memory usage: Chrome apparently has a memory leak in extensions, they're trying to fix it but memory still keeps on getting increased every time the page is loaded. The app also uses a lot on its own. Code readability: At this point I can't follow what's being called in the code. While rewriting the code I plan on properly commenting everything. Module interdependence: Right now modules call each other a lot, AFAIK that's not good at all since any change you make to one module could affect countless others. Fault tolerance: There's very little fault tolerance or error handling right now. If a widget is causing the rest of the page to stop rendering the user should at least be able to remove it. Speed is currently not an issue and I'd like to keep it that way. How I think I should do it The restructure should be done using Backbone.js and events that call modules (i.e. on storage.loaded = init). Modules should each go in their own file, I'm thinking there should be a set of core files that all modules can rely on and call directly and everything else should be event based. Widget structure should be kept largely the same, but maybe they should also be split into their own files. AFAIK you can't load all templates in a folder, therefore they need to stay inline. Grunt should be used to merge all modules, plugins and widgets into one file. Templates should also all be precompiled. Question: Should I follow my current restructure plan? Does that sound like a good starting point, or is there a different approach that I'm missing? 
Should I not do any of the things I listed? Do applications written with Backbone tend to be more intensive (memory and speed) than ones written in Vanilla JS? Also, can I expect to improve this with a proper restructure or is my current code about as good as can be expected?

    Read the article

  • LINQ – SequenceEqual() method

    - by nmarun
    I have been looking at LINQ extension methods and have blogged about what I learned from them in my blog space. Next in line is the SequenceEqual() method. Here’s the description about this method: “Determines whether two sequences are equal by comparing the elements by using the default equality comparer for their type.” Let’s play with some code: 1: int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 2: // int[] numbersCopy = numbers; 3: int[] numbersCopy = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 }; 4:  5: Console.WriteLine(numbers.SequenceEqual(numbersCopy)); This gives an output of ‘True’ – basically compares each of the elements in the two arrays and returns true in this case. The result is same even if you uncomment line 2 and comment line 3 (I didn’t need to say that now did I?). So then what happens for custom types? For this, I created a Product class with the following definition: 1: class Product 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8: } 9:  10: public enum Status 11: { 12: Active = 1, 13: InActive = 2, 14: OffShelf = 3, 15: } In my calling code, I’m just adding a few product items: 1: private static List<Product> GetProducts() 2: { 3: return new List<Product> 4: { 5: new Product 6: { 7: ProductId = 1, 8: Name = "Laptop", 9: Category = "Computer", 10: MfgDate = new DateTime(2003, 4, 3), 11: Status = Status.Active, 12: }, 13: new Product 14: { 15: ProductId = 2, 16: Name = "Compact Disc", 17: Category = "Water Sport", 18: MfgDate = new DateTime(2009, 12, 3), 19: Status = Status.InActive, 20: }, 21: new Product 22: { 23: ProductId = 3, 24: Name = "Floppy", 25: Category = "Computer", 26: MfgDate = new DateTime(1993, 3, 7), 27: Status = Status.OffShelf, 28: }, 29: }; 30: } Now for the actual check: 1: List<Product> products1 = GetProducts(); 2: List<Product> products2 = GetProducts(); 3:  4: Console.WriteLine(products1.SequenceEqual(products2)); This one returns ‘False’ and the reason is simple – this one checks for reference equality and the products in the both the lists get different ‘memory addresses’ (sounds like I’m talking in ‘C’). In order to modify this behavior and return a ‘True’ result, we need to modify the Product class as follows: 1: class Product : IEquatable<Product> 2: { 3: public int ProductId { get; set; } 4: public string Name { get; set; } 5: public string Category { get; set; } 6: public DateTime MfgDate { get; set; } 7: public Status Status { get; set; } 8:  9: public override bool Equals(object obj) 10: { 11: return Equals(obj as Product); 12: } 13:  14: public bool Equals(Product other) 15: { 16: //Check whether the compared object is null. 17: if (ReferenceEquals(other, null)) return false; 18:  19: //Check whether the compared object references the same data. 20: if (ReferenceEquals(this, other)) return true; 21:  22: //Check whether the products' properties are equal. 23: return ProductId.Equals(other.ProductId) 24: && Name.Equals(other.Name) 25: && Category.Equals(other.Category) 26: && MfgDate.Equals(other.MfgDate) 27: && Status.Equals(other.Status); 28: } 29:  30: // If Equals() returns true for a pair of objects 31: // then GetHashCode() must return the same value for these objects. 
32: // read why in the following articles: 33: // http://geekswithblogs.net/akraus1/archive/2010/02/28/138234.aspx 34: // http://stackoverflow.com/questions/371328/why-is-it-important-to-override-gethashcode-when-equals-method-is-overriden-in-c 35: public override int GetHashCode() 36: { 37: //Get hash code for the ProductId field. 38: int hashProductId = ProductId.GetHashCode(); 39:  40: //Get hash code for the Name field if it is not null. 41: int hashName = Name == null ? 0 : Name.GetHashCode(); 42:  43: //Get hash code for the ProductId field. 44: int hashCategory = Category.GetHashCode(); 45:  46: //Get hash code for the ProductId field. 47: int hashMfgDate = MfgDate.GetHashCode(); 48:  49: //Get hash code for the ProductId field. 50: int hashStatus = Status.GetHashCode(); 51: //Calculate the hash code for the product. 52: return hashProductId ^ hashName ^ hashCategory & hashMfgDate & hashStatus; 53: } 54:  55: public static bool operator ==(Product a, Product b) 56: { 57: // Enable a == b for null references to return the right value 58: if (ReferenceEquals(a, b)) 59: { 60: return true; 61: } 62: // If one is null and the other not. Remember a==null will lead to Stackoverflow! 63: if (ReferenceEquals(a, null)) 64: { 65: return false; 66: } 67: return a.Equals((object)b); 68: } 69:  70: public static bool operator !=(Product a, Product b) 71: { 72: return !(a == b); 73: } 74: } Now THAT kinda looks overwhelming. But lets take one simple step at a time. Ok first thing you’ve noticed is that the class implements IEquatable<Product> interface – the key step towards achieving our goal. This interface provides us with an ‘Equals’ method to perform the test for equality with another Product object, in this case. This method is called in the following situations: when you do a ProductInstance.Equals(AnotherProductInstance) and when you perform actions like Contains<T>, IndexOf() or Remove() on your collection Coming to the Equals method defined line 14 onwards. The two ‘if’ blocks check for null and referential equality using the ReferenceEquals() method defined in the Object class. Line 23 is where I’m doing the actual check on the properties of the Product instances. This is what returns the ‘True’ for us when we run the application. I have also overridden the Object.Equals() method which calls the Equals() method of the interface. One thing to remember is that anytime you override the Equals() method, its’ a good practice to override the GetHashCode() method and overload the ‘==’ and the ‘!=’ operators. For detailed information on this, please read this and this. Since we’ve overloaded the operators as well, we get ‘True’ when we do actions like: 1: Console.WriteLine(products1.Contains(products2[0])); 2: Console.WriteLine(products1[0] == products2[0]); This completes the full circle on the SequenceEqual() method. See the code used in the article here.
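One related point the article does not cover: if you cannot, or prefer not to, change the Product class itself, SequenceEqual() also has an overload that accepts an IEqualityComparer<T>, so the comparison logic can live in a separate class. A minimal sketch, not from the article - the comparer below only looks at ProductId and is purely illustrative:

class ProductIdComparer : IEqualityComparer<Product>
{
    public bool Equals(Product x, Product y)
    {
        // Same reference (or both null) means equal.
        if (ReferenceEquals(x, y)) return true;
        // Exactly one of the two is null.
        if (ReferenceEquals(x, null) || ReferenceEquals(y, null)) return false;
        // For this illustration, only the ProductId is compared.
        return x.ProductId == y.ProductId;
    }

    public int GetHashCode(Product product)
    {
        return ReferenceEquals(product, null) ? 0 : product.ProductId.GetHashCode();
    }
}

With that in place (using the same System.Linq and System.Collections.Generic namespaces the article already relies on), the call becomes:

Console.WriteLine(products1.SequenceEqual(products2, new ProductIdComparer())); // True as long as both lists hold products with the same ids in the same order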

    Read the article

  • Rendering Flickr Cats Via Backbone.js

    - by Geertjan
    Create a JavaScript file and refer to it inside an HTML file. Then put this into the JavaScript file: (function($) {     var CatCollection = Backbone.Collection.extend({         url: 'http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?',         parse: function(response) {             return response.items;         }     });     var CatView = Backbone.View.extend({         el: $('body'),         initialize: function() {             _.bindAll(this, 'render');             carCollectionInstance.fetch({                 success: function(response, xhr) {                     catView.render();                 }             });         },         render: function() {             $(this.el).append("<ul></ul>");             for (var i = 0; i < carCollectionInstance.length; i++) {                 $('ul', this.el).append("<li>" + i + carCollectionInstance.models[i].get("description") + "</li>");             }         }     });     var carCollectionInstance = new CatCollection();     var catView = new CatView(); })(jQuery); Apologies for any errors or misused idioms. It's my second day with Backbone.js, in fact, my second day with JavaScript. I haven't seen anywhere online so far where an example such as the above is found, though plenty that do kind of or pieces of the above, or explain in text, without an actual full example. The next step, and the only reason for the above experiment, is to create some JPA entities and expose them via RESTful webservices created on EJB methods, for consumption into an HTML5 application via a Backbone.js script very similar to the above. 

    Read the article

  • How to manually add a database instance in 11.2 RAC

    - by JaneZhang(???)
       Normally an instance is added to a RAC database with dbca, but there are situations in which dbca cannot be used, so this note walks through adding an instance to an 11.2 RAC database manually; the manual steps also make it clearer what dbca does behind the scenes. In this example the new instance RACDB2 of database RACDB is added; the existing instance RACDB1 runs on node rac1 and the new instance will run on node rac2. In 11.2 the Grid Infrastructure (GI) is owned by the grid user and the database software by the oracle user; unless stated otherwise, the steps below are performed as the oracle user.
1. On the new node, create the dump directories needed by the new instance, i.e. the directories pointed to by audit_file_dest, background_dump_dest, user_dump_dest and core_dump_dest (for example audit_file_dest=/u01/app/oracle/admin/RACDB/adump). If these directories are missing, instance startup fails with errors such as:
ORA-09925: Unable to create audit trail file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 9925
2. From the existing instance, set the instance-specific initialization parameters for the new instance:
SQL> alter system set instance_number=2 scope=spfile sid='RACDB2';
SQL> alter system set thread=2 scope=spfile sid='RACDB2';
SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='RACDB2';
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.0.2.122)(PORT=1521))))' sid='RACDB2'; <=====192.0.2.122 is the VIP of node 2
3. Create the new instance's DB $ORACLE_HOME/dbs/init<sid>.ora from the existing instance's DB $ORACLE_HOME/dbs/init<sid>.ora; the init<sid>.ora only needs to point to the spfile:
=======================
SPFILE='+DATA/racdb/spfileracdb.ora'
For example:
[oracle@rac1 ~]$ scp $ORACLE_HOME/dbs/initRACDB1.ora rac2:$ORACLE_HOME/dbs/initRACDB2.ora <===copy to node 2
4. On the new node, add an entry for the new instance to /etc/oratab:
RACDB2:/u01/app/oracle/product/11.2.0/dbhome_1:N
5. Copy the password file: copy the existing instance's DB $ORACLE_HOME/dbs/orapw<sid> to the new instance's DB $ORACLE_HOME/dbs/orapw<sid> on the new node:
[oracle@rac1 dbs]$ scp $ORACLE_HOME/dbs/orapwRACDB1 rac2:$ORACLE_HOME/dbs/orapwRACDB2 <==copy to node 2
6. From the existing instance, create the UNDO tablespace for the new instance (dbca would create it automatically; here it has to be created manually). For example:
SQL>CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '/dev/….' SIZE 4096M ;
If ASM is used:
SQL>CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE '+DATA' SIZE 4096M ;
7. From the existing instance, add the redo thread and redo log groups for the new instance and enable the thread. For example:
SQL> alter database add logfile thread 2
     group 3 ('/dev/...', '/dev/...') size 1024M,
     group 4 ('/dev/...','/dev/...') size 1024M;
If ASM is used:
SQL> alter database add logfile thread 2
     group 3 ('+DATA','+RECO') size 1024M,
     group 4 ('+DATA','+RECO') size 1024M;
SQL> alter database enable thread 2; <==enable the new thread
8. On the new node, start the new instance:
[oracle@rac2 admin]$su - oracle
[oracle@rac2 admin]$export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@rac2 admin]$export ORACLE_SID=RACDB2
[oracle@rac2 admin]$ sqlplus / as sysdba
SQL> startup <==if this succeeds, the instance has been added on node 2; next register it with the clusterware
9. Register the new instance in the OCR so that GI manages it:
$srvctl add instance -d <database name> -i <new instance name> -n <new node name>
Example of srvctl add instance command:
============================
[oracle@rac2 ~]$ srvctl add instance -d racdb -i RACDB2 -n rac2  <==after adding the instance, it can be checked with ps -ef|grep smon
[oracle@rac2 dbs]$ ps -ef|grep smon
root      3453     1  1 Jun12 ?        04:03:05 /u01/app/11.2.0/grid/bin/osysmond.bin
grid      3727     1  0 Jun12 ?        00:00:19 asm_smon_+ASM2
oracle    5343  4543  0 14:06 pts/1    00:00:00 grep smon
oracle   28736     1  0 Jun25 ?        00:00:03 ora_smon_RACDB2 <========the new instance
10. Check the resource status:
$su - grid
[grid@rac2 ~]$ crsctl stat res -t
...
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        OFFLINE OFFLINE             rac2
As shown above, the new instance is OFFLINE from the clusterware's point of view because it was started with sqlplus rather than through the clusterware; shut it down with sqlplus and start it again with srvctl:
[grid@rac2 ~]$ su  - oracle
Password:
[oracle@rac2 ~]$ sqlplus / as sysdba
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
[oracle@rac2 ~]$ srvctl start instance -d racdb -i RACDB2
[oracle@rac2 ~]$ su - grid
Password:
[grid@rac2 ~]$ crsctl stat res -t
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open
      2        ONLINE  ONLINE       rac2                     Open
11. Check the attributes of the database resource:
[oracle@rac2 ~]$ crsctl stat res ora.racdb.db -p
NAME=ora.racdb.db
TYPE=ora.database.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=
ACTIVE_PLACEMENT=1
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
AUTO_START=restore
CARDINALITY=2
CHECK_INTERVAL=1
CHECK_TIMEOUT=30
CLUSTER_DATABASE=true
DATABASE_TYPE=RAC
DB_UNIQUE_NAME=RACDB
DEFAULT_TEMPLATE=PROPERTY(RESOURCE_CLASS=database) PROPERTY(DB_UNIQUE_NAME= CONCAT(PARSE(%NAME%, ., 2), %USR_ORA_DOMAIN%, .)) ELEMENT(INSTANCE_NAME= %GEN_USR_ORA_INST_NAME%) ELEMENT(DATABASE_TYPE= %DATABASE_TYPE%)
DEGREE=1
DESCRIPTION=Oracle Database resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=60
FAILURE_THRESHOLD=1
GEN_AUDIT_FILE_DEST=/u01/app/oracle/admin/RACDB/adump
GEN_START_OPTIONS=
GEN_START_OPTIONS@SERVERNAME(rac1)=open
GEN_START_OPTIONS@SERVERNAME(rac2)=open
GEN_USR_ORA_INST_NAME=
GEN_USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
HOSTING_MEMBERS=
INSTANCE_FAILOVER=0
LOAD=1
LOGGING_LEVEL=1
MANAGEMENT_POLICY=AUTOMATIC
NLS_LANG=
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
ONLINE_RELOCATION_TIMEOUT=0
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
ORACLE_HOME_OLD=
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
ROLE=PRIMARY
SCRIPT_TIMEOUT=60
SERVER_POOLS=ora.RACDB
SPFILE=+DATA/RACDB/spfileRACDB.ora
START_DEPENDENCIES=hard(ora.DATA.dg,ora.RECO.dg) weak(type:ora.listener.type,global:type:ora.scan_listener.type,uniform:ora.ons,global:ora.gns) pullup(ora.DATA.dg,ora.RECO.dg)
START_TIMEOUT=600
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=hard(intermediate:ora.asm,shutdown:ora.DATA.dg,shutdown:ora.RECO.dg)
STOP_TIMEOUT=600
TYPE_VERSION=3.2
UPTIME_THRESHOLD=1h
USR_ORA_DB_NAME=RACDB
USR_ORA_DOMAIN=
USR_ORA_ENV=
USR_ORA_FLAGS=
USR_ORA_INST_NAME=
USR_ORA_INST_NAME@SERVERNAME(rac1)=RACDB1
USR_ORA_INST_NAME@SERVERNAME(rac2)=RACDB2
USR_ORA_OPEN_MODE=open
USR_ORA_OPI=false
USR_ORA_STOP_MODE=immediate
VERSION=11.2.0.3.0
In 11.2 the database resource registered in the OCR carries the start and stop dependencies shown above, so when the database is started through the clusterware the ASM diskgroups it depends on are brought online automatically.
Note: for comparison, the dbca way of adding an instance is roughly: on an existing node, as the oracle user, run dbca (su - oracle, then dbca), choose RAC database, then Instance Management, then add an instance, select the active RAC database, and confirm the undo and redo settings for the new instance.

    Read the article

  • Making Oracle SELECT queries skip UNDO

    - by Liu Maclean(???)
    ????????Oracle?????(dirty read),?Oracle??????Asktom????????Oracle???????, ???undo??????????(before image)??????Consistent, ???????????????Oracle????????????? ????????? ??,??,Oracle?????????????RDBMS,???????????? ?????????2?????: _offline_rollback_segments or _corrupted_rollback_segments ?2?????????Oracle???????????ORA-600[4XXX]???????????????,???2??????Undo??Corruption????????????,?????2????????????????? ??????????????_offline_rollback_segments ? _corrupted_rollback_segments ?2?????: ???????(FORCE OPEN DATABASE) ????????????(consistent read & delayed block cleanout) ??????rollback segment??? ?????:???????Oracle????????,??????????2?????,?????????????!! _offline_rollback_segments ? _corrupted_rollback_segments ???????????: ??2???????Undo Segments(???/???)????????online ?UNDO$???????????OFFLINE??? ???instance??????????????????? ??????Undo Segments????????active transaction????????????dead??SMON???(????????SMON??(?):Recover Dead transaction) _OFFLINE_ROLLBACK_SEGMENTS(offline undo segment list)????(hidden parameter)?????: ???startup???open database???????_OFFLINE_ROLLBACK_SEGMENTS????Undo segments(???/???),?????undo segments????????alert.log???TRACE?????,???????startup?? ?????????????,?ITL?????undo segments?: ???undo segments?transaction table?????????????????? ???????????commit,?????CR??? ????undo segments????(???corrupted??,???missed??)???????????alert.log,??????? ?DML?????????????????????????????????CPU,????????????????????? _CORRUPTED_ROLLBACK_SEGMENTS(corrupted undo segment list)??????????: ?????startup?open database???_CORRUPTED_ROLLBACK_SEGMENTS????undo segments(???/???)???????? ???????_CORRUPTED_ROLLBACK_SEGMENTS???undo segments????????????commit,???undo segments???drop??? ??????????? ??????????????????,?????????????????? ??bootstrap???????????,?????????ORA-00704: bootstrap process failure??,???????????(???Oracle????:??ORA-00600:[4000] ORA-00704: bootstrap process failure????) ??????_CORRUPTED_ROLLBACK_SEGMENTS????????????????????,??????????????? Oracle???????TXChecker??????????? ???????2?????,??????????????_CORRUPTED_ROLLBACK_SEGMENTS?????SELECT????UNDO???????: SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile; System altered. SQL> alter system set "_in_memory_undo"=false scope=spfile; System altered. 10513 level 2 event????SMON ??rollback ??? dead transaction _in_memory_undo ?? in memory undo ?? SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. session A: SQL> conn maclean/maclean Connected. SQL> create table maclean tablespace users as select 1 t1 from dual connect by level exec dbms_stats.gather_table_stats('','MACLEAN'); PL/SQL procedure successfully completed. 
SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 1 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processe ???????????,????current block, ????????,consistent gets??3? SQL> update maclean set t1=0; 501 rows updated. SQL> alter system checkpoint; System altered. ??session A?commit; ???? session: SQL> conn maclean/maclean Connected. SQL> SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 505 consistent gets 0 physical reads 108 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed ?????? ?????????undo??CR?,???consistent gets??? 505 [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep oracle 5841 5839 0 09:17 ? 00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) [oracle@vrh8 ~]$ kill -9 5841 ??session A???Server Process????,???dead transaction ????smon?? select ktuxeusn, to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time", ktuxesiz, ktuxesta from x$ktuxe where ktuxecfl = 'DEAD'; KTUXEUSN Time KTUXESIZ KTUXESTA ---------- -------------------- ---------- ---------------- 2 06-AUG-2012 09:20:45 7 ACTIVE ???1?active rollback segment SQL> conn maclean/maclean Connected. 
SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 501 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 411 consistent gets 0 physical reads 108 redo size 515 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed ????? ????kill?? ???smon ??dead transaction , ???????????? ?????undo??????? ????active?rollback segment??? SQL> select segment_name from dba_rollback_segs where segment_id=2; SEGMENT_NAME ------------------------------ _SYSSMU2$ SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile; System altered. ? _corrupted_rollback_segments ?? ???2?rollback segment, ????????undo SQL> startup force; ORACLE instance started. Total System Global Area 3140026368 bytes Fixed Size 2232472 bytes Variable Size 1795166056 bytes Database Buffers 1325400064 bytes Redo Buffers 17227776 bytes Database mounted. Database opened. SQL> conn maclean/maclean Connected. SQL> set autotrace on; SQL> select sum(t1) from maclean; SUM(T1) ---------- 94 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 228 recursive calls 0 db block gets 29 consistent gets 5 physical reads 116 redo size 514 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 4 sorts (memory) 0 sorts (disk) 1 rows processed SQL> / SUM(T1) ---------- 94 Execution Plan ---------------------------------------------------------- Plan hash value: 1679547536 ------------------------------------------------------------------------------ | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------ | 0 | SELECT STATEMENT | | 1 | 3 | 3 (0)| 00:00:01 | | 1 | SORT AGGREGATE | | 1 | 3 | | | | 2 | TABLE ACCESS FULL| MACLEAN | 501 | 1503 | 3 (0)| 00:00:01 | ------------------------------------------------------------------------------ Statistics ---------------------------------------------------------- 0 recursive calls 0 db block gets 3 consistent gets 0 physical reads 0 redo size 514 bytes sent via SQL*Net to client 492 bytes received via SQL*Net from client 2 SQL*Net roundtrips to/from client 0 sorts (memory) 0 sorts (disk) 1 rows processed ?????? 
consistent gets is back down to 3: the query no longer builds consistent-read copies of the blocks, because the undo segment recorded in the blocks' ITL entries is listed in _corrupted_rollback_segments, so the uncommitted transaction is simply treated as committed and no UNDO is applied. The query therefore returns data that was never committed - effectively a dirty read - which is exactly why these parameters belong to extreme recovery scenarios only and must never be set casually. That is all - thanks for reading.

    Read the article

  • Apache Tomcat Ant undeploy task error using

    - by Devil Jin
    I am using ant 1.7 to deploy and undeploy applications in tomcat //Snippet from my build.xml <target name="deploy" depends="war" description="Install application to the servlet containor"> <deploy url="${tomcat.manager.url}" username="${manager.user}" password="${manager.passwd}" path="/${tomcat.ctxpath}" war="${war.local}" /> </target> <target name="undeploy" description="Removes Web Application from path"> <undeploy url="${tomcat.manager.url}" username="${manager.user}" password="${manager.passwd}" path="/${tomcat.ctxpath}" /> </target> The deploy task works perfectly fine but the undeploy task gives an html output for the undeploy task prefixed with [undeploy] although the application is undeployed successfully The html message also contains the success message 'OK - Undeployed application at context path /MyApplication' OUTPUT: [undeploy] <html> [undeploy] <head> [undeploy] <style> [undeploy] H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tah oma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:whit e;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background :white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;} table { [undeploy] width: 100%; [undeploy] } [undeploy] td.page-title { [undeploy] text-align: center; [undeploy] vertical-align: top; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] font-weight: bold; [undeploy] background: white; [undeploy] color: black; [undeploy] } [undeploy] td.title { [undeploy] text-align: left; [undeploy] vertical-align: top; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] font-style:italic; [undeploy] font-weight: bold; [undeploy] background: #D2A41C; [undeploy] } [undeploy] td.header-left { [undeploy] text-align: left; [undeploy] vertical-align: top; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] font-weight: bold; [undeploy] background: #FFDC75; [undeploy] } [undeploy] td.header-center { [undeploy] text-align: center; [undeploy] vertical-align: top; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] font-weight: bold; [undeploy] background: #FFDC75; [undeploy] } [undeploy] td.row-left { [undeploy] text-align: left; [undeploy] vertical-align: middle; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] color: black; [undeploy] } [undeploy] td.row-center { [undeploy] text-align: center; [undeploy] vertical-align: middle; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] color: black; [undeploy] } [undeploy] td.row-right { [undeploy] text-align: right; [undeploy] vertical-align: middle; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] color: black; [undeploy] } [undeploy] TH { [undeploy] text-align: center; [undeploy] vertical-align: top; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] font-weight: bold; [undeploy] background: #FFDC75; [undeploy] } [undeploy] TD { [undeploy] text-align: center; [undeploy] vertical-align: middle; [undeploy] font-family:sans-serif,Tahoma,Arial; [undeploy] color: black; [undeploy] } [undeploy] </style> [undeploy] <title>/manager</title> [undeploy] </head> [undeploy] <body bgcolor="#FFFFFF"> [undeploy] <table cellspacing="4" width="100%" border="0"> [undeploy] <tr> [undeploy] <td 
colspan="2"> [undeploy] <a href="http://www.apache.org/"> [undeploy] <img border="0" alt="The Apache Software Foundation" align="left" [undeploy] src="/manager/images/asf-logo.gif"> [undeploy] </a> [undeploy] <a href="http://tomcat.apache.org/"> [undeploy] <img border="0" alt="The Tomcat Servlet/JSP Container" [undeploy] align="right" src="/manager/images/tomcat.gif"> [undeploy] </a> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] <hr size="1" noshade="noshade"> [undeploy] <table cellspacing="4" width="100%" border="0"> [undeploy] <tr> [undeploy] <td class="page-title" bordercolor="#000000" align="left" nowrap> [undeploy] <font size="+2">Tomcat Web Application Manager</font> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] <br> [undeploy] <table border="1" cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td class="row-left" width="10%"><small><strong>Message:</strong></small>&nbsp;</td> [undeploy] <td class="row-left"><pre>OK - Undeployed application at context path /MyApplication [undeploy] </pre></td> [undeploy] </tr> [undeploy] </table> [undeploy] <br> [undeploy] <table border="1" cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td colspan="4" class="title">Manager</td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-left"><a href="/manager/html/list">List Applications</a></td> [undeploy] <td class="row-center"><a href="/manager/../docs/html-manager-howto.html">HTML Manager Help</a></td> [undeploy] <td class="row-center"><a href="/manager/../docs/manager-howto.html">Manager Help</a></td> [undeploy] <td class="row-right"><a href="/manager/status">Server Status</a></td> [undeploy] </tr> [undeploy] </table> [undeploy] <br> [undeploy] <table border="1" cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td colspan="5" class="title">Applications</td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="header-left"><small>Path</small></td> [undeploy] <td class="header-left"><small>Display Name</small></td> [undeploy] <td class="header-center"><small>Running</small></td> [undeploy] <td class="header-center"><small>Sessions</small></td> [undeploy] <td class="header-left"><small>Commands</small></td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-left" bgcolor="#FFFFFF" rowspan="2"><small><a href="/">/</a></small></td> [undeploy] <td class="row-left" bgcolor="#FFFFFF" rowspan="2"><small>Welcome to Tomcat</small></td> [undeploy] <td class="row-center" bgcolor="#FFFFFF" rowspan="2"><small>true</small></td> [undeploy] <td class="row-center" bgcolor="#FFFFFF" rowspan="2"><small><a href="/manager/html/sessions?path=/" target="_bla nk">0</a></small></td> [undeploy] <td class="row-left" bgcolor="#FFFFFF"> [undeploy] <small> [undeploy] &nbsp;Start&nbsp; [undeploy] &nbsp;<a href="/manager/html/stop?path=/" onclick="return(confirm('Are you sure?'))">Stop</a>&nbsp; [undeploy] &nbsp;<a href="/manager/html/reload?path=/" onclick="return(confirm('Are you sure?'))">Reload</a>&nbsp; [undeploy] &nbsp;<a href="/manager/html/undeploy?path=/" onclick="return(confirm('Are you sure?'))">Undeploy</a>&nbsp; [undeploy] </small> [undeploy] </td> [undeploy] </tr><tr> [undeploy] <td class="row-left" bgcolor="#FFFFFF"> [undeploy] <form method="POST" action="/manager/html/expire?path=/"> [undeploy] <small> [undeploy] &nbsp;<input type="submit" value="Expire sessions">&nbsp;with idle &ge;&nbsp;<input type="text" name="idle" siz e="5" value="30">&nbsp;minutes&nbsp; [undeploy] </small> [undeploy] </form> [undeploy] </td> [undeploy] </tr> 
[undeploy] <tr> [undeploy] <td class="row-left" bgcolor="#C3F3C3" rowspan="2"><small><a href="/docs">/docs</a></small></td> [undeploy] <td class="row-left" bgcolor="#C3F3C3" rowspan="2"><small>Tomcat Documentation</small></td> [undeploy] <td class="row-center" bgcolor="#C3F3C3" rowspan="2"><small>true</small></td> [undeploy] <td class="row-center" bgcolor="#C3F3C3" rowspan="2"><small><a href="/manager/html/sessions?path=/docs" target=" _blank">0</a></small></td> [undeploy] <td class="row-left" bgcolor="#C3F3C3"> [undeploy] <small> [undeploy] &nbsp;Start&nbsp; [undeploy] &nbsp;<a href="/manager/html/stop?path=/docs" onclick="return(confirm('Are you sure?'))">Stop</a>&nbsp; [undeploy] &nbsp;<a href="/manager/html/reload?path=/docs" onclick="return(confirm('Are you sure?'))">Reload</a>&nbsp; [undeploy] &nbsp;<a href="/manager/html/undeploy?path=/docs" onclick="return(confirm('Are you sure?'))">Undeploy</a>&nbsp; [undeploy] </small> [undeploy] </td> [undeploy] </tr><tr> [undeploy] <td class="row-left" bgcolor="#C3F3C3"> [undeploy] <form method="POST" action="/manager/html/expire?path=/docs"> [undeploy] <small> [undeploy] &nbsp;<input type="submit" value="Expire sessions">&nbsp;with idle &ge;&nbsp;<input type="text" name="idle" siz e="5" value="30">&nbsp;minutes&nbsp; [undeploy] </small> [undeploy] </form> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-left" bgcolor="#FFFFFF" rowspan="2"><small><a href="/examples">/examples</a></small></td> [undeploy] <td class="row-left" bgcolor="#FFFFFF" rowspan="2"><small>Servlet and JSP Examples</small></td> [undeploy] <td class="row-center" bgcolor="#FFFFFF" rowspan="2"><small>true</small></td> [undeploy] <td class="row-center" bgcolor="#FFFFFF" rowspan="2"><small><a href="/manager/html/sessions?path=/examples" targ et="_blank">0</a></small></td> [undeploy] <td class="row-left" bgcolor="#FFFFFF"> [undeploy] <small> [undeploy] &nbsp;Start&nbsp; [undeploy] &nbsp;<a href="/manager/html/stop?path=/examples" onclick="return(confirm('Are you sure?'))">Stop</a>&nbsp; [undeploy] &nbsp;<a href="/manager/html/reload?path=/examples" onclick="return(confirm('Are you sure?'))">Reload</a>&nbsp; [undeploy] &nbsp;<a href="/manager/html/undeploy?path=/examples" onclick="return(confirm('Are you sure?'))">Undeploy</a>&n bsp; [undeploy] </small> [undeploy] </td> [undeploy] </tr><tr> [undeploy] <td class="row-left" bgcolor="#FFFFFF"> [undeploy] <form method="POST" action="/manager/html/expire?path=/examples"> [undeploy] <small> [undeploy] &nbsp;<input type="submit" value="Expire sessions">&nbsp;with idle &ge;&nbsp;<input type="text" name="idle" siz e="5" value="30">&nbsp;minutes&nbsp; [undeploy] </small> [undeploy] </form> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-left" bgcolor="#C3F3C3" rowspan="2"><small><a href="/host%2Dmanager">/host-manager</a></small></t d> [undeploy] <td class="row-left" bgcolor="#C3F3C3" rowspan="2"><small>Tomcat Manager Application</small></td> [undeploy] <td class="row-center" bgcolor="#C3F3C3" rowspan="2"><small>true</small></td> [undeploy] <td class="row-center" bgcolor="#C3F3C3" rowspan="2"><small><a href="/manager/html/sessions?path=/host%2Dmanager " target="_blank">0</a></small></td> [undeploy] <td class="row-left" bgcolor="#C3F3C3"> [undeploy] <small> [undeploy] &nbsp;Start&nbsp; [undeploy] &nbsp;<a href="/manager/html/stop?path=/host%2Dmanager" onclick="return(confirm('Are you sure?'))">Stop</a>&nbs p; [undeploy] &nbsp;<a href="/manager/html/reload?path=/host%2Dmanager" 
onclick="return(confirm('Are you sure?'))">Reload</a> &nbsp; [undeploy] &nbsp;<a href="/manager/html/undeploy?path=/host%2Dmanager" onclick="return(confirm('Are you sure?'))">Undeploy </a>&nbsp; [undeploy] </small> [undeploy] </td> [undeploy] </tr><tr> [undeploy] <td class="row-left" bgcolor="#C3F3C3"> [undeploy] <form method="POST" action="/manager/html/expire?path=/host%2Dmanager"> [undeploy] <small> [undeploy] &nbsp;<input type="submit" value="Expire sessions">&nbsp;with idle &ge;&nbsp;<input type="text" name="idle" siz e="5" value="30">&nbsp;minutes&nbsp; [undeploy] </small> [undeploy] </form> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-left" bgcolor="#FFFFFF" rowspan="2"><small><a href="/manager">/manager</a></small></td> [undeploy] <td class="row-left" bgcolor="#FFFFFF" rowspan="2"><small>Tomcat Manager Application</small></td> [undeploy] <td class="row-center" bgcolor="#FFFFFF" rowspan="2"><small>true</small></td> [undeploy] <td class="row-center" bgcolor="#FFFFFF" rowspan="2"><small><a href="/manager/html/sessions?path=/manager" targe t="_blank">3</a></small></td> [undeploy] <td class="row-left" bgcolor="#FFFFFF"> [undeploy] <small> [undeploy] &nbsp;Start&nbsp; [undeploy] &nbsp;Stop&nbsp; [undeploy] &nbsp;Reload&nbsp; [undeploy] &nbsp;Undeploy&nbsp; [undeploy] </small> [undeploy] </td> [undeploy] </tr><tr> [undeploy] <td class="row-left" bgcolor="#FFFFFF"> [undeploy] <form method="POST" action="/manager/html/expire?path=/manager"> [undeploy] <small> [undeploy] &nbsp;<input type="submit" value="Expire sessions">&nbsp;with idle &ge;&nbsp;<input type="text" name="idle" siz e="5" value="30">&nbsp;minutes&nbsp; [undeploy] </small> [undeploy] </form> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] <br> [undeploy] <table border="1" cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td colspan="2" class="title">Deploy</td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td colspan="2" class="header-left"><small>Deploy directory or WAR file located on server</small></td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td colspan="2"> [undeploy] <form method="get" action="/manager/html/deploy"> [undeploy] <table cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td class="row-right"> [undeploy] <small>Context Path (required):</small> [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <input type="text" name="deployPath" size="20"> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-right"> [undeploy] <small>XML Configuration file URL:</small> [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <input type="text" name="deployConfig" size="20"> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-right"> [undeploy] <small>WAR or Directory URL:</small> [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <input type="text" name="deployWar" size="40"> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-right"> [undeploy] &nbsp; [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <input type="submit" value="Deploy"> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] </form> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td colspan="2" class="header-left"><small>WAR file to deploy</small></td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td colspan="2"> [undeploy] <form action="/manager/html/upload" method="post" enctype="multipart/form-data"> [undeploy] <table cellspacing="0" cellpadding="3"> 
[undeploy] <tr> [undeploy] <td class="row-right"> [undeploy] <small>Select WAR file to upload</small> [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <input type="file" name="deployWar" size="40"> [undeploy] </td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-right"> [undeploy] &nbsp; [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <input type="submit" value="Deploy"> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] </form> [undeploy] </table> [undeploy] <br> [undeploy] <table border="1" cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td colspan="2" class="title">Diagnostics</td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td colspan="2" class="header-left"><small>Check to see if a web application has caused a memory leak on stop, r eload or undeploy</small></td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td colspan="2"> [undeploy] <form method="post" action="/manager/html/findleaks"> [undeploy] <table cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td class="row-left"> [undeploy] <input type="submit" value="Find leaks"> [undeploy] </td> [undeploy] <td class="row-left"> [undeploy] <small>This diagnostic check will trigger a full garbage collection. Use it with extreme caution on production systems.</small> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] </form> [undeploy] </td> [undeploy] </tr> [undeploy] </table> [undeploy] <br><table border="1" cellspacing="0" cellpadding="3"> [undeploy] <tr> [undeploy] <td colspan="6" class="title">Server Information</td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="header-center"><small>Tomcat Version</small></td> [undeploy] <td class="header-center"><small>JVM Version</small></td> [undeploy] <td class="header-center"><small>JVM Vendor</small></td> [undeploy] <td class="header-center"><small>OS Name</small></td> [undeploy] <td class="header-center"><small>OS Version</small></td> [undeploy] <td class="header-center"><small>OS Architecture</small></td> [undeploy] </tr> [undeploy] <tr> [undeploy] <td class="row-center"><small>Apache Tomcat/6.0.26</small></td> [undeploy] <td class="row-center"><small>1.5.0_09-b01</small></td> [undeploy] <td class="row-center"><small>Sun Microsystems Inc.</small></td> [undeploy] <td class="row-center"><small>Windows XP</small></td> [undeploy] <td class="row-center"><small>5.1</small></td> [undeploy] <td class="row-center"><small>x86</small></td> [undeploy] </tr> [undeploy] </table> [undeploy] <br> [undeploy] <hr size="1" noshade="noshade"> [undeploy] <center><font size="-1" color="#525D76"> [undeploy] <em>Copyright &copy; 1999-2010, Apache Software Foundation</em></font></center> [undeploy] </body> [undeploy] </html>

    Read the article

< Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >