Search Results

Search found 6342 results on 254 pages for 'behavior'.


  • How to Deal with a Difficult Boss?

    - by Anonymous
    I have some problems with my boss; it's quite a long story. :)

    About one year ago, I was working as the team leader of project X. Everything worked fine until one of my staff complained to my boss that I had problems with ALL the members of our team; he also told the other staff that I had reported them for poor performance. My boss called me in and blamed me without asking a single question. I tried to explain everything to her, but she wouldn't listen.

    One month later we had a meeting, team leaders only, and my boss discussed the problem with the other team leaders. Two of them had worked with this guy before, and they both said, "This guy cannot be trusted"; he had done the same thing to his former team leader. Finally everything was cleared up, and I think I gained some trust from her. I can say I was the best team leader she had, since mine was the only project that achieved more than 120% profit.

    Then I moved to a new, bigger project, and I managed it quite well. But I ran into a problem again. One of my staff kept leaving and not following company rules. I called him in and told him he couldn't do this because it isn't allowed in our company. He also altered his own working-time records, so I called him in to warn him again. This time he asked me to move him to another project, so I went to talk to my boss. She came to my building while I wasn't there (other staff called to tell me) and talked with that guy (the one who has the problem with me); I think she still doesn't trust me. And AGAIN, she believed what he said and I got blamed.

    I want to know how I can deal with this kind of boss. Is it better to find a new job, or do you have any other suggestions about this problem? Thank you :)

    Additional information: even though my job title is "Team Leader", it is my responsibility to manage staff working time and behavior; this responsibility is set by my company's rules.

    Read the article

  • Suggestions for connecting .NET WPF GUI with Java SE Server

    - by Sam Goldberg
    BACKGROUND: We are building a Java (SE) trading application that will monitor market data and send trade messages based on the market data and on user-defined configuration parameters. We plan to provide the user with a thin client, built in .NET (WPF), for managing the parameters, controlling the server behavior, and viewing the current state of the trading. The client doesn't need real-time updates; it will instead refresh the view once every few seconds (or whatever interval is configured by the user). The client has about six different operations it needs to perform with the server, for example:

    - CRUD on configuration parameters
    - querying a subset of the data
    - receiving updates of current positions from the server

    It is possible that most of the different operations (except for receiving data) are just different flavors of managing the configuration parameters, but it's too early in our analysis for us to be sure. To connect the client with the server, we have been considering:

    - a SOAP web service
    - a RESTful service
    - building a custom TCP/IP-based API (text or XML); least preferred, but we use this approach with other applications we have

    As best as I understand them, the pros and cons of the different web service flavors are:

    SOAP. Pro: totally automated in .NET (and Java); modifying the server-side interface requires no code changes in the communication layer, just refreshing the Web Service reference to regenerate the classes. Con: more overhead in the communication layer, sending more text, etc. Also, we're not using a J2EE container, so maybe it doesn't work so well with J2SE.

    REST. Pro: lighter weight, less data, and good .NET and Java support. (I don't have any real experience with this, so I don't know what other benefits it has.) Con: the client will not automatically become aware of any new operations or properties that are added (?), so the communication layer needs to be updated by a developer whenever the server interface changes. Con (for both approaches): the server cannot really push updates to the client at regular intervals (?). (However, we won't mind if the client polls the server to get updates.)

    QUESTION: What are your opinions on the above options, or suggestions for other ways to connect the two parts? (Ideally, we don't want to put much work into the communication layer, because it's not the significant part of the application, so the more off-the-shelf and automated the better.)
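    To illustrate the REST-plus-polling option, here is a minimal sketch of what the client side of such a communication layer could look like in C#. It is only a sketch under stated assumptions: the endpoint URL and resource path are hypothetical, response parsing is omitted, and .NET 4.5's HttpClient is assumed.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        // Minimal polling client for a hypothetical REST resource exposed
        // by the Java SE server. The URL and interval are assumptions.
        public class PositionsPoller
        {
            private readonly HttpClient client = new HttpClient();
            private const string PositionsUrl = "http://trading-server:8080/api/positions";

            public async Task PollAsync(TimeSpan interval, Action<string> onUpdate)
            {
                while (true)
                {
                    // GET the current positions (JSON or XML); deserialization
                    // into view objects is left out of the sketch.
                    string body = await client.GetStringAsync(PositionsUrl);
                    onUpdate(body);
                    await Task.Delay(interval);
                }
            }
        }

    One note on the design: because the client only polls and never needs a server push, plain HTTP is enough here, which is part of what keeps the REST option lightweight.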

    Read the article

  • Process Rules!

    - by Ajay Khanna
    One of the key components of a process is the "business rule". Business rules take many forms inside your process definition and are, in a way, a manifestation of your company's business policy. Business rules inside the process are used for policy enforcement, governance, decision management, operational efficiency, etc. The following are some basic types of rules that can be part of your process.

    1. Process conditions: These are defined as the process gateways that determine which path a process will take, depending on the process parameters. For example: if discount > 10%, go to the approval path; if discount < 10%, auto-approve the order. (A minimal sketch of such a gateway appears at the end of this post.)

    2. Data rules: These business rules are defined as facts in a decision table or knowledge base. The process captures all required parameters and submits them to a RETE-based rules engine. The rules engine processes the data and returns the result. For example, rules determining your insurance eligibility.

    3. Event rules: Here the system monitors the various events and event patterns that emerge inside the process or external to it. You can define actions or alerts to be triggered when a certain pattern of events emerges over a specified time period. Rules of this type need Complex Event Processing and are used in applications like credit card fraud detection or utility demand response.

    4. User interface rules: To add dynamic behavior to the UI, keep users from making mistakes, and enforce policy, another available mechanism is UI rules. They are evaluated as the end user fills out web forms, and they may include enabling and disabling parts of the UI per business policy. An example: if the age of a user is less than 13 years, disable the credit card field and enable the "parental approval required" checkbox.

    Your process may include many of these rule types. Oracle OpenWorld provides a unique opportunity to listen to Oracle Business Process Management experts and customers. We will discuss business rules during various sessions at Oracle OpenWorld. Two of the sessions specifically focused on business rules are listed below:

    - Accelerating an Implementation of Complex Worldwide Business Approval Rules. Wednesday, Oct 3, 10:15 AM, Moscone South, 305
    - Oracle Business Rules Use Cases: Design and Testing. Wednesday, Oct 3, 3:30 PM, Marriott Marquis, Golden Gate C3

    The Oracle Business Process Management track covers a variety of topics, with speakers covering technology, methodology, and best practices. You can see the list of Business Process Management sessions here. Come back to this blog for more coverage from Oracle OpenWorld!
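    As a tiny illustration of rule type 1, a process condition is ultimately a branch on process parameters. The sketch below is hypothetical C# rather than any particular BPM product's rule language; the 10% threshold comes from the example above.

        public enum OrderPath { AutoApprove, ApprovalPath }

        public static class DiscountGateway
        {
            // Process condition from the example: a discount above 10% goes
            // to the approval path, otherwise the order is auto-approved.
            // The threshold is a parameter so the business policy can change
            // without touching the process logic.
            public static OrderPath Route(decimal discountPercent, decimal threshold = 10m)
            {
                return discountPercent > threshold
                    ? OrderPath.ApprovalPath
                    : OrderPath.AutoApprove;
            }
        }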

    Read the article

  • What are the tradeoffs for using 'partial view models'?

    - by Kenny Evitt
    I've become aware of an itch due to some non-DRY code pertaining to view model classes in an (ASP.NET) MVC web application, and I'm thinking of scratching that itch by organizing code in various 'partial view model' classes. By partial-view-model, I'm referring to a class that relates to a view model class in an analogous way to how partial views relate to views, i.e. a way to encapsulate common info and behavior. To strengthen the 'analogy', and to aid in visually organizing the code in my IDE, I was thinking of naming the partial-view-model classes with a _ prefix, e.g. _ParentItemViewModel.

    As a slightly more concrete example of why I'm thinking along these lines, imagine that I have a domain-model-entity class ParentItem, and the user-friendly descriptive text that identifies these items to users is complex enough that I'd like to encapsulate that code in a method in a _ParentItemViewModel class. I can then include an object or a collection of objects of that class in the view model classes for all the views that need to reference a parent item; e.g. ChildItemViewModel can have a ParentItem property of the _ParentItemViewModel type, so that in my ChildItemView view I can use @Model.ParentItem.UserFriendlyDescription as desired, for breadcrumbs, links, etc.

    As a second example, imagine that I have entity classes SomeKindOfBatch, SomeKindOfBatchDetail, and SomeKindOfBatchDetailEvent, and a view model class and at least one view for each of those entities. Also, the example application covers a lot more than just some-kind-of-batches, so it wouldn't really be useful or sensible to include info about a specific some-kind-of-batch in all of the project's view model classes. But, like the above example, I have some code, say for generating a string that identifies a some-kind-of-batch in a user-friendly way, and I'd like to be able to use that in several views, say as breadcrumb text or text for a link.

    As a third example, I'll describe another pattern I'm currently using. I have a Contact entity class, but it's a fat class, with dozens of properties and at least a dozen references to other fat classes. However, a lot of view model classes need properties for referencing a specific contact, and most of those need other properties for collections of contacts, e.g. possible contacts to be referenced for some kind of relationship. Most of these view model classes only need a small fraction of all the available contact info, basically just an ID and some kind of user-friendly description (i.e. a friendly name). It seems pretty useful to have a 'partial view model' class for contacts that all of these other view model classes can use.

    Maybe I'm just misunderstanding 'view model class'; I understand a view model class as always corresponding to a view. But maybe I'm assuming too much.
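    To make the idea concrete, here is a minimal sketch of the pattern being described, in C#. The class and property names follow the post's examples; the formatting inside UserFriendlyDescription is a placeholder assumption.

        // A 'partial view model': not tied to one view, but reusable from any
        // view model whose view needs to render a reference to a parent item.
        public class _ParentItemViewModel
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public string Code { get; set; }

            // The user-friendly description lives in one place instead of
            // being repeated across every full view model that needs it.
            public string UserFriendlyDescription
            {
                get { return string.Format("{0} ({1})", Name, Code); }
            }
        }

        // A full view model composing the partial one; the ChildItem view can
        // then use @Model.ParentItem.UserFriendlyDescription for breadcrumbs
        // and links.
        public class ChildItemViewModel
        {
            public int Id { get; set; }
            public string Title { get; set; }
            public _ParentItemViewModel ParentItem { get; set; }
        }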

    Read the article

  • Retrieving Custom Attributes Using Reflection

    - by Scott Dorman
    The .NET Framework allows you to easily add metadata to your classes by using attributes. These attributes can be ones that the .NET Framework already provides (of which there are over 300), or you can create your own. Using reflection, the ways to retrieve the custom attributes of a type are:

        // System.Reflection.MemberInfo
        public abstract object[] GetCustomAttributes(bool inherit);
        public abstract object[] GetCustomAttributes(Type attributeType, bool inherit);
        public abstract bool IsDefined(Type attributeType, bool inherit);

        // System.Attribute
        public static Attribute[] GetCustomAttributes(MemberInfo member, bool inherit);
        public static bool IsDefined(MemberInfo element, Type attributeType, bool inherit);

    If you take the following simple class hierarchy:

        public abstract class BaseClass
        {
            private bool result;

            [DefaultValue(false)]
            public virtual bool SimpleProperty
            {
                get { return this.result; }
                set { this.result = value; }
            }
        }

        public class DerivedClass : BaseClass
        {
            public override bool SimpleProperty
            {
                get { return true; }
                set { base.SimpleProperty = value; }
            }
        }

    Given a PropertyInfo object (which is derived from MemberInfo and represents a property in reflection), you might expect these methods to return the same result. Unfortunately, that isn't the case. The MemberInfo methods strictly reflect the metadata definitions, ignoring the inherit parameter and not searching the inheritance chain when used with a PropertyInfo, EventInfo, or ParameterInfo object. They also return all custom attribute instances, including those that don't inherit from System.Attribute. The Attribute methods are closer to the implied behavior of the language (and probably closer to what you would naturally expect). They do respect the inherit parameter for PropertyInfo, EventInfo, and ParameterInfo objects and search the implied inheritance chain defined by the associated methods (in this case, the property accessors). These methods also only return custom attributes that inherit from System.Attribute. This is a fairly subtle difference that can produce very unexpected results if you aren't careful. For example, to retrieve the custom attributes defined on SimpleProperty, you could use code similar to this:

        PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");
        var attributeList1 = info.GetCustomAttributes(typeof(DefaultValueAttribute), true);
        var attributeList2 = Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

    The attributeList1 array will be empty, while the attributeList2 array will contain the attribute instance, as expected.
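    A quick way to see the difference is a small console check. This is only a sketch, assuming the BaseClass/DerivedClass hierarchy above; DefaultValueAttribute lives in System.ComponentModel.

        using System;
        using System.ComponentModel;
        using System.Reflection;

        public static class AttributeCheck
        {
            public static void Main()
            {
                PropertyInfo info = typeof(DerivedClass).GetProperty("SimpleProperty");

                // MemberInfo path: ignores 'inherit' for properties, so the
                // attribute on the base property is not found.
                object[] viaMemberInfo =
                    info.GetCustomAttributes(typeof(DefaultValueAttribute), true);

                // Attribute path: walks the chain via the property accessors.
                Attribute[] viaAttribute =
                    Attribute.GetCustomAttributes(info, typeof(DefaultValueAttribute), true);

                Console.WriteLine("MemberInfo: {0}, Attribute: {1}",
                    viaMemberInfo.Length, viaAttribute.Length);
                // Expected, per the article: MemberInfo: 0, Attribute: 1
            }
        }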

    Read the article

  • How can I solve this SAT edge case?

    - by ssb
    I have an SAT implementation that basically works, and the fact that it works is what's giving me a few headaches. Basically, there are some situations where using the SAT doesn't quite give me my intended result. One of these involves movement across multiple collision objects. To put it another way: if I have several collision boxes lined up next to each other, such as to create something like a wall or a floor, movement along that surface while constantly applying force into that surface sometimes causes hangups, i.e. the player stops moving. In the illustration accompanying the post, the two boxes on the bottom represent a floor, and the box on top, in the middle, represents what my player is doing.

    There are several squares lined up as world obstacles to create some kind of wall, and if I move to the left across this surface while holding the down key, the issue arises. It only happens at the exact dividing point between two blocks, and it only happens when moving to the left.

    At any rate, I think I know why it happens, but I don't know how to solve it. When I update my player movement I consider which directions are pressed, naturally, so if down is pressed I add the speed to the Y component, and so on. But due to the way my SAT is implemented, when the penetration into the shape is the same from both sides, it just goes with the smallest axis that it finds first, and it checks collisions against objects in the order they were created, because it iterates over the list of collidable objects in a foreach loop. This all adds up to the following effect: if I'm moving to the left over a series of boxes while holding down, it resolves me back to the right out of the first box and then up out of the box to the right of it, and this continues as long as the penetration is the same. The odd part is that this doesn't happen every time, which I attribute to some oddity in multiplying velocity by the game time, causing minor discrepancies between the penetration depths. Ultimately, it will keep resolving me to the right and up, but this is technically expected behavior.

    All the solutions I can think of only address the symptoms of this problem and not the actual cause, such as not using many blocks to create walls or shapes, which is an option I'd like to keep open. I could also change which axis my algorithm defaults to, but that would just cause problems when going up or down along the walls. What can I do to fix this?
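    For reference, the tie-breaking behavior described above comes from minimum-axis selection code shaped roughly like this C# sketch (the names are hypothetical; Vector2 is XNA's). With a strict less-than comparison, an exact tie in penetration depth is won by whichever box and axis were tested first, which is exactly what goes wrong at the seam between two floor tiles.

        using Microsoft.Xna.Framework;

        public struct Overlap
        {
            public Vector2 Axis;  // unit axis to push the player out along
            public float Depth;   // penetration depth along that axis
        }

        public static class SatResolver
        {
            // Picks the minimum translation vector across all overlapped
            // boxes. On an exact tie, the earlier box in the foreach order
            // wins, so at a tile seam the push can flip from "up" to
            // "sideways" and stall the player.
            public static Overlap SmallestOverlap(Overlap[] overlaps)
            {
                Overlap best = overlaps[0];
                for (int i = 1; i < overlaps.Length; i++)
                {
                    if (overlaps[i].Depth < best.Depth)
                    {
                        best = overlaps[i];
                    }
                }
                return best;
            }
        }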

    Read the article

  • Strange rendering in XNA/Monogame

    - by Gerhman
    I am trying to render G-code generated for a 3D printer as the printed product, by reading the file as line segments and then drawing cylinders with the diameter of the filament around each segment. I think I have managed to do this part right, because the vertices I am sending to the graphics device appear to have been processed correctly. My problem, I think, lies somewhere in the rendering.

    What basically happens is that when I start rotating my model around the X or Y axis, it renders perfectly for half of the rotation, but for the other half it has this weird effect where you start seeing through the outer filament into some of the shapes inside. This effect is strongest with X rotations, though. [The post includes two screenshots: one of the part of the rotation that looks correct, and one that looks horrible.]

    I am still quite new to XNA/MonoGame and 3D programming as a whole. I have no idea what could possibly be causing this, and even less of an idea of what this type of behavior is called. I am guessing it has something to do with rendering, so I have added the code for that part:

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.Black);

            basicEffect.World = world;
            basicEffect.View = view;
            basicEffect.Projection = projection;
            basicEffect.VertexColorEnabled = true;
            basicEffect.EnableDefaultLighting();

            GraphicsDevice.SetVertexBuffer(vertexBuffer);

            RasterizerState rasterizerState = new RasterizerState();
            rasterizerState.CullMode = CullMode.CullClockwiseFace;
            rasterizerState.ScissorTestEnable = true;
            GraphicsDevice.RasterizerState = rasterizerState;

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Apply();
                GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, vertexBuffer.VertexCount);
            }

            base.Draw(gameTime);
        }

    I don't know if it could be because I am shading something that does not really have a texture. I am using this custom vertex declaration I found in a tutorial, which allows me to store a vertex with a position, color, and normal:

        public struct VertexPositionColorNormal
        {
            public Vector3 Position;
            public Color Color;
            public Vector3 Normal;

            public readonly static VertexDeclaration VertexDeclaration = new VertexDeclaration
            (
                new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
                new VertexElement(sizeof(float) * 3, VertexElementFormat.Color, VertexElementUsage.Color, 0),
                new VertexElement(sizeof(float) * 3 + 4, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0)
            );
        }

    If any of you have ever seen this type of thing, please help. Also, if you think the problem might lie somewhere else in my code, just request the part you would like to see in the comments section.

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password, a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented?

    The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out of registering.

    A simpler approach to reducing email entry errors is to have the user enter their email address twice, just like most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page, I usually avoid typing the email address twice; instead I enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes: any typo entered into the first textbox is copied into the second.

    Using a bit of JavaScript, it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more!

    Read the article

  • OWB 11gR2 – OMB and File Editing

    - by David Allan
    Here we will see how we can use the IDE for editing OMB scripts. The 11gR2 release is based on the common Oracle platform IDE also used by JDeveloper. It comes with a bunch of standard behavior for editing and rendering code. One of the lesser known things is that if you drop a text file into OWB, you can edit it. So you can drop your Tcl scripts right into OWB and edit them in place, and you don't need another IDE like Eclipse just for this task.

    Cool, so you have the file there. There may be no line numbers; you can toggle line numbers on by right-clicking in the gutter. If we edit the file within the OWB IDE, the save is a little different from normal. OWB doesn't normally manipulate files, so things like Ctrl-S save the OWB objects; but if you edit a file, closing it will ask whether you want to save it. Check it out.

    Now we enter the realm of 'he who dares'... Note that the IDE doesn't know about Tcl files out of the box, so you see above that there is no syntax highlighting. The code is identified by the extension: .java is Java, .html is HTML, etc. With OWB, the OMB scripts are Tcl, and we usually have the .tcl extension on these files. One trick to get syntax highlighting is to simply rename the file to have a .java suffix; then all of a sudden we get highlighting. See the illustration here, where side by side we see the file with a .java extension and with a .tcl extension. Pretending to be .java is not ideal, but it gets us something more useful than Notepad.

    We can then change the syntax highlighting so that we get Eclipse-like highlighting within the IDE, from the Tools > Preferences option. You then get the Eclipse-like rendering, albeit using a little tweak on the file names. This might be useful if you are doing any kind of heavy-duty OMB script development and just want a single IDE. The OMBPlus panel is then at hand for executing and testing it out.

    Read the article

  • Help needed with pyparsing [closed]

    - by Zearin
    Overview: So, I'm in the middle of refactoring a project, and I'm separating out a bunch of parsing code. The code I'm concerned with uses pyparsing. I have a very poor understanding of pyparsing, even after spending a lot of time reading through the official documentation. I'm having trouble because (1) pyparsing takes a (deliberately) unorthodox approach to parsing, and (2) I'm working on code I didn't write, with poor comments and a non-elementary set of existing grammars. (I can't get in touch with the original author, either.)

    Failing test: I'm using PyVows to test my code. One of my tests is as follows (I think this is clear even if you're unfamiliar with PyVows; let me know if it isn't):

        def test_multiline_command_ends(self, topic):
            output = parsed_input('multiline command ends\n\n', topic)
            expect(output).to_equal(
                r'''['multiline', 'command ends', '\n', '\n']
        - args: command ends
        - multiline_command: multiline
        - statement: ['multiline', 'command ends', '\n', '\n']
         - args: command ends
         - multiline_command: multiline
         - terminator: ['\n', '\n']
        - terminator: ['\n', '\n']''')

    But when I run the test, I get the following in the terminal:

        Failed Test Results
        Expected topic("['multiline', 'command ends']\n- args: command ends\n- command: multiline\n- statement: ['multiline', 'command ends']\n - args: command ends\n - command: multiline")
        to equal "['multiline', 'command ends', '\\n', '\\n']\n- args: command ends\n- multiline_command: multiline\n- statement: ['multiline', 'command ends', '\\n', '\\n']\n - args: command ends\n - multiline_command: multiline\n - terminator: ['\\n', '\\n']\n- terminator: ['\\n', '\\n']"

    Note: since the output is to a terminal, the expected output (the second one) has extra backslashes. This is normal. The test ran without issue before this piece of refactoring began.

    Expected behavior: The first line of output should match the second, but it doesn't. Specifically, it's not including the two newline characters in that first list object. So I'm getting this:

        "['multiline', 'command ends']\n- args: command ends\n- command: multiline\n- statement: ['multiline', 'command ends']\n - args: command ends\n - command: multiline"

    when I should be getting this:

        "['multiline', 'command ends', '\\n', '\\n']\n- args: command ends\n- multiline_command: multiline\n- statement: ['multiline', 'command ends', '\\n', '\\n']\n - args: command ends\n - multiline_command: multiline\n - terminator: ['\\n', '\\n']\n- terminator: ['\\n', '\\n']"

    Earlier in the code, there is also this statement:

        pyparsing.ParserElement.setDefaultWhitespaceChars(' \t')

    ...which I think should prevent exactly this kind of error. But I'm not sure. Even if the problem can't be identified with certainty, simply narrowing down where the problem is would be a HUGE help. Please let me know how I might take a step or two towards fixing this.

    Read the article

  • Creating a Strong Bridge to the Post PC World

    - by Webgui
    Moving from location to location requires strong roads. When crossing a barrier, though, like a body of water or a valley, we are required to build a strong bridge to get us from point A to point B in a way that is fast, safe, and easy. Yet we are not talking here about driving a car or riding a bus. As the computing world moves to the post-PC era, modernizing and migrating legacy applications to harness the power of HTML5 web, cloud, and mobile is one of the most difficult challenges enterprises have faced. Constant technological change has weakened the business value of legacy systems, which have been developed over the years through huge investments.

    There are, of course, several risks in this move. Do you choose to simply rewrite the code of legacy apps and transform them to HTML5 one by one? This is quite expensive (according to research firm Gartner, the cost is $6 - $26 per line of code). And the pace of the rewriting process is very slow, around 170 lines per day for each developer, which slows down business productivity in a world in which no organization can afford to fall behind. Other questions include whether the new cloud-based apps will have the same functionality as the trusted applications that have worked for you for years. How will the user experience be affected? And of course, what about data security? So we are faced with the challenge of building a sturdy bridge to stabilize our move, in order to allow us to confidently and easily move our legacy applications into the post-PC era.

    We at Gizmox are excited to release the first downloadable Community Technology Preview (CTP) of our Instant CloudMove Transposition Studio. Developers: to download the tool and try it out for yourself, please visit http://www.visualwebgui.com/download.aspx. The CTP is the first and only tool-based solution allowing any Microsoft Visual Studio developer to extend VB6 and .NET enterprise client/server applications into HTML5 web, cloud, and mobile applications, including the ability to upgrade their code and UI while doing so. It is the only solution to fully replicate enterprise desktop application behavior in the post-PC era. With Instant CloudMove, the transposed application is available on any mobile or tablet device and browser, and across any client operating system. Moreover, the extended application logic and data remain on the server behind the firewall, and therefore the application's front end is secured by design.

    We would love for you to try out the tool for yourselves and let us know what you think. How are you finding the move?

    Read the article

  • Detect, Analyze, Act – Fast!

    - by Ajay Khanna
    In a fast-changing business environment, it becomes crucial to identify business opportunities and business issues as soon as possible. If they are identified at the right time, business managers can address issues before they escalate into serious problems, and can take advantage of new opportunities before the competition does. Moreover, they have to do this efficiently, at the right cost. Success depends on how responsive an organization is to emerging events and a changing environment. These events can be customer issues, competitive moves, changes in regulations, or changes in company policies. To be responsive in such situations, organizations first need to identify and track these situations. They can do that via business activity monitoring (BAM) and complex event processing (CEP). A unified monitoring dashboard helps put together a comprehensive picture of the situation at hand and provides deep insight for taking the proper actions. With CEP, businesses can connect all the relevant events, detect event patterns, and take immediate action using a Business Process Management system.

    So, to be responsive we need:

    Real-time visibility with business activity monitoring. You can use BAM technology to monitor progress, track performance, meet service-level agreements (SLAs), manage exceptions, and issue alerts to an employee or application when a process is not functioning properly, all in real time. A unified monitoring dashboard helps you maintain a complete picture of each situation so you can take action effectively. BAM works hand in hand with BPM software to discover the significant activities that drive business success.

    Real-time sense and respond. An event-driven BPM solution enables each step in a business process to be informed not only by the previous step, but also by any other step, data, and pattern of behavior deemed relevant to that step. This gives the company the ability to "sense and respond". You can describe interesting event patterns and event correlations and monitor the business in real time. Whenever a predefined pattern emerges, you can take actions such as raising alerts or notifications, or kicking off another business process. The synergy made possible by integrating activity monitoring, event processing, and BPM makes it possible for managers to keep a finger on the pulse of their business. Business managers can now respond to customers faster, respond to competition faster, reduce fraud, and do more cross-selling. Read more about being responsive in the whitepaper "The Instantly Responsive Enterprise: Integrating BPM and Complex Event Processing" in the BPM Resource Kit.

    Read the article

  • How do I make the launcher progress bar work with my application?

    - by Kevin Gurney
    Background research: I am attempting to update the progress bar within the Unity launcher for a simple Python/GTK application created using Quickly, called test; however, following the instructions in this video, I have not been able to successfully update the progress bar in the Unity launcher. In the Unity Integration video, Quickly was not used, so the application was structured slightly differently, and the code used in the video does not seem to function properly, without modification, in a default Quickly ubuntu-application template application.

    Screenshots: [The post includes a screenshot of the application icon as it is currently displayed in the Unity launcher, and a screenshot of the desired progress bar functionality, overlaid on the mail icon (wiki.ubuntu.com).]

    Code:

        class TestWindow(Window):
            __gtype_name__ = "TestWindow"

            def finish_initializing(self, builder):  # pylint: disable=E1002
                """Set up the main window"""
                super(TestWindow, self).finish_initializing(builder)

                self.AboutDialog = AboutTestDialog
                self.PreferencesDialog = PreferencesTestDialog

                # Code for other initialization actions should be added here.
                self.add_launcher_integration()

            def add_launcher_integration(self):
                self.launcher = Unity.LauncherEntry.get_for_desktop_id("test.destkop")
                self.launcher.set_property("progress", 0.75)
                self.launcher.set_property("progress_visible", True)

    Expected behavior: I would expect the above code to show a progress bar that is 75% full overlaid on the icon for the test application in the Unity launcher, but the application only runs and displays no progress bar when the command quickly run is executed.

    Problem investigation: I believe the problem is that I am not properly getting a reference to the application's main window; however, I am not sure how to fix this. I also believe that the line self.launcher = Unity.LauncherEntry.get_for_desktop_id("test.destkop") may be another source of complication, because Quickly creates .desktop.in files rather than ordinary .desktop files, so I am not sure if that might be causing issues as well. Perhaps another source of the issue is that I do not entirely understand the difference between .desktop and .desktop.in files. Does it make sense to make a copy of the test.desktop.in file, rename it test.desktop, and place it in /usr/share/applications in order for get_for_desktop_id("test.desktop") to reference the correct .desktop file?

    Related research links: Although I am still not clear on the difference between .desktop and .desktop.in files, I have done some research on .desktop files and have come across these links:

    - Desktop Entry Files (library.gnome.org)
    - Desktop File Installation Directory (askubuntu.com)
    - Unity Launcher API (wiki.ubuntu.com)
    - Desktop Files: putting your application in the desktop menus (developer.gnome.org)
    - Desktop Menu Specification (standards.freedesktop.org)

    Read the article

  • Five Best Practices for Going Mobile

    - by kellsey.ruppel
    76% of IT decision makers indicate mobile trends will have a high to extremely high impact on their organization. Has your organization gone mobile? Looking for some ideas on how to get started? John Brunswick shares his Best Practices for Going Mobile.

    Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables social networking, decision support, purchasing, content consumption, and location-based searching, extending experiences beyond what is available in traditional desktop computing. Organizations rushing to ensure their brand's mobile availability may have taken a tactical approach to implementation, but strategically approaching mobile can enable greater returns on a similar investment and subsequent mobile projects. Here are some strategic considerations for delivering products, services, and information to mobile constituents.

    - Who, Why, and What? Ask yourself these key questions: who are you attempting to engage through the channel, and why are they engaging you through this channel? What experience will satisfy their needs? What outcome will support your core business? Will you be informing and/or transacting with this person?

    - Mobile Behavior. Mobile users generally engage for a very specific purpose. Ensure that access to information, services, and products is streamlined. Arriving on a mobile site through search only to be asked to search again frustrates users.

    - Mobile Is Broad. After establishing the audience and goal, review technology requirements to support them. Do you need a mobile website, a native mobile application, or both? Do you need to support multiple devices? Know the difference between native mobile and mobile Web.

    - Social Strategy. Users are more likely to trust reviews from peers than marketing information from a vendor. If you are selling products or services, be sure to make social integration part of your strategy.

    - Content Management. Consider a shared content platform strategy for Web and mobile projects. Fresh, consistent content is important for high-quality experiences.

    Read more from John Brunswick. We'll also be talking mobile strategies and how you can transform your portal experience and optimize online engagement, making your portals more interactive and more engaging across multiple channels, in a webcast tomorrow. We hope you'll join us!

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchover and failover of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format.

    Oracle Solaris Cluster 4.2 provides a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups. The command usage is as shown below:

        # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

    number_of_days: the number of days to be considered for scanning the historical logs.

    The command returns a list of events in the following format, with each field separated by a semicolon [;]:

        resource_group_name;source_nodes;target_nodes;time_stamp

    source_nodes: node names from which the resource group failed over or was switched manually.
    target_nodes: node names to which the resource group failed over or was switched manually.

    There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag.

    In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group), and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

        # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
        geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
        geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
        oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
        asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
        asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
        oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
        geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
        geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
        asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
        asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
        oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
        oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
        oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output, some of the entries might have an empty string in the source_nodes field. Such entries correspond to events in which the resource group is switched online manually or during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya

    Read the article

  • Less Can Be More In E-Commerce

    - by Michael Hylton
    Today's consumers are inundated with product choices and vendors. Visit your favorite electronics retailer and see the vast assortment of flat panel televisions, or the variety of detergents at the supermarket. All of this can be daunting for the average consumer looking for the products and services that interest them. In a study titled "Choice is Demotivating: Can One Desire Too Much of a Good Thing?", the author, Sheena Iyengar, found that participants actually reported greater subsequent satisfaction with their selections, and wrote better essays, when their original set of options had been limited.

    The same can be said for e-commerce and your website. Being able to quickly convert shoppers into buyers with effective merchandising is what makes leading businesses successful. You want to engage each individual visitor with the most relevant content to drive higher conversions and order values while decreasing abandonment, but predicting what will resonate with each customer is difficult. In a world of choices, online merchandising tools can help personalize, streamline, and refine what your customers view when they browse your online catalog. The key to being effective is to align your products and content as closely as possible with the customer's needs.

    The goal on the home page is to promote your brand and draw visitors farther into the site. The home page is often the starting point for repeat customers as well as for new visitors hoping to address their current product needs. As the customer selects different filters and narrows the choices, valuable information is being provided to the retailer about the customer's current need, regardless of previous search behavior or what other customers with a similar demographic profile have purchased. Together with search pages, category browse pages are among the primary options available to customers as a means of finding products on your site. Once a customer reaches the product detail page, it is clear what that person desires, regardless of the segment the customer falls into. However, don't disregard campaign-based promotions completely: a campaign targeted at all customers but featuring rule-driven promotions tied to the product can be effective.

    Click here to learn more about merchandising techniques, so that what your customer sees is half full and not half empty.

    Read the article

  • Loading levels from .txt or .XML for XNA

    - by Dave Voyles
    I'm attempting to add multiple levels to my Pong game. I'd like to simply exchange a few elements with each level, nothing crazy: just the background texture, the color of the AI paddle (the one on the right side), and the music. It seems the best way to go about this is to use a StreamReader to read and write the files as .txt or XML. If there is a better or more efficient alternative, then I'm all for it. Looking over the XNA starter Platformer kit provided by MS, it seems they've done it in this manner as well. I'm perplexed by a few things, however, namely parts of the Level class that aren't commented:

        /// <summary>
        /// Iterates over every tile in the structure file and loads its
        /// appearance and behavior. This method also validates that the
        /// file is well-formed with a player start point, exit, etc.
        /// </summary>
        /// <param name="fileStream">
        /// A stream containing the tile data.
        /// </param>
        private void LoadTiles(Stream fileStream)
        {
            // Load the level and ensure all of the lines are the same length.
            int width;
            List<string> lines = new List<string>();
            using (StreamReader reader = new StreamReader(fileStream))
            {
                string line = reader.ReadLine();
                width = line.Length;
                while (line != null)
                {
                    lines.Add(line);
                    if (line.Length != width)
                        throw new Exception(String.Format("The length of line {0} is different from all preceeding lines.", lines.Count));
                    line = reader.ReadLine();
                }
            }

    What does width = line.Length mean exactly? I mean, I know how it reads the line, but what difference does it make if one line is longer than any of the others? Finally, their levels are simply text files that look like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    It can't be that easy... Can it?
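    On the XML side of the question, one lightweight pattern is a small serializable settings class per level, holding just the elements that change: background texture, AI paddle color, and music. This is only a sketch under assumed names (LevelSettings, "Level1.xml"); XmlSerializer is part of standard .NET.

        using System.IO;
        using System.Xml.Serialization;

        // Per-level settings of the kind described above. The field names
        // and the color representation are hypothetical choices.
        public class LevelSettings
        {
            public string BackgroundTexture;  // content asset name
            public string MusicTrack;         // content asset name
            public string AiPaddleColor;      // e.g. "255,0,0,255" (RGBA), parsed by the game

            public static LevelSettings Load(string path)
            {
                XmlSerializer serializer = new XmlSerializer(typeof(LevelSettings));
                using (FileStream stream = File.OpenRead(path))
                {
                    return (LevelSettings)serializer.Deserialize(stream);
                }
            }
        }

        // Usage sketch:
        // LevelSettings settings = LevelSettings.Load("Content/Levels/Level1.xml");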

    Read the article

  • Web Apps vs Web Services: 302s and 401s are not always good Friends

    - by Your DisplayName here!
    It is not very uncommon to have web sites that contain both web UX and web services. The UX part may use WS-Federation (or some other redirect-based mechanism). That means that whenever an authorization error occurs (a 401 status code), it is picked up by the corresponding redirect module and turned into a redirect (302) to the login page. All is good. But in services, when you emit a 401, you typically want that status code to travel back to the client agent so it can do error handling. These two approaches conflict.

    If you think (like me) that you should separate UX and services into separate apps, you don't need to read on. Just do it ;) If you need to mix both mechanisms in a single app, here's how I solved it for a project.

    I subclassed the redirect module, which in my case was the WIF WS-Federation HTTP module, and modified the OnAuthorizationFailed method. In there, I check for a special HttpContext item, and if it is present, I suppress the redirect. Otherwise everything works as normal:

        class ServiceAwareWSFederationAuthenticationModule : WSFederationAuthenticationModule
        {
            protected override void OnAuthorizationFailed(AuthorizationFailedEventArgs e)
            {
                base.OnAuthorizationFailed(e);

                var isService = HttpContext.Current.Items[AdvertiseWcfInHttpPipelineBehavior.DefaultLabel];
                if (isService != null)
                {
                    e.RedirectToIdentityProvider = false;
                }
            }
        }

    Now the question is, how do you smuggle that value into the HttpContext? If it is an MVC-based web service, that's easy, of course. In the case of WCF, one approach that worked for me was to set it in a service behavior (a dispatch message inspector, to be exact):

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            if (HttpContext.Current != null)
            {
                HttpContext.Current.Items[DefaultLabel] = true;
            }
        }

    HTH

    Read the article

  • Fraud and Anomaly Detection using Oracle Data Mining YouTube-like Video

    - by chberger
    I've created and recorded another YouTube-like presentation with "live" demos of the Oracle Advanced Analytics Option, this time focusing on fraud and anomaly detection using Oracle Data Mining. [Note: it is a large MP4 file that will open and play in place. The sound quality is weak, so you may need to turn up the volume.]

    Data is your most valuable asset. It represents the entire history of your organization and its interactions with your customers. Predictive analytics leverages data to discover patterns and relationships, and to help you make informed predictions. Oracle Data Mining (ODM) automatically discovers relationships hidden in data. Predictive models and insights discovered with ODM address business problems such as predicting customer behavior, detecting fraud, analyzing market baskets, profiling, and loyalty. Oracle Data Mining, part of the Oracle Advanced Analytics (OAA) Option to the Oracle Database EE, embeds 12 high-performance data mining algorithms in the SQL kernel of the Oracle Database. This eliminates data movement, delivers scalability, and maintains security.

    But how do you find these very important needles, or possibly fraudulent transactions, in huge haystacks of data? Oracle Data Mining's one-class Support Vector Machine algorithm is specifically designed to identify rare or anomalous records. The one-class SVM anomaly detection algorithm trains on records believed to be "normal", builds a descriptive and predictive model, and then uses that model to flag records that, on a multi-dimensional basis, appear not to fit in, i.e. to be different. Combined with clustering techniques to sort transactions into more homogeneous sub-populations for more focused anomaly detection analysis, and with Oracle Business Intelligence, enterprise applications, and/or real-time environments to "deploy" fraud detection, Oracle Data Mining delivers a powerful advanced analytical platform for solving important problems. With OAA/ODM you can find suspicious expense report submissions, flag non-compliant tax submissions, fight fraud in healthcare claims, and save huge amounts of money lost to fraudulent claims and abuse.

    This presentation and several brief demos will show Oracle Data Mining's fraud and anomaly detection capabilities.

    Read the article

  • Some More New ADF Features in JDeveloper 11.1.2

    - by Steven Davelaar
    The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces; they are small, but they might come in handy:

    - You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example: <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/>. See section 3.5.2 in the web user interface guide for more info.

    - A new ADF Faces client behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag, and this property has quite some issues, as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (will blog about that soon).

    - New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar, and largeIconSource specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know about but which was already available in JDeveloper 11.1.1.4.

    - The af:showDetail tag has a new property, handleDisclosure, which you can set to client for faster rendering.

    - In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed; the #{bindings.JobId.inputValue} expression will return the attribute value corresponding to the selected index in the choice list.

    Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit.

    Read the article

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit: ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit is a form submit that does not require a page refresh and only updates components in the view that are referenced from the command component's PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfContext.getCurrentInstance.addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of those fields, because the full form gets submitted.

    Autosubmit: An input component that has its autosubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form, but only the component that triggers the submit, plus any components that reference it in their PartialTriggers property. For example, consider a form that has three input fields, inpA, inpB, and inpC, with autosubmit=true set on inpA and required=true set on inpB and inpC.

    Use case 1: Running the view, entering data into inpA, and then tabbing out of the field will submit the content of inpA, but not of inpB and inpC. Furthermore, neither of the required-field settings on inpB and inpC causes an error.

    Use case 2: You change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields, and thus show an error for the missing required value on inpC.

    Internally, using autosubmit=true on an input component sets the event root to just this field, which is good to have in the case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on those components to point to the input field that has autosubmit=true defined.

    PartialSubmit vs. AutoSubmit: Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI component is refreshed in response. Client-side required field validation (as well as the server-side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Autosubmit is a functionality on input components and also performs a partial form submit. However, in addition, an event root is defined that narrows the scope of the submitted data and thus the components that are validated on the request.

    To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF

    Read the article

  • Programming Language, Turing Completeness and Turing Machine

    - by Amumu
    A programming language is said to be Turing complete if it can successfully simulate a universal TM. Let's take functional programming languages as an example. In functional programming, functions have the highest priority over anything: you can pass functions around like any primitives or objects. This is called first-class functions. In functional programming, a function does not produce side effects, i.e. it does not output strings onto the screen or change the state of variables outside of its scope. Each function has a copy of its own objects if the objects are passed from the outside, and the copied objects are returned once the function finishes its job. Each function written purely in functional style is completely independent of anything outside of it; thus, the complexity of the overall system is reduced. This is referred to as referential transparency. In functional programming, each function can keep the values of its local variables even after the function exits; this is done by the garbage collector. The value can be reused the next time the function is called again. This is called memoization. A function usually should solve only one thing: it should model only one algorithm to answer a problem. Do you think that a function in a functional language with the above properties simulates a Turing machine?
    - Functions (= algorithms = Turing machines) can be passed around as input and returned as output; a TM also accepts and simulates other TMs.
    - Memoization models the set of states of a Turing machine. The memoized variables can be used to determine the state of a TM (i.e. which lines to execute, what behavior it should take in a given state, ...). You can also use memoization to simulate your internal tape storage. In a language like C/C++, when a function exits, you lose all of its internal data (unless you store it elsewhere outside of its scope).
    - The set of symbols is the set of all strings in a programming language, which is the higher-level, human-readable version of machine code (opcode).
    - The start state is the beginning of the function. However, with memoization, the start state can be determined by memoization or, if you want, a switch/if-else statement in an imperative programming language. But then, you can't
    - The final accepting state is when the function returns a value; it rejects if an exception happens. Thus, the function (= algorithm = TM) is decidable; otherwise, it's undecidable. I'm not sure about this. What do you think?
    Is my thinking correct on all of this? The reason I bring up functions in functional programming is that I think they are closer to the idea of a TM. What experience with other programming languages do you have which makes you feel the ideas of TMs and of Computer Science in general? Can you specify how you think?
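    To make the memoization idea concrete, here is a small Java sketch (my own illustration, not from the question) of wrapping a function so that its computed values survive between calls:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Function;

        public class Memo {
            // Wraps f with a cache so previously computed results are
            // kept between calls -- the "memoization" described above.
            static <A, R> Function<A, R> memoize(Function<A, R> f) {
                Map<A, R> cache = new HashMap<>();
                return a -> cache.computeIfAbsent(a, f);
            }

            public static void main(String[] args) {
                Function<Integer, Integer> square = memoize(x -> {
                    System.out.println("computing " + x);
                    return x * x;
                });
                System.out.println(square.apply(4)); // prints "computing 4", then 16
                System.out.println(square.apply(4)); // served from the cache: 16
            }
        }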

    Read the article

  • Configuring keyboard input to eliminate unused diacritics

    - by David Cesarino
    I'd like to change the way diacritics work under Xubuntu.
    My problem
    My native language is pt-BR and my notebook has an American keyboard. Thus, I use ' and " followed by keys like u and c to achieve things like ú, ç and ü. It all works well. However, in the case of apostrophes and quotes, that creates a problem when I use ' followed by:
    1. letters that won't accept the acute accent ( ´ , ACUTE ACCENT -- 0x00B4) at all, like t; and
    2. letters that won't accept the acute in pt-BR, like r.
    In 1, ' with t does absolutely nothing. In 2, ' with r creates ? (LATIN SMALL LETTER R WITH ACUTE -- 0x0155), which is used, afaik, for some Eastern European languages like Slovak. It isn't used in Portuguese, just like ?, ?, ?, ? and the other acute-accented consonants (Portuguese does not take diacritics on consonants).
    Question
    That said, is there a way to better support the Portuguese-Brazilian language using an American keyboard? It is very common here --- I actually prefer the American keyboard to our own, known as “ABNT”.
    Desired solution
    I'd like to deactivate unused diacritics, so case 2 would behave just like case 1. Additionally, if possible, I'd like case 1 to behave like it does under Windows. As an example, typing ' followed by t should write 't (the apostrophe followed by t) instead of doing nothing. Regarding 2, in my humble opinion, doing nothing is counterproductive. I realize the behavior is reasonable according to logic ("there isn't a t-acute, so please tell the computer to typeset an apostrophe --- ', SPACE --- instead of an acute"). But from a human, practical point of view, I think the Windows behavior makes more sense (to me, at least).
    Additional comments
    I believe this also applies to Spanish, French, Italian and other Western European Latin languages. On the console (Ctrl+Alt+Fn), case 1 is not a problem: I don't need to press space, as apostrophes are automatically added. However, there, I'm unable to access the cedilla (ç). Two completely different behaviors. If it's just a matter of customizing text config files (possibly creating a custom layout or whatever), I'd be glad to share my efforts; I just need guidance on the "how-to" part. Somehow Google only points me to the "enough" people (those who cope with the situation and think that it works "well enough"). And since I have definitely migrated to Linux/Xubuntu after years, I'd like to set it up just as I like it (and I'm sure others would as well). For example, if there is some kind of scripting or definition to tell the computer to do what I described, so be it.
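    In case it helps frame answers: the kind of per-key override being asked about can be sketched with an X11 compose file. This is a minimal, untested sketch assuming an X11 session with a US International (dead keys) layout; GTK applications may additionally need GTK_IM_MODULE=xim to honor it.

        # ~/.XCompose
        include "%L"              # keep the locale's default compose rules

        # Case 1, Windows-style: ' + t types an apostrophe plus the letter
        <dead_acute> <t> : "'t"
        # Case 2: r-acute is unused in pt-BR, so map it the same way
        <dead_acute> <r> : "'r"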

    Read the article
