Search Results

Search found 2176 results on 88 pages for 'wide'.


  • What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12

    - by Tony Berk
There is plenty to SEE for CRM during OpenWorld in San Francisco, September 30 - October 4! As I mentioned in my earlier post about some of the keynote sessions, Is There a Cloud Over OpenWorld?, I'm going to try to highlight some key sessions to help you find the best sessions for you. If you're interested in finding out where Oracle CRM products are headed, find your "roadmap" session. Here are some of the sessions in the CRM Track that you might want to consider attending for products you currently own or might consider for the future. I think you'll agree, there is quite a bit of investment going on across Oracle CRM. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity.

    General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM - 12:45 PM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations.

    Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15 PM - 4:15 PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates, and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle's commitment to CRM innovation is driving a wide range of future enhancements.

    Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM - 11:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers.

    Siebel CRM Overview, Strategy, and Roadmap (CON9700) - Oct 1, 12:15 PM - 1:15 PM. The world's most complete CRM solution, Oracle's Siebel CRM helps organizations differentiate their businesses. Come to this session to learn about the Siebel product roadmap and how Oracle is committed to accelerating the pace of innovation and value for its customers on this platform. Additionally, the session covers how Siebel customers can leverage many Oracle assets such as Oracle WebCenter Sites; InQuira, RightNow, and ATG/Endeca applications; and Oracle Policy Automation in conjunction with their current Siebel investments.

    Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM - 12:15 PM. Social is changing the customer experience! Come find out how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle's social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes.

    Oracle CRM On Demand Strategy and Roadmap (CON9727) - Oct 1, 10:45 AM - 11:45 AM. Oracle CRM On Demand is a powerful cloud-based customer relationship management solution. Come to this session to learn directly from Oracle experts about future product plans and hear how Oracle is committed to accelerating the pace of innovation and value to its customers.

    Knowledge Management Roadmap and Strategy (CON9776) - Oct 1, 12:15 PM - 1:15 PM. Learn how to harness the knowledge created as a natural byproduct of day-to-day interactions to lower costs and improve customer experience by delivering the right answer at the right time across channels. This session includes an overview of Oracle's product roadmap and vision for knowledge management for both the Oracle RightNow and Oracle Knowledge (formerly InQuira) product families.

    Oracle Policy Automation Roadmap: Supercharging the Customer Experience (CON9655) - Oct 1, 12:15 PM - 1:15 PM. Oracle Policy Automation delivers rapid customer value by streamlining the capture, analysis, and deployment of policies across every facet of the customer experience. This session discusses recent Oracle Policy Automation enhancements for policy analytics; the latest Oracle Policy Automation Connector for Siebel; and planned new capabilities, including availability with the Oracle RightNow product line.

    There is much more, so stay tuned for more highlights, or check out the Content Catalog and search for your areas of interest. Which session are you most interested in? Make your suggestions! But no voting for Pearl Jam or Kings of Leon. Those are after hours!

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called hopping windows and compare them with snapshot windows. We will compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, count windows, another time.

    Hopping (and its syntactic-sugar sister, Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length and the offset from one window to the next. They are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows. Two aspects in this diagram are important: since this window is overlapping, an event can fall into more than one window; and if an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That's the default and currently the only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.)

    The set-based operation will be applied to each of these sets, yielding a result. This result is: a single scalar value in the case of built-in or user-defined aggregates; a subset of the input payloads in the case of the TopK operator; or arbitrary events when using a user-defined operator. The timestamps of the result are almost always the ones of the windows. Only the user-defined operator can create new events with timestamps. (However, even these event lifetimes are subject to the window's output policy, which is currently always to clip to the window end.) Let's assume we were calculating the average over some payload field:

    var result = from window in source.HoppingWindow(
            TimeSpan.FromHours(1),
            TimeSpan.FromMinutes(15),
            HoppingWindowOutputPolicy.ClipToWindowEnd)
        select new { avg = window.Avg(e => e.Value) };

    Now each window is reflected by one result event. As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes – except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time). The "forced" output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply – let me explain: imagine you have a lot of events that you group by and then aggregate within each group – a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can provide a remedy. Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur.
In other words, if you mark all event start and end times on your timeline, then you are looking at all snapshot window boundaries. If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed it in a recent tweet: "I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need".

    Let's look at a simple example: I want to compute the average of some value in my events over the last minute. I don't want this output to be produced at fixed intervals, but as soon as it changes (that's the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events' lifetimes into the future. Applying the snapshot window to these events, it will appear to be "looking back into the past". If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

    var result = from window in source
            .AlterEventDuration(e => TimeSpan.FromMinutes(1))
            .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
        select new { avg = window.Avg(e => e.Value) };

    With more complex modifications of the event lifetimes you can achieve many more query patterns. For instance, "running totals": keep the event start times, but snap their end times to some fixed time, like the end of the day. Each snapshot then "sees" all events that have happened in the respective time period so far.
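    To make that last pattern concrete, here is a minimal sketch along the lines just described (the fixed end-of-day timestamp and the payload field Value are illustrative assumptions, not part of the original post):

    // Keep each event's start time, but snap its end time to a fixed point
    // (here: an assumed end of day), then aggregate over snapshot windows
    // so that every event change produces an updated total.
    var endOfDay = new DateTimeOffset(new DateTime(2012, 10, 27), TimeSpan.Zero);
    var runningTotal = from window in source
            .AlterEventLifetime(e => e.StartTime, e => endOfDay - e.StartTime)
            .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
        select new { total = window.Sum(e => e.Value) };

    Regards,
    The StreamInsight Team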

    Read the article

  • yield – Just yet another sexy c# keyword?

    - by George Mamaladze
yield (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it's not as widely used as it could (or should) be. I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let's look at it from the clean code point of view, and see if it really helps us to keep our code understandable, reusable and testable.

    Let's say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. The most elegant way would seem to be a recursive method performing a classic depth traversal and returning the sum:

    private int CalculateTreeSum(Node top)
    {
        int sumOfChildNodes = 0;
        foreach (Node childNode in top.ChildNodes)
        {
            sumOfChildNodes += CalculateTreeSum(childNode);
        }
        return top.Value + sumOfChildNodes;
    }

    "Do One Thing"

    Nevertheless, it violates one of the most important rules: "Do One Thing". Our method CalculateTreeSum does two things at the same time: it travels inside the tree and it performs some computation – in this case it calculates a sum. Doing two things in one method is definitely a bad thing, for several reasons:

    · Understandability: readability / refactoring.
    · Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
    · Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration.

    I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation & iteration? The only chance is to construct a test tree and assert the result of the method call – in our case the sum – against our expectation. And if the test fails, you do not know whether the computation algorithm was wrong or the iteration. To top it all off: according to Murphy's Law, the iteration will have a bug as well as the calculation, and both bugs in combination will cause the sum to be accidentally exactly the one you expect, and the test will PASS. :)

    Ok, let's use yield!

    That's why it is generally a very good idea not to mix but to isolate "things". Ok, let's use yield!

    private int CalculateTreeSumClean(Node top)
    {
        IEnumerable<Node> treeNodes = GetTreeNodes(top);
        return CalculateSum(treeNodes);
    }

    private int CalculateSum(IEnumerable<Node> nodes)
    {
        int sumOfNodes = 0;
        foreach (Node node in nodes)
        {
            sumOfNodes += node.Value;
        }
        return sumOfNodes;
    }

    private IEnumerable<Node> GetTreeNodes(Node top)
    {
        yield return top;
        foreach (Node childNode in top.ChildNodes)
        {
            foreach (Node currentNode in GetTreeNodes(childNode))
            {
                yield return currentNode;
            }
        }
    }

    The two methods do not know anything about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, switching from depth traversal to breadth traversal, or use a stack or the visitor pattern instead of recursion – see the sketch below. This will not influence your calculation logic.
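    For instance, a breadth-first variant of the iterator could look like this (a sketch of mine, not from the original post; it assumes the same Node type and works with the unchanged CalculateSum):

    private IEnumerable<Node> GetTreeNodesBreadthFirst(Node top)
    {
        // Classic queue-based breadth traversal; the calculation code
        // consuming this sequence does not change at all.
        var queue = new Queue<Node>();
        queue.Enqueue(top);
        while (queue.Count > 0)
        {
            Node currentNode = queue.Dequeue();
            yield return currentNode;
            foreach (Node childNode in currentNode.ChildNodes)
            {
                queue.Enqueue(childNode);
            }
        }
    }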
And vice versa, you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of being run over some tree or graph.

    How about not using yield?

    Now let's ask the question: what if we did not have the yield operator? A brief look at the generated code gives us an answer – the compiler generates a roughly 150-line class to implement the iteration logic:

    [CompilerGenerated]
    private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
    {
        ...
        150 lines of generated code
        ...
    }

    Often we compromise code readability, cleanness, testability, etc. to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature – we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

    Note: The recursive depth traversal algorithm used above is possibly the most compact, but not the best from a performance and memory utilization point of view. It was chosen to emphasize the other, primary aspects of this post.

    Read the article

  • Seven Random Thoughts on JavaOne

    - by HecklerMark
As most people reading this blog may know, last week was JavaOne. There are a lot of summary/recap articles popping up now, and while I didn't want to just "add to the pile", I did want to share a few observations. Disclaimer: I am an Oracle employee, but most of these observations are either externally verifiable or based upon a collection of opinions from Oracle and non-Oracle attendees alike. Anyway, here are a few take-aways:

    The Java ecosystem is alive and well, with a breadth and depth that is impossible to adequately describe in a short post...or a long post, for that matter. If there is any one area within the Java language or JVM that you would like to - or need to - know more about, it's well-represented at J1.

    While there are several IDEs that are used to great effect by the developer community, NetBeans is on a roll. I lost count of how many sessions mentioned or used NetBeans, but it was by far the dominant IDE in use at J1. As a recent re-convert to NetBeans, I wasn't surprised others liked it so well, only how many.

    OpenJDK, OpenJFX, etc. Many developers were understandably concerned with the change of sponsorship/leadership when Java creator and longtime steward Sun Microsystems was acquired by Oracle. The read I got from attendees regarding Oracle's stewardship was almost universally positive, and the push for "openness" is deep and wide within the current Java environs. Few would probably have imagined it to be this good, this soon. Someone observed that "Larry (Ellison) is competitive, and he wants to be the best...so if he wants to have a community, it will be the best community on the planet." Like any company, Oracle is bound to make missteps, but leadership seems to be striking an excellent balance between embracing open efforts and innovating in competitive paid offerings.

    JavaFX (2.x) isn't perfect or comprehensive, but a great many people (myself included) see great potential, are developing for it, and are really excited about where it is and where it may be headed. This is another part of the Java ecosystem that has impressive depth for being so new (JavaFX 1.x aside). If you haven't kicked the tires yet, give it a try! You'll be surprised at how capable and versatile it is, and you'll probably catch yourself smiling while coding again. :-)

    JavaEE is everywhere. Not exactly a newsflash, but there is a lot of buzz around EE still/again/anew. Sessions ranged from updated component specs/technologies to Websockets/HTML5, from frameworks to profiles and application servers. Programming "server-side" Java isn't confined to the server (as you no doubt realize), and if you still consider JavaEE a cumbersome beast, you clearly haven't been using the last couple of versions. Download GlassFish or the WebLogic Zip distro (or another JavaEE 6 implementation) and treat yourself.

    JavaOne is not inexpensive, but to paraphrase an old saying, "If you think that's expensive, you should try ignorance." :-) I suppose it's possible to attend J1 and learn nothing, but you'd have to really work at it! Attending even a single session is bound to expand your horizons and make you approach your code, your problem domain, differently...even if it's a session about something you already know quite well. The various presenters offer vastly different perspectives and challenge you to re-think your own approach(es).

    And finally, if you think the scheduled sessions are great - and make no mistake, most are clearly outstanding - wait until you see what you pick up from what I like to call the "hallway sessions". Between the presentations, people freely mingle in the hallways, go to lunch and dinner together, and talk. And talk. And talk. Ideas flow freely, sparking other ideas and the "crowdsourcing" of knowledge in a way that is hard to imagine outside of a conference of this magnitude. Consider this the "GO" part of a "BOGO" (Buy One, Get One) offer: you buy the ticket to the "structured" part of JavaOne and get the hallway sessions at no additional charge. They're really that good.

    If you weren't able to make it to JavaOne this year, you can still watch/listen to the sessions online by visiting the JavaOne course catalog and clicking the media link(s) in the right column - another demonstration of Oracle's commitment to the Java community. But make plans to be there next year to get the full benefit! You'll be glad you did.

    All the best,
    Mark

    P.S. - I didn't mention several other exciting developments in areas like the embedded space and the "internet of things" (M2M), robotics, optimization, and the cloud (among others), but I think you get the idea. JavaOne == brainExpansion; Hope to see you there next year!

    Read the article

  • Loading XML file containing leading zeros with SSIS preserving the zeros

    - by Compudicted
Visiting the MSDN SQL Server Integration Services Forum, I would often see people pop up asking this question: "Why am I not able to load an element from an XML file that contains zeros in such a way that the leading/trailing zeros remain intact?" I started to suspect that such a trivial and often-required operation is perhaps being misunderstood by the developer community. I would also like to add that the whole state of affairs surrounding XML today is probably also going to be increasingly affected by a movement of people who dislike XML in general; many aspects of it, such as XSD and XSLT, invoke a negative reaction at best. Nevertheless, XML is in wide use today and its importance as a bridge between diverse systems is ever increasing. Therefore, I decided to write up an example of loading an arbitrary XML file that contains leading zeros in one of its elements into a table in a SQL Server database using SSIS, preserving the leading zeros and keeping simplicity in mind.

    To start off, bring up your BIDS (running as admin) and add a new Data Flow Task (DFT). This DFT will serve as the container for our XML processing elements (besides, the XML Source is not available anywhere else other than from within the DFT). Double-click your DFT and drag and drop the XML Source component from the Tool Box's Data Flow Sources. Now, let the fun begin! Inspired by the upcoming Christmas, I created a simple XML file with one set of data that contains an imaginary SSN number for Rudolph with several leading zeros, like 0000003. This file can be viewed here.

    To configure the XML Source it is of course quite intuitive to point it to our XML file; next, the XML Source needs either an embedded schema (XSD) or it can generate one for us. In lack of one, I opted to auto-generate it, and I ended up with an XSD that looked like:

    <?xml version="1.0"?>
    <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="XMasEvent">
        <xs:complexType>
          <xs:sequence>
            <xs:element minOccurs="0" name="CaseInfo">
              <xs:complexType>
                <xs:sequence>
                  <xs:element minOccurs="0" name="ID" type="xs:unsignedByte" />
                  <xs:element minOccurs="0" name="CreatedDate" type="xs:unsignedInt" />
                  <xs:element minOccurs="0" name="LastName" type="xs:string" />
                  <xs:element minOccurs="0" name="FirstName" type="xs:string" />
                  <xs:element minOccurs="0" name="SSN" type="xs:unsignedByte" /> <!-- Becomes string -->
                  <xs:element minOccurs="0" name="DOB" type="xs:unsignedInt" />
                  <xs:element minOccurs="0" name="Event" type="xs:string" />
                  <xs:element minOccurs="0" name="ClosedDate" />
                </xs:sequence>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>

    As an aside on the XML file: if your XML file does not contain the outer node (<XMasEvent>), then you may end up in a situation where you see just one field in the output. Now please note that the SSN element's data type was auto-chosen as unsignedByte (and for a reason): all the figures in the element are digits. This is good, but not exactly what we need, because if we attempt to load the data with this XSD we are going to either get errors at the destination or, most typically, lose the leading zeros. So the next intuitive choice is to change the data type to string.
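    The changed element, and (since the original sample file is only available via the link above) a small hypothetical instance document consistent with this schema, would look like:

    <xs:element minOccurs="0" name="SSN" type="xs:string" />

    <?xml version="1.0"?>
    <XMasEvent>
      <CaseInfo>
        <ID>1</ID>
        <CreatedDate>20121224</CreatedDate>
        <LastName>Reindeer</LastName>
        <FirstName>Rudolph</FirstName>
        <SSN>0000003</SSN>
        <DOB>19391201</DOB>
        <Event>Christmas Delivery</Event>
        <ClosedDate />
      </CaseInfo>
    </XMasEvent>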
Besides, if an SSIS package was already created based on this XSD and the data type change was made thereafter, one should reset the metadata by right-clicking the XML Source and choosing "Advanced Editor", in which there is a refresh button at the bottom left that will do the trick.

    So far so good; we are ready to load our XML file. Well, actually yes and no: in my experience some data conversion is typically required, so depending on your data destination you may need to tweak the targeted data types. Let's add a Data Conversion Task to our DFT. (The package layout, the conversion setup, and the resulting metadata are shown as screenshots in the original post.) To make the story short I will only cover the SSN field: in my case the target SQL table has it as nchar(10), and we chose string in our XSD (yes, this is a big difference); under such circumstances SSIS will complain. So we will go and change the data type of SSN, making it a Unicode string (DT_WSTR – a "wide" string, per se).

    We are almost there; now all we need is to configure the destination. For simplicity I chose the SQL Server Destination. The mapping is a breeze: F5, and I am able to insert my data into SQL Server! Checking the zeros – they are all intact!

    Read the article

  • Computer Networks UNISA - Chap 8 &ndash; Wireless Networking

    - by MarkPearl
After reading this section you should be able to:

    Explain how nodes exchange wireless signals
    Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection
    Understand WLAN architecture
    Specify the characteristics of popular WLAN transmission methods including 802.11 a/b/g/n
    Install and configure wireless access points and their clients
    Describe wireless MAN and WAN technologies, including 802.16 and satellite communications

    The Wireless Spectrum

    All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 kHz and 300 GHz.

    Antennas

    Each type of wireless service requires an antenna specifically designed for that service. The service's specifications determine the antenna's power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range.

    Signal Propagation

    LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object or be absorbed by the object, or it may be subject to reflection, diffraction or scattering. Reflection – the wave encounters an object and bounces off it. Diffraction – the signal splits into secondary waves when it encounters an obstruction. Scattering – the diffusion, or reflection in multiple different directions, of a signal.

    Signal Degradation

    Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference). Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal.

    Frequency Ranges

    Older wireless devices used the 2.4 GHz band to send and receive signals. This band has 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels.

    Narrowband, Broadband, and Spread Spectrum Signals

    Narrowband – a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies. Broadband – uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies. The use of multiple frequencies to transmit a signal is known as spread-spectrum technology; in other words, a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum). Another type is known as DSSS (direct sequence spread spectrum).

    Fixed vs. Mobile

    Each type of wireless communication falls into one of two categories. Fixed – the locations of the transmitter and receiver do not move (this saves energy, because a weaker signal strength is possible with directional antennas). Mobile – the location can change.

    WLAN (Wireless LAN) Architecture

    There are two main types of arrangements. Ad hoc – data is sent directly between devices – good for small local networks. Infrastructure mode – a wireless access point is placed centrally, and all devices connect to it.

    802.11 WLANs

    The most popular wireless standards used on contemporary LANs are those developed by IEEE's 802.11 committee. Over the years several distinct standards related to wireless networking have been released. The four best-known standards, also referred to as Wi-Fi, are 802.11b, 802.11a, 802.11g and 802.11n. These four standards share many characteristics, e.g. all four use half-duplex signalling and follow the same access method.

    Access Method

    802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK it assumes the transmission was successful; if it does not receive an ACK it assumes the transmission failed and sends it again. (A rough sketch in code follows at the end of these notes.)

    Association

    Two types of scanning: Active – the station transmits a special frame, known as a probe, on all available channels within its frequency range. When an access point finds the probe frame, it issues a probe response. Passive – the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point – the beacon frame contains the information necessary to connect to the point. Re-association occurs when a mobile user moves out of one access point's range and into the range of another.

    Frames

    Read pages 378 – 381 about frames and specific 802.11 protocols.

    Bluetooth Networks

    Ericsson originally invented the Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices. It has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc.

    Wireless WANs and Internet Access

    Refer to pages 396 – 402 of the textbook for details.
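    To make the CSMA/CA flow above concrete, here is a rough sketch in C#-flavoured pseudocode (my own illustration, not from the textbook; Frame and the helper methods DetectActivity, WaitBriefly, SendFrame and WaitForAck are hypothetical):

    // Simplified CSMA/CA send loop: listen before transmitting,
    // and retransmit until an ACK arrives or we give up.
    bool TrySend(Frame frame, int maxAttempts)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            // Carrier sense: while activity is detected, back off and re-check.
            while (DetectActivity())
            {
                WaitBriefly();
            }
            WaitBriefly();              // brief wait even on a clear channel
            SendFrame(frame);
            if (WaitForAck())           // destination verified the frame and ACKed
            {
                return true;            // transmission assumed successful
            }
            // No ACK received: assume the transmission failed and send again.
        }
        return false;
    }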

    Read the article

  • On Reflector Pricing

    - by Nick Harrison
I have heard a lot of outrage over Red Gate's decision to charge for Reflector. In the interest of full disclosure, I am a fan of Red Gate. I have worked with them on several usability tests. They also sponsor Simple Talk, where I publish articles. They are a good company. I am also a BIG fan of Reflector. I have used it since Lutz originally released it. I have written my own add-ins. I have written code to host Reflector and use its object model in my own code. Reflector is a beautiful tool. The care that Lutz took to incorporate extensibility is amazing. I have never had difficulty convincing my fellow developers that it is a wonderful tool. Almost always, once anyone sees it in action, it becomes their favorite tool. This widespread adoption and usability has made it an icon and pivotal pillar in the DotNet community. Even folks with the attitude that if it did not come out of Redmond then it must not be any good still love it.

    It is ironic to hear everyone clamoring for it to be released as open source. Reflector was never open source; it was free, but you were never able to peruse the source code and contribute your own changes. You could not even use Reflector to view the source code. From the very beginning, it was never anyone's intention for just anyone to examine the source code and make their own contributions aside from the add-in model. Lutz chose to hand over the reins to Red Gate because he believed that they would be able to build on his original vision and keep the product viable and effective. He did not choose to make it open source, hoping that the community would be up to the challenge. The simplicity and elegance may well have been lost with the "design by committee" nature of open source.

    Despite being a wonderful and beloved tool, Reflector cannot be an easy tool to maintain. Maybe because it is so wonderful and beloved, it is even more difficult to maintain. At any rate, we have high expectations. Reflector must continue to be able to reasonably disassemble every language construct that the framework and core languages dream up. We want it to be fast, and we also want it to continue to be simple to use. No small order.

    Red Gate tried to keep the core product free. Sadly there was not enough interest in the Pro version to subsidize the rest of the expenses. $35 is a reasonable cost, more than reasonable. I have read the blog posts and forum posts complaining about the time associated with getting the expense approved. I have heard people complain about the cost being unreasonable if you are a developer from certain countries. Let's do the math. How much of a productivity boost is Reflector? How many hours do you think it saves you in a typical project? The next question is a little easier if you are a contractor or a consultant, but what is your hourly rate? If you are not a contractor, you can probably figure out an hourly rate. How long does it take to get a return on your investment? At a rate of, say, $50 an hour, the $35 license pays for itself the first time Reflector saves you three quarters of an hour. The value-added proposition is not a difficult one to make.

    I have read people clamoring that Red Gate sucks and is evil. They complain about broken promises and conflicts of interest. Relax! Red Gate is not evil. The world is not coming to an end. The sun will come up tomorrow. I am sure that Red Gate will come up with options for volume licensing or site licensing for companies that want to get a licensed copy for their entire team. Don't panic, and I am sure that many great improvements are on the horizon.
Switching the UI to WPF and including a tabbed interface opens up lots of possibilities.

    Read the article

  • Is Data Science “Science”?

    - by BuckWoody
I hold the term “science” in very high esteem. I grew up on the Space Coast in Florida, and eventually worked at the Kennedy Space Center, surrounded by very intelligent people who worked in various scientific fields. Recently a new term has entered the computing dialog – “Data Scientist”. Since it’s not a standard term, it has a lot of definitions, and in fact has been disputed as a correct term. After all, the reasoning goes, if there’s no such thing as “Data Science” then how can there be a Data Scientist? This argument has been made before, albeit with a different term – “Computer Science”. In Peter Denning’s excellent article “Is Computer Science Science?” (April 2005/Vol. 48, No. 4, Communications of the ACM) there are many points that separate “science” from “engineering” and even “art”. I won’t repeat the content of that article here (I recommend you read it on your own) but will leverage the points he makes there.

    Definition of Science

    To answer the question “is data science ‘science’?” we need to start with a definition of terms. Various references put the definition into the same basic areas:

    Study of the physical world
    Systematic and/or disciplined study of a subject area

    ...and then they include the things studied, the bodies of knowledge and so on. The word itself comes from Latin, and means merely “to know” or “to study to know”. Greek divides knowledge further into “truth” (episteme) and practical use or effects (tekhne). Normally computing falls into the second realm.

    Definition of Data Science

    And now a more controversial definition: Data Science. This term is so new, and perhaps so niche, that the major dictionaries haven’t yet picked it up (my OED reference is older – can’t afford to pop for the online registration at present). Researching the term's general use, I created an amalgam of the definitions this way: “Studying and applying mathematical and other techniques to derive information from complex data sets.”

    Using this definition, data science certainly seems to be science – it's learning about and studying some object or area using systematic methods. But implicit within the definition is the word “application”, which makes the process more akin to engineering or even technology than science. In fact, I find the use of these techniques – and of data itself – to be part of science, not science itself. I leave out the concept of studying data patterns or algorithms as part of this discipline. That is actually a domain I see within research, mathematics or computer science. That of course is a type of science, but it does not seek practical applications.

    As part of the argument against calling it “Data Science”, some point to the scientific method of creating a hypothesis, testing with controls, testing results against the hypothesis, and documenting for repeatability. These are not steps that we often take in working with data. We normally start with a question, and fit patterns and algorithms to predict outcomes and find correlations. In this way Data Science is more akin to statistics (and in fact makes heavy use of them) in its process, rather than starting with an assumption and following on with it.

    So, is Data Science “Science”? I’m uncertain – and I’m uncertain it matters. Even if we are facing rampant “title inflation” these days (does anyone introduce themselves as a secretary or supervisor anymore?), I can tolerate the term, at least from the intent that we use data to study problems across a wide spectrum rather than restricting it to a single domain.
And I also understand those who have worked hard to achieve the very honorable title of “scientist” who have issues with those who borrow the term without asking. What do you think? Science, or not? Does it matter?

    Read the article

  • PanelGridLayout - A Layout Revolution

    - by Duncan Mills
With the most recent 11.1.2 patchset (11.1.2.3) there has been a lot of excitement around ADF Essentials (and rightly so); however, in all the fuss I didn't want an even more significant change to get missed – yes, you read that correctly, a more significant change! I'm talking about the new panelGridLayout component. I can confidently say that this is one of the most revolutionary components that we've introduced in 11g, even though it sounds rather boring. To be totally accurate, panelGridLayout was introduced in 11.1.2.2, but without any presence in the component palette or other design time support, so it was largely missed unless you read the release notes. However, in this latest patchset it's finally front and center. It's time to explore – we (really) need to talk about layout.

    Let's face it, with ADF Faces rich client, layout is a rather arcane pursuit. Once you are a layout master, all bow before you, but it's more of an art than a science, and it is often, in fact, way too difficult to achieve what should (apparently) be pretty simple. Here's a great example; it's a homework assignment I set for folks I'm teaching this stuff to. The requirements for this layout are:

    The header is 80px high, the footer is 30px. These are both fixed.
    The first section of the header containing the logo is 180px wide.
    The logo is centered within the top left hand corner of the header.
    The title text is start-aligned in the center zone of the header and will wrap if the browser window is narrowed. It should be aligned in the center of the vertical space.
    The about link is anchored to the right hand side of the browser with a 20px gap and again is center-aligned vertically. It will move as the browser window is reduced in width.
    The footer has a right-aligned copyright statement, again middle-aligned within a 30px high footer region and with a 20px buffer to the right hand edge. It will move as the browser window is reduced in width.
    All remaining space is given to a central zone, which, in this case, contains a panelSplitter. Expect that at some point in time you'll need a separate messages line in the center of the footer.

    In the homework assignment I also stipulate that no inlineStyles can be used to control alignment or margins, and no use of other taglibs (e.g. JSF HTML or Trinidad HTML). So, if we take this purist approach, that basic page layout (in my stock solution) requires 3 panelStretchLayouts, 5 panelGroupLayouts and 4 spacers – not including the spacer I use for the logo and the contents of the central zone splitter – phew! The point is that even a seemingly simple layout needs a bit of thinking about, particularly when you consider stretching and browser resize behavior. In fact, this little sample actually teaches you much of what you need to know to become vaguely competent at layouts in the framework. The underlying result of "the way things are" is that most of us reach for panelStretchLayout before even finishing the first sip of coffee as we embark on a new page design. In fact most pages you will see in any moderately complex ADF application will basically be nested panelStretchLayouts and panelGroupLayouts, sometimes many, many levels deep. So this is a problem. We've known this for some time, and now we have a good solution. (I should point out that the oft-used Trinidad trh tags are not a particularly good solution, as you're tying yourself to an HTML-table-based layout in that case, with a host of attendant issues in resize and bi-di behavior, but I digress.)
So, tadaaa, I give to you panelGridLayout. panelGridLayout, as the name suggests, takes a grid-like (dare I say slightly gridbag-like) approach to layout, dividing your layout into rows and columns, with margins, sizing, stretch behaviour, colspans and rowspans all rolled in, all without the use of inlineStyle. As such, it provides a much more powerful and concise way of defining a layout such as the one above that is actually simpler and much more logical to design. The basic building blocks are the panelGridLayout itself, gridRow and gridCell. Your content sits inside the cells inside the rows, all helpfully allowing stretching, valign and halign definitions without the need to nest further panelGroupLayouts. So much simpler!

    If I break down the homework example above, my nested conglomerate of 12 containers and spacers can be condensed into a single panelGridLayout with 3 rows and 5 cell definitions – 39 lines of source reduced to 24 in the case of the sample; see the sketch below. What's more, the actual runtime representation in the browser DOM is much, much simpler and cleaner, with basically one DIV per cell. (Note that just because the panelGridLayout semantics look like an HTML table does not mean that it's rendered that way!) Another hidden benefit is the runtime cost. Because we can use a single layout to achieve much more complex geometries, the client-side layout code inside the browser has to work a lot less. This will be a real benefit if your application needs to run on lower-powered clients such as netbooks or tablets.

    So it's time, if you're on 11.1.2.2 or above, to smile warmly at your panelStretchLayouts, wrap the blanket around their knees and wheel them off to the Sunset Retirement Home for a well deserved rest. There's a new kid on the block and it wants to be your friend.
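    To give a feel for the markup, here is a rough sketch of the kind of grid the homework layout boils down to – three rows, five cells. Treat it as illustrative rather than a tested drop-in solution; the specific attribute values are my assumptions:

    <af:panelGridLayout id="pgl1">
      <af:gridRow id="gr1" height="80px">
        <af:gridCell id="gc1" width="180px" halign="center" valign="middle">
          <!-- logo -->
        </af:gridCell>
        <af:gridCell id="gc2" width="100%" halign="start" valign="middle">
          <!-- title text, wraps on narrow browsers -->
        </af:gridCell>
        <af:gridCell id="gc3" width="auto" halign="end" valign="middle" marginEnd="20px">
          <!-- about link -->
        </af:gridCell>
      </af:gridRow>
      <af:gridRow id="gr2" height="100%">
        <af:gridCell id="gc4" columnSpan="3" width="100%" halign="stretch" valign="stretch">
          <!-- central zone: the panelSplitter -->
        </af:gridCell>
      </af:gridRow>
      <af:gridRow id="gr3" height="30px">
        <af:gridCell id="gc5" columnSpan="3" width="100%" halign="end" valign="middle" marginEnd="20px">
          <!-- copyright statement -->
        </af:gridCell>
      </af:gridRow>
    </af:panelGridLayout>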

    Read the article

  • Part 8: How to name EBS Customizations

    - by volker.eckardt(at)oracle.com
You might wonder why I am discussing this here. The reason is simple: nearly every project has slightly different naming conventions, which always makes life a bit complicated (for developers, but also for those responsible for setup, and for consultants). Although we always create a document to describe the technical object naming conventions, I have rarely seen a dedicated document with functional naming conventions. To be precise, from my standpoint there should always be one global naming definition for an implementation! Let me discuss some related questions:

    What is the best convention for the customization reference?
    How to name database objects (tables, packages etc.)?
    How to name functional objects like Value Sets, Concurrent Programs, etc.?
    How to separate customizations from standard objects best?

    What is the best convention for the customization reference?

    The customization reference is the key you use to reference your customization from other lists, from the project plan etc. Usually it is something like XXHU_CONV_22 (HU=customer abbreviation, CONV=Conversion object #22) or XXFA_DEPRN_RPT_02 (FA=Fixed Assets, DEPRN=Short object group, here depreciation, RPT=Report, 02=2nd report in this area). As this is just a reference (not an object name yet), I would prefer the second option:

    XX=Customization
    FA=Main EBS module linked (you may sometimes have more, but FA is the main one)
    DEPRN_RPT=Short name to specify the customization
    02=a unique number

    Important here is that the HU isn't used, because XX is enough to mark a custom object, and the 3rd and 4th characters can be used for the EBS module short name.

    How to name database objects (tables, packages etc.)?

    I have led different developer teams, and I know that one common way is to take the customization reference and add more characters behind it to classify the object (like _V for a view and _T1 for triggers etc.). The only concern I have with this approach is reusability. If you name your view XXFA_DEPRN_RPT_02_V, no one will by choice reuse this nice view, as it seems to be specific to this CEMLI. My suggestion is rather to name the view XXFA_DEPRN_PERIODS_V and herewith allow reusability for other CEMLIs (although the view will be deployed primarily with CEMLI package XXFA_DEPRN_RPT_02). (Check also one of the following blogs, where I will talk about deployment.)

    How to name Value Sets, Concurrent Programs, etc.?

    For Value Sets I would go with the same convention as for database objects, starting with XX<Module>… For Concurrent Programs the situation is a bit different. This "object" is seen and used by a lot of users, and they will search for it. In many projects it is common to start again with the company short name, or with XX. My proposal would differ. If you have created your own report and you name it "XX: Invoice Report", the user has to remember that this report does not start with "I", it starts with X. Would you like to type an X if you are looking for an invoice report? No, you wouldn't! So my advice would be to name it "Invoice Report (XXAP)". Still we know it is custom (because of the XXAP), but the end user will type the key "i" to get it (and will see similar reports also starting with "i"). I hope that the general scheme behind this has now become obvious.

    How to separate customizations from standard objects best?

    I would not have this section here if the naming did not play an important role.
In the file system structure we use our $XXyy_TOP; in JAVA_TOP it is perhaps also "xx" in front. But in the database itself? Although there are different concepts in place, many implementations still use the standard "apps" approach, meaning custom objects are stored in the apps schema (which should not cause any trouble). Final advice: review the naming conventions regularly, once a month. You may have to add more! And publish them!

    To summarize: technical and functional customized objects should always follow a naming convention. This naming convention should be project-wide, and only one place shall be used to maintain it (like a wiki). If the name is for the end user, rather put a customization identifier at the end; if it is an internal name, start with XX…

    Read the article

  • Where would a spam bot be located?

    - by Tim
I have a hosted website using a free hosting service. I received an email this afternoon saying that I have been suspended because my account has been compromised. Basically, someone is using my email account to mass-send spam. I've changed all the passwords and everything, but when my Gmail pulls the emails from the host it's still downloading loads of spam messages that show like this:

    This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: [email protected] SMTP error from remote mail server after end of data: host 198.91.80.251 [198.91.80.251]: 554 5.6.0 id=23634-03 - Rejected by MTA on relaying, from MTA([127.0.0.1]:10030): 554 Error: This email address has lost rights to send email from the system

    ------ This is a copy of the message, including all the headers. ------

    Return-path: <[email protected]>
    Received: from keenesystems.com ([66.135.33.211]:2370 helo=server211) by absolut.x10hosting.com with esmtpsa (TLSv1:RC4-MD5:128) (Exim 4.77) (envelope-from <[email protected]>) id 1TGwSW-002hHe-Lc for [email protected]; Wed, 26 Sep 2012 13:35:44 -0500
    MIME-Version: 1.0
    Date: Wed, 26 Sep 2012 13:35:43 -0500
    X-Priority: 3 (Normal)
    X-Mailer: Ximian Evolution 3.9.9 (8.5.3-6)
    Subject: New staff members wanted at Auction It Online
    From: [email protected]
    Reply-To: [email protected]
    To: "Nadia Monti" <[email protected]>
    Content-Type: text/plain
    Content-Transfer-Encoding: quoted-printable
    Message-ID: <OUTLOOK-IDM-9aed7054-6a3e-e1a4-1d5c-3e73377652a6@server211>

    Date : 26 September 2012
    Time : 13:35
    Sender : Dennise Halcomb Head Office Manager of RJ Auction Drop-Off Int.

    Nice to meet you Nadia Monti

    RJ ADO Ltd., a USA based company, offers a significant amount of goods worldwide for our customers on eBay and other auction venues. Our company's main target is to provide a suitable and cost-effective service for any person, company or fundraising company. The main purpose of the administrative assistant / sales support representative is to contribute to the sales force and add convenience to our cost-effective service dedicated to individuals, businesses, and organizations worldwide. Our HR department obtained your resume from one of the various job-oriented websites just to offer you this post.

    Working Schedule: This is a part time and home-based offer. You won't need to spend more than 3 hours each day. Your schedule will be flexible.

    Salary: At the end of the trial period (it lasts for 1 month) you will be paid 1,800 EUR. With the average volume of clients your overall income will raise up to 3,000 EUR per month. After the trial period is over your base salary will grow up to 2,500 EUR per month, so you will earn 5% commission from the transactions completed.

    Where?: Italy Wide. As it is a stay at home position all the communication will be carried out via email and via phone.

    Requirements: Access to the internet during the workday and basic microsoft office skills are needed. Basic knowledge of English is required (most of the contacts will be in English).

    Costs and Fees: There are NO costs at any time for our employees. All fees related to this position are covered by the RJ ADO Co. Ltd..

    Further Hiring Process: If you are interested in position we offer, please reply to this email and send us the copy of your resume for verification.

    After reviewing all of the received applications we will reply to successful applicants only. Then we'll offer to these successful applicants a position within our firm on a trial period basis for one month beginning from the date you sign a trial agreement. During this trial period you will receive full guidance and support. Employees on a one monthly trial period are evaluated at least one week prior to the end of their trial. During the trial, your supervisor can recommend termination. At the end of the trial period, the supervisor can offer continued employment, extension of trial period, or termination. After the trial period you may ask for more hours or continue full-time.

    If you are interested in this position, just reply to this email and send any questions you have and the copy of your resume for verification.

    Thank You,
    HR-Manager of RJ ADO Co. Ltd.

    Permission Settings
    You have been referred to RJ Auction Drop-Off If you feel you received this email in error or do not wish to receive future messages, please reply to this message with "remove" in the subject field. We will immediately update our database accordingly.
    We apologize for any inconvenience caused.

    RJ Auction Drop-Off Co. Ltd.

    I'm not aware of how this has happened. I'm not sure how anyone could have got hold of my password. It's a simple wordpress install; at some point recently my host went down and there was a fresh install of wordpress with default admin accounts, and I have a feeling it could be something to do with this. My question is: even though I've changed all my passwords, it's all still happening – is there anywhere in particular this script would be stored on my host? I really can't deal with having my hosting account suspended and my email account sending all this spam.

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
This is part two of the three-part series.

    Right-Time Integration

    Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can’t react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word “appropriate.” Not everything needs to be real-time – again, we’re talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you’re wasting effort. Unfortunately, there isn’t an enterprise-wide dial that you can crank-up for your estate. You’ll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry. First is batch. I know, the word “batch” just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you’ll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn’t make any sense, so we batch up a statistically significant amount and submit all at once. The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both Weblogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help the management and monitoring of fire-and-forget messaging in the enterprise. The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important.
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.
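    To make the fire-and-forget contract concrete, here is a minimal C# sketch of a store-and-forward sender. This is a generic illustration of the pattern only, not the actual RIB or Oracle queuing API; the transaction type and transport call are hypothetical stand-ins:

        using System.Collections.Concurrent;
        using System.Threading;

        // Hypothetical store-and-forward sender: transactions wait in a local
        // queue across network outages and are delivered in arrival order.
        public class StoreAndForwardSender
        {
            private readonly ConcurrentQueue<string> _outbox = new ConcurrentQueue<string>();

            // Called by the capturing code path (e.g. Point-of-Service); never blocks.
            public void Enqueue(string transaction)
            {
                _outbox.Enqueue(transaction);
            }

            // Runs on a single background thread; dequeues only after a confirmed
            // send, so an outage never loses or reorders a transaction.
            public void Pump(CancellationToken ct)
            {
                while (!ct.IsCancellationRequested)
                {
                    string tx;
                    if (_outbox.TryPeek(out tx) && TrySend(tx))
                    {
                        _outbox.TryDequeue(out tx); // remove only once delivered
                    }
                    else
                    {
                        Thread.Sleep(1000); // network down or queue empty: retry later
                    }
                }
            }

            private bool TrySend(string tx)
            {
                // Placeholder for the real transport, e.g. an HTTP POST to Central Office.
                return true;
            }
        }

    Real middleware adds durable storage and transactional hand-off, but the essential trade is the same: the sender gets speed and guaranteed, in-order delivery rather than an immediate answer.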

    Read the article

  • Orchard shapeshifting

    - by Bertrand Le Roy
    I've shown in a previous post how to make it easier to change the layout template for specific contents or areas. But what if you want to change another shape template for specific pages, for example the main Content shape on the home page? Here's how. When we changed the layout, we had the problem that layout is created very early, so early that in fact it can't know what content is going to be rendered. For that reason, we had to rely on a filter and on the routing information to determine what layout template alternates to add. This time around, we are dealing with a content shape, a shape that is directly related to a content item. That makes things a little easier as we have access to a lot more information. What I'm going to do here is handle an event that is triggered every time a shape named "Content" is about to be displayed:

        public class ContentShapeProvider : IShapeTableProvider {
            public void Discover(ShapeTableBuilder builder) {
                builder.Describe("Content")
                    .OnDisplaying(displaying => {
                        // do stuff to the shape
                    });
            }
        }

    This handler is implemented in a shape table provider, which is where you do all shape-related site-wide operations. The first thing we want to do in this event handler is check that we are on the front-end, displaying the "Detail" version, and not the "Summary" or the admin editor:

        if (displaying.ShapeMetadata.DisplayType == "Detail") {

    Now I want to provide the ability for the theme developer to provide an alternative template named "Content-HomePage.cshtml" for the home page. In order to determine if we are indeed on the home page I can look at the current site's home page property, which for the default home page provider contains the home page item's id at the end after a semicolon. Compare that with the content item id for the shape we are looking at and you can know if that's the homepage content item. Please note that if that content is also displayed on another page than the home page it will also get the alternate: we are altering at the shape level and not at the URL/routing level like we did with the layout.

        ContentItem contentItem = displaying.Shape.ContentItem;
        if (_workContextAccessor.GetContext().CurrentSite
            .HomePage.EndsWith(';' + contentItem.Id.ToString())) {

    _workContextAccessor is an injected instance of IWorkContextAccessor from which we can get the current site and its home page. Finally, once we've determined that we are in the specific conditions that we want to alter, we can add the alternate:

        displaying.ShapeMetadata.Alternates.Add("Content__HomePage");

    And that's it really.
    Here's the full code for the shape provider that I added to a custom theme (but it could really live in any module or theme):

        using Orchard;
        using Orchard.ContentManagement;
        using Orchard.DisplayManagement.Descriptors;

        namespace CustomLayoutMachine.ShapeProviders {
            public class ContentShapeProvider : IShapeTableProvider {
                private readonly IWorkContextAccessor _workContextAccessor;

                public ContentShapeProvider(IWorkContextAccessor workContextAccessor) {
                    _workContextAccessor = workContextAccessor;
                }

                public void Discover(ShapeTableBuilder builder) {
                    builder.Describe("Content")
                        .OnDisplaying(displaying => {
                            if (displaying.ShapeMetadata.DisplayType == "Detail") {
                                ContentItem contentItem = displaying.Shape.ContentItem;
                                if (_workContextAccessor.GetContext()
                                    .CurrentSite.HomePage.EndsWith(
                                        ';' + contentItem.Id.ToString())) {
                                    displaying.ShapeMetadata.Alternates.Add(
                                        "Content__HomePage");
                                }
                            }
                        });
                }
            }
        }

    The code for the custom theme, with layout and content alternates, can be downloaded from the following link: Orchard.Themes.CustomLayoutMachine.1.0.nupkg Note: this code is going to be used in the Contoso theme that should be available soon from the theme gallery.

    Read the article

  • UIView with IrrlichtScene - iOS

    - by user1459024
    I have a UIViewController in a storyboard and want to draw an Irrlicht scene in this view controller. My code:

    WWSViewController.h

        #import <UIKit/UIKit.h>

        @interface WWSViewController : UIViewController
        {
            IBOutlet UILabel *errorLabel;
        }
        @end

    WWSViewController.mm

        #import "WWSViewController.h"
        #include "../../ressources/irrlicht/include/irrlicht.h"

        using namespace irr;
        using namespace core;
        using namespace scene;
        using namespace video;
        using namespace io;
        using namespace gui;

        @interface WWSViewController ()
        @end

        @implementation WWSViewController

        - (void)awakeFromNib
        {
            errorLabel = [[UILabel alloc] init];
            errorLabel.text = @"";

            IrrlichtDevice *device = createDevice(video::EDT_OGLES1,
                dimension2d<u32>(640, 480), 16, false, false, false, 0);

            /* Set the caption of the window to some nice text. Note that there is
               an 'L' in front of the string. The Irrlicht Engine uses wide character
               strings when displaying text. */
            device->setWindowCaption(L"Hello World! - Irrlicht Engine Demo");

            /* Get a pointer to the VideoDriver, the SceneManager and the graphical
               user interface environment, so that we do not always have to write
               device->getVideoDriver(), device->getSceneManager(), or
               device->getGUIEnvironment(). */
            IVideoDriver* driver = device->getVideoDriver();
            ISceneManager* smgr = device->getSceneManager();
            IGUIEnvironment* guienv = device->getGUIEnvironment();

            /* We add a hello world label to the window, using the GUI environment.
               The text is placed at the position (10,10) as top left corner and
               (260,22) as lower right corner. */
            guienv->addStaticText(L"Hello World! This is the Irrlicht Software renderer!",
                rect<s32>(10,10,260,22), true);

            /* To show something interesting, we load a Quake 2 model and display it.
               We only have to get the Mesh from the Scene Manager with getMesh() and
               add a SceneNode to display the mesh with addAnimatedMeshSceneNode(). We
               check the return value of getMesh() to become aware of loading problems
               and other errors. Instead of writing the filename sydney.md2, it would
               also be possible to load a Maya object file (.obj), a complete Quake3
               map (.bsp) or any other supported file format. By the way, that cool
               Quake 2 model called sydney was modelled by Brian Collins. */
            IAnimatedMesh* mesh = smgr->getMesh("/Users/dbocksteger/Desktop/test/media/sydney.md2");
            if (!mesh)
            {
                device->drop();
                if (!errorLabel)
                {
                    errorLabel = [[UILabel alloc] init];
                }
                errorLabel.text = @"Konnte Mesh nicht laden.";
                return;
            }
            IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode( mesh );

            /* To let the mesh look a little bit nicer, we change its material. We
               disable lighting because we do not have a dynamic light in here, and
               the mesh would be totally black otherwise. Then we set the frame loop,
               such that the predefined STAND animation is used. And last, we apply a
               texture to the mesh. Without it the mesh would be drawn using only a
               color. */
            if (node)
            {
                node->setMaterialFlag(EMF_LIGHTING, false);
                node->setMD2Animation(scene::EMAT_STAND);
                node->setMaterialTexture( 0,
                    driver->getTexture("/Users/dbocksteger/Desktop/test/media/sydney.bmp") );
            }

            /* To look at the mesh, we place a camera into 3d space at the position
               (0, 30, -40). The camera looks from there to (0,5,0), which is
               approximately the place where our md2 model is. */
            smgr->addCameraSceneNode(0, vector3df(0,30,-40), vector3df(0,5,0));

            /* Ok, now we have set up the scene, lets draw everything: We run the
               device in a while() loop, until the device does not want to run any
               more. This would be when the user closes the window or presses ALT+F4
               (or whatever keycode closes a window). */
            while (device->run())
            {
                /* Anything can be drawn between a beginScene() and an endScene()
                   call. The beginScene() call clears the screen with a color and
                   the depth buffer, if desired. Then we let the Scene Manager and
                   the GUI Environment draw their content. With the endScene() call
                   everything is presented on the screen. */
                driver->beginScene(true, true, SColor(255,100,101,140));
                smgr->drawAll();
                guienv->drawAll();
                driver->endScene();
            }

            /* After we are done with the render loop, we have to delete the Irrlicht
               Device created before with createDevice(). In the Irrlicht Engine, you
               have to delete all objects you created with a method or function which
               starts with 'create'. The object is simply deleted by calling ->drop().
               See the documentation at irr::IReferenceCounted::drop() for more
               information. */
            device->drop();
        }

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            // Do any additional setup after loading the view, typically from a nib.
        }

        - (void)viewDidUnload
        {
            [super viewDidUnload];
            // Release any retained subviews of the main view.
        }

        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
        {
            return (interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown);
        }

        @end

    Sadly, the result is just a black view in the simulator. :( I hope someone here can explain how to draw the scene in a UIView. Furthermore, I'm getting this error:

        Could not load sprite bank because the file does not exist: #DefaultFont

    How can I fix it?

    Read the article

  • What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12

    - by Richard Lefebvre
    There is plenty to SEE for CRM during OpenWorld in San Francisco, September 30 - October 4! Here are some of the sessions in the CRM Track that you might want to consider attending for products you currently own or might consider for the future. I think you'll agree, there is quite a bit of investment going on across Oracle CRM. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM - 12:45 PM. Anthony Lye, Senior VP, Oracle leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations. Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15PM - 4:15PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle’s commitment to CRM innovation is driving a wide range of future enhancements. Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM - 11:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Siebel CRM Overview, Strategy, and Roadmap (CON9700) - Oct 1, 12:15PM - 1:15PM. The world’s most complete CRM solution, Oracle’s Siebel CRM helps organizations differentiate their businesses. Come to this session to learn about the Siebel product roadmap and how Oracle is committed to accelerating the pace of innovation and value for its customers on this platform. Additionally, the session covers how Siebel customers can leverage many Oracle assets such as Oracle WebCenter Sites; InQuira, RightNow, and ATG/Endeca applications, and Oracle Policy Automation in conjunction with their current Siebel investments. Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM - 12:15 PM. Social is changing the customer experience! Come find out how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. 
This session reviews Oracle’s social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes. Oracle CRM On Demand Strategy and Roadmap (CON9727) - Oct 1, 10:45AM - 11:45AM. Oracle CRM On Demand is a powerful cloud-based customer relationship management solution. Come to this session to learn directly from Oracle experts about future product plans and hear how Oracle is committed to accelerating the pace of innovation and value to its customers. Knowledge Management Roadmap and Strategy (CON9776) - Oct 1, 12:15PM - 1:15PM. Learn how to harness the knowledge created as a natural byproduct of day-to-day interactions to lower costs and improve customer experience by delivering the right answer at the right time across channels. This session includes an overview of Oracle’s product roadmap and vision for knowledge management for both the Oracle RightNow and Oracle Knowledge (formerly InQuira) product families. Oracle Policy Automation Roadmap: Supercharging the Customer Experience (CON9655) - Oct 1, 12:15PM - 1:15PM. Oracle Policy Automation delivers rapid customer value by streamlining the capture, analysis, and deployment of policies across every facet of the customer experience. This session discusses recent Oracle Policy Automation enhancements for policy analytics; the latest Oracle Policy Automation Connector for Siebel; and planned new capabilities, including availability with the Oracle RightNow product line. There is much more, so stay tuned for more highlights or check out the Content Catalog and search for your areas of interest. 

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable, and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It's everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: "How do they find anything?" and then the even more alarming, "So much for information security!" It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months earlier. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
    We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new-patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could bring significant improvements. As you evaluate how content flows through the processes in your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process, and making the same information accessible across departmental systems has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Tip #19 Module Private Visibility in OSGi

    - by ByronNevins
    I hate public and protected methods and classes. They require so much work to change in a huge project like GlassFish. Not to mention that you may well have to support those APIs forever. They are highly overused in GlassFish. In fact I'd bet that > 95% of classes are marked as public for no good reason. It's just (bad) habit is my guess. private and default visibility (I call it package-private) is easier to maintain. It is much, much easier to change such classes and methods around. If you have ANY public method or public class in GlassFish you'll need to grep through a tremendous amount of source code to find all callers. But even that won't be theoretically reliable. What if a caller is using reflection to access public methods? You may never find such usages. If you have package-private methods, it's easy. Simply grep through all the code in that one package. As long as that package compiles OK you're all set. There can't be any compile errors anywhere else. It's a waste of time to even look around or build the "outside" world. So you may be thinking: "Aha! I'll just make my module have one giant package with all the Java files. Then I can use the default visibility and maintenance will be much easier." But there's a problem. You are wasting a very nice feature of Java -- organizing code into separate packages. It also makes the code much more encapsulated. Unfortunately, to share code between the packages you have no choice but to declare public visibility. What happens in practice is that a module ends up having tons of public classes and methods that are used exclusively inside the module. Which finally brings me to the point of this blog: If Only There Was A Module-Private Visibility Available. Well, surprise! There is such a mechanism. If your project is running under OSGi, that is. Like GlassFish does! With this mechanism you can easily add another level of visibility by telling OSGi exactly which publics you want to be exposed outside of the module. You get the best of both worlds:

    Better encapsulation of your code so that maintenance is easier and productivity is increased.
    Usage of public visibility inside the module so that you can encapsulate intra-module better with packages.

    How I do this in GlassFish: Carefully plan out at least one package that will contain "true" publics. This is the package that will be exported by OSGi. I recommend just one package. Here is how to tell OSGi to use it in GlassFish -- edit osgi.bundle like so:

        -exportcontents: org.glassfish.mymodule.truepublics; version=${project.osgi.version}

    Now all publics declared in any other packages will be visible module-wide but not outside the module. There is one caveat: accessing "module-private" items outside of the module is controlled at run-time, not compile-time. The compiler has no clue that a public in a dependent module isn't really public. It will happily compile it. At runtime you will definitely see fireworks. The good news is that you don't have to wait for the code path that tries to use the "module-private" items to fire. OSGi will complain loudly when that module gets loaded. OSGi will refuse to load it.
    You will see an error like this:

        remote failure: Error while loading FOO: Exception while adding the new configuration :
        Error occurred during deployment: Exception while loading the app :
        org.osgi.framework.BundleException: Unresolved constraint in bundle
        com.oracle.glassfish.miscreant.code [115]: Unable to resolve 115.0:
        missing requirement [115.0] osgi.wiring.package;
        (osgi.wiring.package=org.glassfish.mymodule.unexported).
        Please see server.log for more details.

    That is, if you accidentally change code in module B to use a public that is really a "module-private" in module A, then you will see the error immediately when you try to test whatever you were changing in module B.

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry, a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be.
    (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
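    Coherence's API is Java, but the optimistic pattern described above is language-neutral. Here is a minimal C# sketch of the idea, using a concurrent dictionary as a stand-in for the cache; the names are illustrative only, not a Coherence API:

        using System.Collections.Concurrent;

        public static class OptimisticUpdate
        {
            // Optimistic read-modify-write: state the assumption (the old value)
            // explicitly, and retry if another writer invalidated it in the meantime.
            public static void Credit(ConcurrentDictionary<string, decimal> cache,
                                      string key, decimal amount)
            {
                while (true)
                {
                    decimal oldValue;
                    if (!cache.TryGetValue(key, out oldValue))
                        return; // entry vanished; nothing to update

                    // TryUpdate succeeds only if the entry still equals oldValue,
                    // which is exactly the assumption an optimistic transaction states.
                    if (cache.TryUpdate(key, oldValue + amount, oldValue))
                        return;

                    // The read turned out to be stale; loop and try again.
                }
            }
        }

    The pessimistic alternative replaces the retry loop with an explicit lock held across the read and the write, trading concurrency for a guaranteed coherent read.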

    Read the article

  • Building Enterprise Smartphone App &ndash; Part 2: Platforms and Features

    - by Tim Murphy
    This is part 2 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group. Feel free to leave feedback. In the previous post I discussed what reasons a company might have for creating a smartphone application. In this installment I will cover some of the history and state of the different platforms, as well as features that can be leveraged for building enterprise smartphone applications.

    Platforms

    Before you start choosing a platform to develop your solutions on, it is good to understand how we got here and what features you can leverage.

    History

    To my memory we owe all of this to a product called the Apple Newton that came out in 1993. It was the first PDA and back then I was much more of an Apple fan. I was very impressed with this device even though it never really went anywhere. The Palm Pilot by US Robotics was the next major advancement in PDAs. It had a simple shorthand window that allowed for quick stylus entry. Later, Windows CE came out and started the broadening of the PDA market. After that it was the Palm and CE operating systems that started showing up on cell phones, and for some time these were the two dominant operating systems that were distributed with devices from multiple hardware vendors.

    Current

    The iPhone was the first smartphone to take away the stylus and give us a multi-touch interface. It was a revolution in usability and really changed the attractiveness of smartphones for the general public. This brought us to the beginning of the current state of the market with the concept of an online store that makes it easy for customers to get new features and functionality on demand. With Android, Google made this more than a one-horse race. Not only did they come to compete, their low cost actually made them the leading OS. Of course, what made Android so attractive is also its major fault. It is so open that it has been a target for malware, which leaves consumers exposed. Fortunately for Google though, most consumers aren't aware of the threat that they are under. Although Microsoft had put out one of the first smartphone operating systems with CE, it had to play catch-up and finally came out with the Windows Phone. They have gone for a market approach between those of iOS and Android. They support multiple hardware vendors like Google, but they kept a certification process for applications that is similar to Apple's. They also created a user interface that was different enough to give it a clear separation from the other two platforms. The result of all this is hundreds of millions of smartphones being sold across all three platforms, giving us a wide range of choices and challenges when it comes to developing solutions.

    Features

    So what are the features that make these devices flexible enough to be considered for use in the enterprise? The biggest advantage of today's devices is network connectivity. The ability to access information from multiple sources at a moment's notice is critical for businesses. Add to that the ability to communicate over a variety of text, voice and video modes and we have a powerful starting point. Every smartphone has a camera, and they are not just useful for posting to Instagram. We are seeing more applications such as Bing Vision that allow us to scan just about any printed code or text to find information. These capabilities have been made available to developers in the form of standard libraries for reading barcodes of just about any flavor and for optical character recognition (OCR) interpretation. Bluetooth gives us the ability to communicate with multiple devices. Whether these are headsets, keyboards or printers, the wireless communication capabilities are just starting to evolve. The more these wireless communication protocols grow, the more opportunities we will see to transfer data between users and a variety of devices. Local storage of information that can be called up even when the device cannot reach the network is the other big capability. This gives users the ability to work offline and transmit information when connections are restored. These are the tools that we have to work with to build applications that can be leveraged to gain a competitive advantage for companies that implement them.

    Coming Up

    In the third installment I will cover key concerns that you face when building enterprise smartphone apps.

    Read the article

  • Binding to the selected item in an ItemsControl

    - by Jensen
    I created a custom ComboBox as follows (note: the code is not correct, but you should get the general idea). The ComboBox contains two dependency properties which matter: TitleText and DescriptionText.

        <Grid>
            <TextBlock x:Name="Title"/>
            <Grid x:Name="CBG">
                <ToggleButton/>
                <ContentPresenter/>
                <Popup/>
            </Grid>
        </Grid>

    I want to use this ComboBox to display a wide range of options. I created a class called Setting, which inherits from DependencyObject, to create usable items; I created a DataTemplate to bind the contents of this Setting object to my ComboBox; and I created a UserControl which contains an ItemsControl which has as its template my previously mentioned DataTemplate. I can fill it with Setting objects.

        <DataTemplate x:Key="myDataTemplate">
            <ComboBox TitleText="{Binding Title}" DescriptionText="{Binding DescriptionText}"/>
        </DataTemplate>

        <UserControl>
            <Grid>
                <StackPanel Grid.Column="0">
                    <ItemsControl Template="{StaticResource myDataTemplate}">
                        <Item>
                            <Setting Title="Foo" Description="Bar">
                                <Option>Yes</Option><Option>No</Option>
                            </Setting>
                        </Item>
                    </ItemsControl>
                </StackPanel>
                <StackPanel Grid.Column="1">
                    <TextBlock x:Name="Description"/>
                </StackPanel>
            </Grid>
        </UserControl>

    I would like to have the DescriptionText of the selected ComboBox (selected via either the IsFocused property of the ComboBox control or the IsOpen property of the popup) placed in the Description TextBlock in my UserControl. One way I managed to achieve this was replacing my ItemsControl with a ListBox, but this caused several issues: it always showed a scrollbar even though I disabled it; it wouldn't catch focus when my popup was open, but only when I explicitly selected the item in the ListBox; when I enabled the OverridesDefaultStyle property the contents of the ListBox wouldn't show up at all; and I had to re-theme the ListBox control to match my UserControl layout. What's the best and easiest way to get my DescriptionText to show up without using a ListBox or creating a custom Selector control (as that had the same effect as a ListBox)? The goal at the end is to loop through all the items (maybe get them into an ObservableCollection of some sort) and to save them into my settings file.
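    One direction worth sketching (an outline under stated assumptions, not a verified fix): expose the description through a shared view model that each custom ComboBox writes to when it gains focus or opens its popup, and bind the Description TextBlock to that property. The class below is hypothetical; the event wiring would live in the custom ComboBox:

        using System.ComponentModel;

        // Hypothetical shared view model: each custom ComboBox sets
        // SelectedDescription from its GotFocus / popup-opened handler,
        // and the Description TextBlock binds to this property.
        public class SettingsViewModel : INotifyPropertyChanged
        {
            private string _selectedDescription;

            public string SelectedDescription
            {
                get { return _selectedDescription; }
                set
                {
                    if (_selectedDescription == value) return;
                    _selectedDescription = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("SelectedDescription"));
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

    With the UserControl's DataContext set to an instance of this class, the TextBlock becomes <TextBlock Text="{Binding SelectedDescription}"/> and no ListBox or custom Selector is needed.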

    Read the article

  • Interesting issue with WCF wsHttpBinding through a Firewall

    - by Marko
    I have a web application deployed at an internet hosting provider. This web application consumes a WCF service deployed on an IIS server located at my company's application server in order to have data access to the company's database; the network guys allowed me to expose this WCF service through a firewall for security reasons. A diagram would look like this:

        [Hosted page] --- (Internet) --- |Firewall <Public IP>:<Port-X>| --- [IIS with WCF Service <Comp. Network Ip>:<Port-Y>]

    I also wanted to use wsHttpBinding to take advantage of its security features, and encrypt sensitive information. After trying it out I get the following error:

        Exception Details: System.ServiceModel.EndpointNotFoundException: The message with To
        'http://<IP>:<Port>/service/WCFService.svc' cannot be processed at the receiver, due to
        an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and
        receiver's EndpointAddresses agree.

    Doing some research I found out that wsHttpBinding uses the WS-Addressing standards, and reading about this standard I learned that the SOAP header is enhanced to include tags like 'MessageID', 'ReplyTo', 'Action' and 'To'. So I'm guessing that, because the client application endpoint specifies the firewall's IP address and port, and the service replies with its internal network address (which is different from the firewall's IP), WS-Addressing fires the above message. I think that's a very good security measure, but it's not quite useful in my scenario. Quoting the WS-Addressing standard submission (http://www.w3.org/Submission/ws-addressing/): "Due to the range of network technologies currently in wide-spread use (e.g., NAT, DHCP, firewalls), many deployments cannot assign a meaningful global URI to a given endpoint. To allow these 'anonymous' endpoints to initiate message exchange patterns and receive replies, WS-Addressing defines the following well-known URI for use by endpoints that cannot have a stable, resolvable URI. http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous" HOW can I configure my wsHttpBinding endpoint to address my firewall's IP and to ignore or bypass the address specified in the 'To' WS-Addressing tag in the SOAP message header? Or do I have to change something in my service endpoint configuration? Help and guidance will be much appreciated. Marko. P.S.: Until I find a solution to this, I'm using basicHttpBinding with absolutely no problem, of course.
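    Two WCF mechanisms look relevant here, offered as hedged sketches rather than verified fixes. On the service side, [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)] relaxes the check that the 'To' header must match the endpoint address. On the client side, WCF can separate the logical address (what goes into the WS-Addressing 'To') from the physical address messages actually travel to, via the clientVia behavior. The URIs and contract below are placeholders, not the poster's real addresses:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Description;

        [ServiceContract]
        public interface IMyService   // placeholder contract, not the real one
        {
            [OperationContract]
            string Ping();
        }

        public static class ClientViaExample
        {
            public static IMyService CreateClient()
            {
                // Logical address: what the service expects in the WS-Addressing 'To' header.
                EndpointAddress logicalAddress =
                    new EndpointAddress("http://appserver:8001/service/WCFService.svc");

                ChannelFactory<IMyService> factory = new ChannelFactory<IMyService>(
                    new WSHttpBinding(SecurityMode.Message), logicalAddress);

                // Physical address: where the message is actually sent (the firewall's public side).
                factory.Endpoint.Behaviors.Add(
                    new ClientViaBehavior(new Uri("http://203.0.113.10:9001/service/WCFService.svc")));

                return factory.CreateChannel();
            }
        }

    The same split is available in configuration through the <clientVia> endpoint behavior element.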

    Read the article

  • Mercurial client error 255 and HTTP error 404 when attempting to push large files to server

    - by coderunner
    Problem: When attempting to push a changeset that contains 6 large files (.exe, .dmg, etc.) to my remote server, my client (MacHG) is reporting the error: "Error During Push. Mercurial reported error number 255: abort: HTTP Error 404: Not Found" What does the error even mean?! The only thing unique (that I can tell) about this commit is the size, type, and filenames of the files. How can I determine which exact file within the changeset is failing? How can I delete the corrupt changeset from the repository? Someone reported using "mq" extensions, but it looks overly complicated for what I'm trying to achieve. Background: I can push and pull the following: source files, directories, .class files and a .jar file to and from the server, using both MacHG and TortoiseHG. I successfully committed to my local repository the addition, for the first time, of the 6 large .exe, .dmg etc. installer files (about 130Mb total). In the following commit to my local repository, I removed ("untracked" / forget) the 6 files causing the problem; however, the previous (failing) changeset is still queued to be pushed to the server (i.e. my local host is trying to push the "add" and then the "remove" to the remote server - and keep aligned with the "keep everything in history" philosophy of the source control system). I can commit .txt and .java files etc. using TortoiseHG from Windows PCs. I haven't actually tested committing or pushing the same large files using TortoiseHG. Please help! Setup: Client applications = MacHG v0.9.7 (SCM 1.5.4), and TortoiseHG v1.0.4 (SCM 1.5.4). Server = HTTPS, IIS7.5, Mercurial 1.5.4, Python 2.6.5, set up using these instructions: http://www.jeremyskinner.co.uk/mercurial-on-iis7/ In IIS7.5 the CGI handler is configured to handle ALL verbs (not just GET, POST and HEAD). My hgweb.cgi file on the server is as follows:

        #!/usr/bin/env python
        #
        # An example hgweb CGI script, edit as necessary

        # Path to repo or hgweb config to serve (see 'hg help hgweb')
        #config = "/path/to/repo/or/config"

        # Uncomment and adjust if Mercurial is not installed system-wide:
        #import sys; sys.path.insert(0, "/path/to/python/lib")

        # Uncomment to send python tracebacks to the browser if an error occurs:
        #import cgitb; cgitb.enable()

        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb import hgweb, wsgicgi
        application = hgweb('C:\inetpub\wwwroot\hg\hgweb.config')
        wsgicgi.launch(application)

    My hgweb.config file on the server is as follows:

        [collections]
        C:\Mercurial Repositories = C:\Mercurial Repositories

        [web]
        baseurl = /hg
        allow_push = usernamea
        allow_push = usernameb

    Read the article

  • Subversion vision and roadmap

    - by gbjbaanb
    Recently C. Michael Pilato of the core Subversion team posted a mail to the Subversion dev mailing list suggesting a vision and roadmap for the future of Subversion. Naturally, he wanted as much feedback and response as possible, which is why I'm posting this here - to elicit some suggestions and contributions from you, the users of Subversion. Any comments are welcome, and I shall feed back a synopsis with a link to this question to the dev mailing list. Similarly, I've created a post on ServerFault to get feedback from the administrator side of things too. So, without further ado:

    Vision

    The first thing on his "vision statement" is: Subversion has no future as a DVCS tool. Let's just get that out there. At least two very successful such tools exist already, and to squeeze another horse into that race would be a poor investment of energy and talent. There's no need to suggest distributed features for Subversion. If you want a DVCS, there should be no ill feeling if you migrate to Git, Mercurial or Bazaar. As he says, it's pointless trying to make SVN like them when they already exist, especially when there are different usage patterns that SVN should be targeting. The vision for Subversion is: Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations.

    Roadmap

    Several ideas were suggested as being "very nice to have" and are offered as the starting point of a future roadmap. These are:

    Obliterate
    Shelve/Checkpoint
    Repository-dictated Configuration
    Rename Tracking
    Improved Merging
    Improved Tree Conflict Handling
    Enterprise Authentication Mechanisms
    Forward History Searching
    Log Message Templates

    If anyone has suggestions to add, or comments on these, the Subversion community would welcome all of them.

    Community

    And lastly, there was a call for more people to become involved with Subversion development. As with most OSS projects it can be daunting to join, but there is now a push for more to be done to help. If you feel like you can contribute, please do so.

    Read the article

  • Problems related to showing MessageBox from non-GUI threads

    - by Hans Løken
    I'm working on a heavily data-bound WinForms application where I've found some strange behavior. The app has separate I/O threads receiving updates through asynchronous web requests, which it then sends to the main/GUI thread for processing and updating of application-wide data stores (which in turn may be data-bound to various GUI elements, etc.). The server at the other end of the web requests requires periodic requests or the session times out. I've gone through several attempted solutions for dealing with the thread issues, and I've observed the following behavior:

    If I use Control.Invoke for sending updates from the I/O thread(s) to the main thread, and an update causes a MessageBox to be shown, the main form's message pump stops until the user clicks the OK button. This also blocks the I/O thread from continuing, eventually leading to timeouts on the server.

    If I use Control.BeginInvoke for sending updates from the I/O thread(s) to the main thread, the main form's message pump does not stop, but if the processing of an update leads to a message box being shown, the processing of the rest of that update is halted until the user clicks OK. Since the I/O threads keep running and the message pump keeps processing messages, several BeginInvoke calls for updates may be made before the one with the message box is finished. This leads to out-of-sequence updates, which is unacceptable.

    Currently, the I/O threads add updates to a blocking queue (very similar to http://stackoverflow.com/questions/530211/creating-a-blocking-queuet-in-net/530228#530228), and the GUI thread uses a Forms.Timer that periodically applies all updates in the blocking queue. This solution solves both the problem of blocking I/O threads and the sequentiality of updates, i.e. the next update will never be started until the previous one is finished. However, there is a small performance cost, as well as a latency in showing updates, that is unacceptable in the long run. I would like update processing in the main thread to be event-driven rather than polling. So to my question: how should I do this in a way that will:

    avoid blocking the I/O threads,
    guarantee that updates are finished in sequence, and
    keep the main message pump running while showing a message box as a result of an update?
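    One possible sketch (an outline, not a definitive answer): replace the timer with SynchronizationContext.Post, and have the UI-thread handler drain a queue behind a reentrancy guard. Posting never blocks the I/O threads, the queue preserves arrival order, and if a MessageBox pumps messages mid-update, any re-entrant call bails out while the outer loop finishes the remaining updates in order once the dialog closes:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        public class UiUpdatePump
        {
            private readonly ConcurrentQueue<Action> _updates = new ConcurrentQueue<Action>();
            private readonly SynchronizationContext _ui;   // capture on the UI thread
            private bool _processing;                      // touched only on the UI thread

            public UiUpdatePump(SynchronizationContext uiContext)
            {
                _ui = uiContext;
            }

            // Called from I/O threads; returns immediately, never blocks.
            public void PostUpdate(Action update)
            {
                _updates.Enqueue(update);
                _ui.Post(delegate { ProcessQueue(); }, null);
            }

            // Runs on the UI thread. The guard makes re-entry (e.g. from a
            // MessageBox pumping messages) a no-op; the outer loop keeps order.
            private void ProcessQueue()
            {
                if (_processing) return;
                _processing = true;
                try
                {
                    Action update;
                    while (_updates.TryDequeue(out update))
                        update();   // may show a MessageBox; sequence still holds
                }
                finally
                {
                    _processing = false;
                }
            }
        }

    Construct it on the GUI thread with SynchronizationContext.Current so Post marshals onto the message pump. Being event-driven, it removes the Forms.Timer latency; the ConcurrentQueue and the names here are illustrative choices, not the only ones.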

    Read the article

  • doofus ASP.Net master page scrollbar question

    - by Stephen Falken
    Like happens to all of us sometimes, I inherited some crappy code I have to fix. We need to center our pages on widescreen machines, so we have a master page layout div like so:

        .MasterLayout {
            width: 1100px;
            height: 100%;
            position: absolute;
            left: 50%;
            margin-left: -550px;
            vertical-align: top;
        }

    I removed most of the detailed attributes for readability here, but here's how the table for the master page is laid out:

        <table border="0" cellpadding="0" cellspacing="0" style="width: 100%;">
            <tr>
                <td style="width: 100%" align="center" colspan="2">
                </td>
            </tr>
            <tr>
                <td colspan="2" style="height: 20px; background-color: #333;">
                    <asp:SiteMapPath/>
                </td>
            </tr>
            <tr>
                <td style="width: 86px; height: 650px; background-color: #B5C7DE; margin: 6px;" valign="top">
                    <asp:Menu />
                    <asp:SiteMapDataSource />
                </td>
                <td style="background-color:#ffffff; margin:5px; width:1000px;" valign="top">
                    <asp:contentplaceholder id="ContentPlaceHolder1" runat="server"/>
                </td>
            </tr>
        </table>

    When resizing the browser window, the horizontal scrollbar only reaches as far as the left edge of the <asp:contentplaceholder/> control, and the <asp:menu/> that's in the 86px wide <td> is hidden. How can I fix this problem? THANKS IN ADVANCE

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >