Search Results

Search found 2568 results on 103 pages for 'lookup webmaster'.


  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath

    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? How does it relate to master data?

    Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets - including master data - and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that it will need to adapt its product line taxonomy to include a new category describing the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents a change to master data, while changes to product categories and the geo hierarchy are examples of reference data changes. [1]

    The following are illustrative examples of reference data, grouped by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.

    - Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status); Status Codes; Role Codes; Domain Values
    - Taxonomies: Industry Classification Categories and Codes, e.g., the North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10
    - Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data?

    Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation for analyzing transaction data. Further, mapping reference data often requires human judgment.

    Sample Use Cases of Reference Data Management

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, and so on. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes at very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much as master data management does. Moreover, to build reports that help understand the utilization, frequency, and quality of diagnoses, medical practitioners may need to "cross-walk" mappings - either forward to ICD-10 or backward to ICD-9, depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage its spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify, and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), as well as ACH and bank systems, can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Change Management: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws - governed at the state and local level - that regulate financial product innovation as it relates to consumer loans, check cashing, and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical reference data challenge they encounter is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico, and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries.

    Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.

    References
    [1] Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.
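    To make the cross-walk idea discussed above concrete, here is a minimal Python sketch (my own illustration, not from the article; the code pairs are invented placeholders, not real GEM mappings) of a forward and backward mapping table. The one-to-many fan-out is exactly where the human judgment described earlier comes in:

        # Illustrative only: these code pairs are made up, not approved ICD mappings.
        forward = {                       # ICD-9 -> candidate ICD-10 codes
            "250.00": ["E11.9"],          # 1:1, can be mapped automatically
            "733.00": ["M81.0", "M81.8"], # 1:many, needs a human to pick one
        }

        # Derive the backward cross-walk for reports on an older time horizon.
        backward = {}
        for icd9, icd10s in forward.items():
            for icd10 in icd10s:
                backward.setdefault(icd10, []).append(icd9)

        def crosswalk(code, direction="forward"):
            table = forward if direction == "forward" else backward
            return table.get(code, [])    # empty list -> no approved mapping yet

        print(crosswalk("733.00"))             # ['M81.0', 'M81.8']
        print(crosswalk("E11.9", "backward"))  # ['250.00']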

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive, and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use.

    Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include:

    Oracle CRM On Demand Disconnected Mobile Sales (DMS): A disconnected native iPad solution, DMS has further streamlined the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing including message response tracking, and improvements in administration and configuration such as more field management options for read-only fields, role management, and enhanced logging.

    Oracle CRM On Demand Connected Mobile Sales: This add-on mobile service provides a configurable mobile solution on iOS, BlackBerry, and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry, and Android users. New features also include Single Sign-On to enhance security for mobile users.

    Oracle CRM On Demand Desktop: This application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup.

    Mobile Browser Support: The following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 tablets, and Google Android.

    Leverage the Social Enterprise: Engaging customers via social channels is rapidly becoming a significant key to an enhanced customer experience, as it provides proactive customer service, targeted messaging, and greater intimacy throughout the entire customer lifecycle. Listening to customers on the social channels can identify a customer's sphere of influence and the real value they bring to their organization, or the impact they can have on an opportunity. Servicing the customer's need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagements, thus increasing revenue. Oracle CRM On Demand leverages the Social Enterprise through its integration with Oracle's Social Relationship Management (SRM) product suite by providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM), and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM and have leads entered on an SM lead form automatically entered into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process.

    Get Smarter with Actionable Insight: The difference between making good decisions and great decisions depends heavily upon the quality, structure, and availability of the information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics reports folders, allowing for read-only access to reports, and additional field and object coverage.

    Get More Productive with Powerful Tools: Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons, and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related-information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, along with enhancements to Custom Object management. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to.

    Get the Best Value: Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry's lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration, and extensibility features, along with enhanced industry editions to save you time and money. New features include:

    Business Process Administration: A new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied.

    Expanded Integration: Oracle continues to improve Web Services with each release by adding more object coverage, enabling customers and partners to easily integrate with CRM On Demand.

    Bottom Line: Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different from ODI 11g, but for good reason. You can use this component for composing data with relational-style operations such as sum, average, and so forth. Also, Oracle SQL supports special functions called analytic SQL functions; in ODI 12c you can now use a specially configured aggregation component or the expression component for these.

    In database systems, an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning - that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here - the viewlet illustrates a simple aggregation being built and then some Oracle analytic SQL, such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO), built using both the aggregate component and the expression component.

    In 11g you used to just write the aggregate expression directly on the target. This made life easy for some cases, but it wasn't a very obvious gesture, plus it had other drawbacks with ordering of transformations (agg before join/lookup, after set, and so forth) and with supporting analytic SQL, for example - there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator.

    The aggregate component has a few interesting aspects.

    1. First and foremost, it defines the attributes projected from it. ODI automatically performs the grouping; all you do is define the aggregation expressions for those columns that are aggregated. In 12c you can control this automatic grouping behavior so that you get the code you desire, so you can indicate that an attribute should not be included in the group by - that's what I did in the analytic SQL example using the aggregate component.

    2. The component has a few other properties of interest: it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the HAVING clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is applied after the aggregate (not sometimes after, sometimes as a HAVING).

    3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by from an example in the Oracle Database Data Warehousing Guide, where the SUM aggregate function is used along with the CUBE function in the group by clause.

    The SQL I am trying to generate looks like the following, from the Data Warehousing Guide:

        SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
               TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
        FROM sales, customers, times, channels, countries
        WHERE sales.time_id = times.time_id
          AND sales.cust_id = customers.cust_id
          AND sales.channel_id = channels.channel_id
          AND customers.country_id = countries.country_id
          AND channels.channel_desc IN ('Direct Sales', 'Internet')
          AND times.calendar_month_desc IN ('2000-09', '2000-10')
          AND countries.country_iso_code IN ('GB', 'US')
        GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);

    I can capture the source datastores, the filters, and the joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and then add the aggregate component for the sum and group by. In the mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides for a familiar design experience.

    The example below illustrates flow design (basic arbitrary ordering) - a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those having the maximum commission projected.

    Hopefully this has given you a taste of some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs, with the tool (rather than the interface designer) taking care of the realization using ODI's knowledge modules. I'd be interested to know if a deep dive into each component would be interesting for folks. Any thoughts?
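    As an aside, the filter-versus-HAVING distinction the component now makes explicit can be illustrated outside SQL. Here is a small Python sketch (my own, not from the post; the department/salary rows are invented) where the row-level predicate runs before grouping and the group-level predicate runs after:

        from collections import defaultdict

        rows = [("ACCOUNTING", 1300), ("ACCOUNTING", 2450),
                ("RESEARCH", 800), ("RESEARCH", 2975)]

        # Filter before the aggregate (a plain row filter, like WHERE):
        pre = defaultdict(int)
        for dept, sal in rows:
            if sal > 1000:          # row-level predicate
                pre[dept] += sal

        # Filter after the aggregate (like HAVING, acting on grouped results):
        totals = defaultdict(int)
        for dept, sal in rows:
            totals[dept] += sal
        post = {d: t for d, t in totals.items() if t > 3000}  # group-level predicate

        print(dict(pre))   # {'ACCOUNTING': 3750, 'RESEARCH': 2975}
        print(post)        # {'ACCOUNTING': 3750, 'RESEARCH': 3775}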

    Read the article

  • Best way to store a large amount of game objects and update the ones onscreen

    - by user3002473
    Good afternoon guys! I'm a young beginner game developer working on my first large-scale game project, and I've run into a situation where I'm not quite sure what the best solution may be (if a single best solution even exists). The question may be vague (if anyone can think of a better title after having read the question, please edit it) or broad, but I'm not quite sure what to do, and I thought it would help just to discuss the problem with people more educated in the field.

    Before we get started, here are some of the questions I've looked at for help in the past:

    Best way to keep track of game objects
    Elegant way to simulate large amounts of entities within a game world
    What is the most efficient container to store dynamic game objects in?

    I've also read articles about different data structures commonly used in games to store game objects, such as this one about slot maps, but none of them are really what I'm looking for. Also, if it helps at all, I'm using Python 3 to design the game. It has to be Python 3; if I could, I would use C++ or UnityScript or something else, but I'm restricted to having to use Python 3.

    My game will be a form of side-scrolling shooter. In said game the player will traverse large rooms with large amounts of enemies and other game objects to update (think some of the larger areas in Cave Story or Iji). The player obviously can't see the entire room all at once, so there is a viewport that follows the player around and renders only a selection of the room and the game objects that it contains. This is not a foreign concept. The part that's getting me confused has to do with how certain game objects are updated. Some of them are to be updated constantly, regardless of whether or not they can be seen. Other objects, however, are only to be updated when they are onscreen (for example, an enemy would only be updated to react to the player when it is onscreen or when it is in a certain range of the screen).

    Another problem is that game objects have to be easily referable by other game objects; something that happens in the player's update() method may affect another object in the world. Collision detection in games is always a serious problem; I need a way of containing the game objects such that it minimizes the number of cases when testing for collisions against one another. The final problem is that of creating and destroying game objects. I think this problem is pretty self-explanatory.

    To store the game objects, I've considered a number of different methods. The original method I had was to simply store all the objects in a hash table by an id. This method is simple and decently fast, as it allows all the objects to be looked up in O(1) complexity, and also allows them to be deleted fairly easily. Hash collisions would not be a major problem; I wasn't originally planning on using computer-generated ids to store the game objects, as I was going to rely on ids given to them by the game designer (such names would be strings like 'Player' or 'EnemyWeapon4'), and even if I did use computer-generated ids, with a decent hashing algorithm the chances of collisions would be around 1 in 4 billion. The problem with using a hash table, however, is that it is inefficient in checking to see what objects are in range of the viewport. Considering the fact that certain game objects move (as well as the viewport itself), the only solution I could think of in order to update only the objects that are in the viewport would be to iterate through every object in the hash table and check whether it is in the viewport or not, updating only the ones that are in the valid area. This would be incredibly slow in scenarios where the number of game objects exceeds 500, or even 200.

    The second solution was to store everything in a 2-d list. The world is partitioned up into cells (a tilemap, essentially), where each cell or tile is the same size and is square. Each cell would contain a list of the game objects that are currently occupying it (each game object would be inserted into a cell depending on the center of the object's collision mask). A 2-d list would allow me to take the top-left and bottom-right corners of the viewport and easily grab a rectangular area of the grid containing only the cells containing entities that are in valid range to be updated. This method also solves the problem of collision detection: when I take an entity, I can find the cell that it is currently in, then check only against entities in its cell and the 8 cells around it. One problem with this system, however, is that it prohibits easy lookup of game objects. One solution I had would be to simultaneously keep a hash table that would contain all the positions of the objects in the 2-d list, indexed by the id of said object. The major problem with a 2-d list is that it would need to be rebuilt every single game frame (along with the hash table of object positions), which may be a serious detriment to game speed.

    Both systems have ups and downs and seem to solve some of each other's problems; however, using them both together doesn't seem like the best solution either. If anyone has any thoughts, ideas, suggestions, comments, opinions, or solutions on new data structures or better implementations of the existing data structures I have in mind, please post; any and all criticism and help is welcome. Thanks in advance!

    EDIT: Please don't close the question because it has a bad title; I'm just bad with names!
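    One way to combine the two approaches above without rebuilding anything per frame is a spatial-hash variant of the 2-d list. The sketch below (my own illustration, not from the question; it assumes game objects with hypothetical id, x, and y attributes, and a made-up cell size) keeps the id lookup table and the cell grid in sync, and only re-files an object when it actually crosses a cell boundary:

        from collections import defaultdict

        CELL = 64  # cell size in pixels; tune to the typical object size

        class SpatialGrid:
            def __init__(self):
                self.cells = defaultdict(set)   # (cx, cy) -> set of objects
                self.by_id = {}                 # designer-assigned id -> object

            def _cell(self, x, y):
                return (int(x) // CELL, int(y) // CELL)

            def add(self, obj):
                self.by_id[obj.id] = obj
                obj.cell = self._cell(obj.x, obj.y)
                self.cells[obj.cell].add(obj)

            def remove(self, obj):
                self.cells[obj.cell].discard(obj)
                del self.by_id[obj.id]

            def moved(self, obj):
                # Call after an object moves; cheap if it stayed in its cell.
                new = self._cell(obj.x, obj.y)
                if new != obj.cell:
                    self.cells[obj.cell].discard(obj)
                    self.cells[new].add(obj)
                    obj.cell = new

            def in_rect(self, x0, y0, x1, y1):
                # All objects whose cells intersect the viewport rectangle.
                cx0, cy0 = self._cell(x0, y0)
                cx1, cy1 = self._cell(x1, y1)
                for cx in range(cx0, cx1 + 1):
                    for cy in range(cy0, cy1 + 1):
                        # .get avoids auto-creating empty cells on queries
                        yield from self.cells.get((cx, cy), ())

            def near(self, obj):
                # Collision candidates: the object's cell and the 8 around it.
                cx, cy = obj.cell
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        yield from self.cells.get((cx + dx, cy + dy), ())

    With this shape, always-updated objects can live in a separate plain list, onscreen updates come from in_rect over the viewport corners, and the O(1) lookup by designer id is kept because by_id is maintained alongside the cells.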

    Read the article

  • Resolve Wrong IP from Domain Name only on certain networks

    - by Godric Seer
    I host a personal website on an old desktop that is LAMP based. There are several strange things about this problem, so I will break it down into steps. Since I have a dynamic IP, I use No-IP to make sure I have a working domain name at all times. I use the automatic update client, and I logged in and checked that my No-IP domain has the proper IP tied to it. Here is a link to the homepage through the No-IP domain for reference. Also, I do a ping and a traceroute on the No-IP domain and get:

        [eckertzs@localhost ~]$ ping -c 1 endradil.noip.me
        PING endradil.noip.me (65.24.215.99) 56(84) bytes of data.
        64 bytes from endradil.noip.me (65.24.215.99): icmp_seq=1 ttl=64 time=2.23 ms

        --- endradil.noip.me ping statistics ---
        1 packets transmitted, 1 received, 0% packet loss, time 104ms
        rtt min/avg/max/mdev = 2.233/2.233/2.233/0.000 ms

        [eckertzs@localhost ~]$ traceroute endradil.noip.me
        traceroute to endradil.noip.me (65.24.215.99), 30 hops max, 60 byte packets
         1  . (192.168.2.1)  1.755 ms  5.409 ms  5.380 ms
         2  endradil.noip.me (65.24.215.99)  6.297 ms  9.543 ms  10.324 ms

    Using this domain, I can connect to my webserver without issue or interruption (the https is required to avoid a server-side redirect, but it works). I also have a domain I bought on GoDaddy where I have a CNAME record forwarding the www subdomain to my No-IP domain:

        CNAME Record
        Host: www
        Points to: endradil.noip.me
        TTL: 1 hour

    For the past several weeks, I never had an issue using the GoDaddy domain to connect (ssh or https). As of the past few days, however, the GoDaddy domain has only worked intermittently, for a few minutes at a time, and then will go down for hours at a time. I get "server not found" errors most of the time. Also, if I happen to be using the GoDaddy domain for an ssh connection, the connection will freeze. I have run online tests of the DNS and have seen that the website is visible by external servers and resolved to the correct IP. I also contacted GoDaddy support, but they had no issues connecting to the website and therefore did not see any issues.

    My personal computers (Windows desktop, Linux laptop, Android phone) all fail to connect when on my personal wifi. If I disconnect my phone from the wifi and use my AT&T wireless data, it can connect with both domains without issue. When I attempt to use Google Webmaster Tools to crawl the site using the GoDaddy domain, Google can not find the site. From my Linux laptop, I have found some interesting results when I ping or traceroute the domain:

        [eckertzs@localhost ~]$ ping -c 1 www.endradil.com
        PING www.endradil.com.Belkin (198.105.244.228) 56(84) bytes of data.

        --- www.endradil.com.Belkin ping statistics ---
        1 packets transmitted, 0 received, 100% packet loss, time 10000ms

        [eckertzs@localhost ~]$ traceroute www.endradil.com
        traceroute to www.endradil.com (198.105.244.228), 30 hops max, 60 byte packets
         1  . (192.168.2.1)  1.918 ms  2.806 ms  2.772 ms
         2  cpe-65-24-208-1.insight.res.rr.com (65.24.208.1)  29.247 ms  29.654 ms  30.094 ms
         3  cpe-69-23-24-117.new.res.rr.com (69.23.24.117)  15.597 ms  23.218 ms  23.581 ms
         4  agg24.clmcohib01r.midwest.rr.com (65.29.1.52)  30.581 ms  30.556 ms  31.192 ms
         5  be27.clevohek01r.midwest.rr.com (65.29.1.38)  30.580 ms  31.062 ms  31.038 ms
         6  bu-ether25.atlngamq47w-bcr01.tbone.rr.com (107.14.19.38)  37.863 ms  68.844 ms  43.773 ms
         7  107.14.17.178 (107.14.17.178)  51.866 ms  51.019 ms  50.989 ms
         8  ae0.pr1.dca10.tbone.rr.com (107.14.17.200)  48.467 ms  ae-4-0.a0.lax91.tbone.rr.com (66.109.1.113)  49.912 ms  *
         9  v413.core1.ash1.he.net (209.51.175.33)  60.270 ms  50.842 ms  50.819 ms
        10  100ge5-1.core1.nyc4.he.net (184.105.223.166)  55.597 ms  56.045 ms  56.020 ms
        11  xerocole-inc.10gigabitethernet12-4.core1.nyc4.he.net (216.66.41.242)  56.001 ms  55.969 ms  55.992 ms
        12  * * *

    Both show the incorrect IP. Also, the traceroute times out on hops 12 through 255 (output truncated above). The traceroute using site24x7 works and shows reasonable results when run from their California server. From another Linux box on a different network but in the same city as me (10 miles away), I still get a timeout for the traceroute; however, the IP resolves correctly for the domain. From this I believe that the DNS result is incorrectly cached in either my router/modem or perhaps even at my ISP level. My question is, first, how do I find out exactly what is wrong, and second, how do I resolve it?
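    Not part of the original question, but one way to confirm where the bad record lives (the router at 192.168.2.1, the ISP, or upstream) is to ask different resolvers directly and compare their answers. A sketch using the third-party dnspython package (the hostname is reused from the post; 8.8.8.8 is just an example of a public resolver, and on dnspython older than 2.0 the method is resolver.query() rather than resolve()):

        import dns.resolver

        name = "www.endradil.com"

        local = dns.resolver.Resolver()            # uses /etc/resolv.conf (the router)
        public = dns.resolver.Resolver(configure=False)
        public.nameservers = ["8.8.8.8"]           # bypass the local cache entirely

        for label, resolver in (("local", local), ("public", public)):
            try:
                answers = resolver.resolve(name, "A")
                print(label, [a.address for a in answers])
            except Exception as exc:
                print(label, "failed:", exc)

    If the local answer differs from the public one (and the ping above resolving to www.endradil.com.Belkin hints at the router's DNS), the stale record is being served by the router/ISP resolver rather than by GoDaddy.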

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.

    To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played.

    Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

        <MFL>
          <Position/>
          <Position/>
          <Position/>
          <Team>
            <Player>
              <Statistic/>
            </Player>
          </Team>
        </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id.

    So now that we have a simple data model laid out, I would like to generate two things based on it:

    - A class for each entity with properties corresponding to each entity property
    - An IO class with methods to get data for each entity, either all instances, by Id, or by parent

    Now my experience with code generation in the past has consisted of writing up little apps that use the code dom directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select "Add Code Generation Item".

    Just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file.

    When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no intellisense, no nothing! That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue, and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor.

    Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.

    Read the article

  • jqGrid - dynamically load different drop down values for different rows depending on another column value

    - by Renso
    Goal: As we all know, the jqGrid examples in the demo and the wiki always refer to static values for drop-down boxes. This is of course a personal preference, but in a dynamic design these values should be populated from the database/XML file, etc., ideally JSON formatted. Can you do this in jqGrid? Yes, but with some custom coding, which we will briefly show below (refer to some of my other blog entries for a more detailed discussion on this topic). What you CANNOT do in jqGrid - referring here to versions up to and including 3.8.x - is load different drop-down values for different rows in the grid. Well, not without some trickery, which is what this discussion is about.

    Issue: Of course the issue is that jqGrid has been designed for high performance, and thus I have no issue with it loading a reference to a single drop-down values list for every column. This way, whether you have 500 rows or one, each row only refers to a single list for that particular column. Nice! So how easy would it be to simply traverse the grid once it is loaded, on gridComplete or loadComplete, and simply load the select tag's options from scratch - via ajax, from a memory variable, hard coded, etc.? Impossible! Since there is no embedded SELECT tag within each cell containing the drop-down values (remember, it only has a reference to that list in memory), all you will see when you inspect the cell prior to clicking on it, or even before and on beforeEditCell, is an empty <TD></TD>. Trying to load that list via a click event on that cell will temporarily load the list, but jqGrid's last internal callback event will remove it and replace it with the old one, and you are back to square one.

    Solution: Yes, after spending a few hours on this I found a solution to the problem that does not require any updates to the jqGrid source code, thank GOD! Before we get into the coding details: the solution here can of course be customized to suit your specific needs. This one loads, once, into a global variable, the entire drop-down list that would be needed across all rows. I then parse this object, which contains all the properties I need, to filter the rows depending on which ones I want the user to see, based off of another cell value in that row. This only happens when clicking the cell, so there is no performance penalty. You may of course choose to load the list via ajax when the user clicks the cell, but I found it more efficient to load the entire list as part of jqGrid's normal editoptions: { multiple: false, value: listingStatus } colModel options, which again keeps only a reference to the single list - no duplication. Let's get into the meat and potatoes of it.
        var acctId = $('#Id').val();
        var data = $.ajax({
            url: $('#ajaxGetAllMaterialsTrackingLookupDataUrl').val(),
            data: { accountId: acctId },
            dataType: 'json',
            async: false,
            success: function(data, result) {
                if (!result)
                    alert('Failure to retrieve the Alert related lookup data.');
            }
        }).responseText;
        var lookupData = eval('(' + data + ')');
        var listingCategory = lookupData.ListingCategory;
        var listingStatus = lookupData.ListingStatus;
        var catList = '{';
        $(lookupData.ListingCategory).each(function() {
            catList += this.Id + ':"' + this.Name + '",';
        });
        catList += '}';
        var lastsel;
        var ignoreAlert = true;
        $(item).jqGrid({
            url: listURL,
            postData: '',
            datatype: "local",
            colNames: ['Id', 'Name', 'Commission<br />Rep', 'Business<br />Group', 'Order<br />Date', 'Edit', 'TBD', 'Month', 'Year', 'Week', 'Product', 'Product<br />Type', 'Online/<br />Magazine', 'Materials', 'Special<br />Placement', 'Logo', 'Image', 'Text', 'Contact<br />Info', 'Everthing<br />In', 'Category', 'Status'],
            colModel: [
                { name: 'Id', index: 'Id', hidden: true, hidedlg: true },
                { name: 'AccountName', index: 'AccountName', align: "left", resizable: true, search: true, width: 100 },
                { name: 'OnlineName', index: 'OnlineName', align: 'left', sortable: false, width: 80 },
                { name: 'ListingCategoryName', index: 'ListingCategoryName', width: 85, editable: true, hidden: false, edittype: "select", editoptions: { multiple: false, value: eval('(' + catList + ')') }, editrules: { required: false }, formatoptions: { disabled: false } }
            ],
            jsonReader: {
                root: "List",
                page: "CurrentPage",
                total: "TotalPages",
                records: "TotalRecords",
                userdata: "Errors",
                repeatitems: false,
                id: "0"
            },
            rowNum: $rows,
            rowList: [10, 20, 50, 200, 500, 1000, 2000],
            imgpath: jQueryImageRoot,
            pager: $(item + 'Pager'),
            shrinkToFit: true,
            width: 1455,
            recordtext: 'Traffic lines',
            sortname: 'OrderDate',
            viewrecords: true,
            sortorder: "asc",
            altRows: true,
            cellEdit: true,
            cellsubmit: "remote",
            cellurl: editURL + '?rows=' + $rows + '&page=1',
            loadComplete: function() {
            },
            gridComplete: function() {
            },
            loadError: function(xhr, st, err) {
            },
            afterEditCell: function(rowid, cellname, value, iRow, iCol) {
                var select = $(item).find('td.edit-cell select');
                $(item).find('td.edit-cell select option').each(function() {
                    var option = $(this);
                    var optionId = $(this).val();
                    $(lookupData.ListingCategory).each(function() {
                        if (this.Id == optionId) {
                            if (this.OnlineName != $(item).getCell(rowid, 'OnlineName')) {
                                option.remove();
                                return false;
                            }
                        }
                    });
                });
            },
            search: true,
            searchdata: {},
            caption: "List of all Traffic lines",
            editurl: editURL + '?rows=' + $rows + '&page=1',
            hiddengrid: hideGrid
        });

    Here is the JSON data returned via the ajax call during the jqGrid function call above (NOTE: it must be { async: false }):

        {"ListingCategory":[
          {"Id":29,"Name":"Document Imaging & Management","OnlineName":"RF Globalnet"},
          {"Id":1,"Name":"Ancillary Department Hardware","OnlineName":"Healthcare Technology Online"},
          {"Id":2,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
          {"Id":3,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
          {"Id":4,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
          {"Id":5,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"},
          {"Id":6,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"},
          {"Id":7,"Name":"EMR/EHR Software","OnlineName":"Healthcare Technology Online"}]}

    I only need the Id and Name for the drop-down list, but the third column in the JSON object is important: it is the one that I match up with the OnlineName column in the jqGrid, and then, in the loop during afterEditCell, I simply remove the options I don't want the user to see. That's it!

    Read the article

  • Node Serialization in NetBeans Platform 7.0

    - by Geertjan
    Node serialization makes sense when you're not interested in the data (since that should be serialized to a database), but in the state of the application. For example, when the application restarts, you want the last selected node to automatically be selected again. That's not the kind of information you'll want to store in a database; hence node serialization is not about data serialization but about application state serialization. I've written about this topic in October 2008, here and here, but want to show how to do this again, using NetBeans Platform 7.0. Somewhere I remember reading that this can't be done anymore, and that's typically the best motivation for me, i.e., to prove that it can be done after all. Anyway, in a standard POJO/Node/BeanTreeView scenario, do the following:

    Remove the "@ConvertAsProperties" annotation at the top of the class, which you'll find there if you used the Window Component wizard. We're not going to use property-file based serialization, but plain old java.io.Serializable instead.

    In the TopComponent, assuming it is named "UserExplorerTopComponent", typically at the end of the file, add the following:

        @Override
        public Object writeReplace() {
            //We want to work with one selected item only
            //and thanks to BeanTreeView.setSelectionMode,
            //only one node can be selected anyway:
            Handle handle = NodeOp.toHandles(em.getSelectedNodes())[0];
            return new ResolvableHelper(handle);
        }

        public final static class ResolvableHelper implements Serializable {
            private static final long serialVersionUID = 1L;
            public Handle selectedHandle;
            private ResolvableHelper(Handle selectedHandle) {
                this.selectedHandle = selectedHandle;
            }
            public Object readResolve() {
                WindowManager.getDefault().invokeWhenUIReady(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            //Get the TopComponent:
                            UserExplorerTopComponent tc = (UserExplorerTopComponent)
                                    WindowManager.getDefault().findTopComponent("UserExplorerTopComponent");
                            //Get the display text to search for:
                            String selectedDisplayName = selectedHandle.getNode().getDisplayName();
                            //Get the root, which is the parent of the node we want:
                            Node root = tc.getExplorerManager().getRootContext();
                            //Find the node, by passing in the root with the display text:
                            Node selectedNode = NodeOp.findPath(root, new String[]{selectedDisplayName});
                            //Set the explorer manager's selected node:
                            tc.getExplorerManager().setSelectedNodes(new Node[]{selectedNode});
                        } catch (PropertyVetoException ex) {
                            Exceptions.printStackTrace(ex);
                        } catch (IOException ex) {
                            Exceptions.printStackTrace(ex);
                        }
                    }
                });
                return null;
            }
        }

    Assuming you have a node named "UserNode" for a type named "User" containing a property named "type", add the serialization-related code shown below to your "UserNode":

        public class UserNode extends AbstractNode implements Serializable {

            static final long serialVersionUID = 1L;

            public UserNode(User key) {
                super(Children.LEAF);
                setName(key.getType());
            }

            @Override
            public Handle getHandle() {
                return new CustomHandle(this, getName());
            }

            public class CustomHandle implements Node.Handle {
                static final long serialVersionUID = 1L;
                private AbstractNode node = null;
                private final String searchString;
                public CustomHandle(AbstractNode node, String searchString) {
                    this.node = node;
                    this.searchString = searchString;
                }
                @Override
                public Node getNode() {
                    node.setName(searchString);
                    return node;
                }
            }
        }

    Run the application and select one of the user nodes. Close the application. Start it up again. The user node is not automatically selected; in fact, the window does not open, and you will see this in the output:

        Caused: java.io.InvalidClassException: org.serialization.sample.UserNode; no valid constructor

    Read this article and then you'll understand the need for this class:

        public class BaseNode extends AbstractNode {
            public BaseNode() {
                super(Children.LEAF);
            }
            public BaseNode(Children kids) {
                super(kids);
            }
            public BaseNode(Children kids, Lookup lkp) {
                super(kids, lkp);
            }
        }

    Now, instead of extending AbstractNode in your UserNode, extend BaseNode. Then the first non-serializable superclass of the UserNode has an explicitly declared no-args constructor. Do the same as the above for each node in the hierarchy that needs to be serialized. If you have multiple nodes needing serialization, you can share the "CustomHandle" inner class above between all the other nodes, while all the other nodes will also need to extend BaseNode (or provide their own non-serializable superclass that explicitly declares a no-args constructor). Now, when I run the application, I select a node, then I close the application, restart it, and the previously selected node is automatically selected when the application has restarted.

    Read the article

  • Add Widget via Action in Toolbar

    - by Geertjan
    The question of the day comes from Vadim, who asks on the NetBeans Platform mailing list: "Looking for example showing how to add Widget to Scene, e.g. by toolbar button click." Well, the solution is very similar to this blog entry, where you see a solution provided by Jesse Glick for VisiTrend in Boston:

    https://blogs.oracle.com/geertjan/entry/zoom_capability

    Other relevant articles to read are as follows:

    http://netbeans.dzone.com/news/which-netbeans-platform-action
    http://netbeans.dzone.com/how-to-make-context-sensitive-actions

    Let's go through it step by step, with this result in the end: a solution involving 4 classes split (optionally, since a central feature of the NetBeans Platform is modularity) across multiple modules.

    The Customer object has a "name" String, and the Droppable capability has a method "doDrop" which takes a Customer object:

        public interface Droppable {
            void doDrop(Customer c);
        }

    In the TopComponent, we use "TopComponent.associateLookup" to publish an instance of "Droppable", which creates a new LabelWidget and adds it to the Scene in the TopComponent. Here's the TopComponent constructor:

        public CustomerCanvasTopComponent() {
            initComponents();
            setName(Bundle.CTL_CustomerCanvasTopComponent());
            setToolTipText(Bundle.HINT_CustomerCanvasTopComponent());
            final Scene scene = new Scene();
            final LayerWidget layerWidget = new LayerWidget(scene);
            Droppable d = new Droppable() {
                @Override
                public void doDrop(Customer c) {
                    LabelWidget customerWidget = new LabelWidget(scene, c.getTitle());
                    customerWidget.getActions().addAction(ActionFactory.createMoveAction());
                    layerWidget.addChild(customerWidget);
                    scene.validate();
                }
            };
            scene.addChild(layerWidget);
            jScrollPane1.setViewportView(scene.createView());
            associateLookup(Lookups.singleton(d));
        }

    The Action is displayed in the toolbar and is enabled only if a Droppable is currently in the Lookup:

        @ActionID(
                category = "Tools",
                id = "org.customer.controler.AddCustomerAction")
        @ActionRegistration(
                iconBase = "org/customer/controler/icon.png",
                displayName = "#AddCustomerAction")
        @ActionReferences({
            @ActionReference(path = "Toolbars/File", position = 300)
        })
        @NbBundle.Messages("AddCustomerAction=Add Customer")
        public final class AddCustomerAction implements ActionListener {

            private final Droppable context;

            public AddCustomerAction(Droppable droppable) {
                this.context = droppable;
            }

            @Override
            public void actionPerformed(ActionEvent ev) {
                NotifyDescriptor.InputLine inputLine = new NotifyDescriptor.InputLine("Name:", "Data Entry");
                Object result = DialogDisplayer.getDefault().notify(inputLine);
                if (result == NotifyDescriptor.OK_OPTION) {
                    Customer customer = new Customer(inputLine.getInputText());
                    context.doDrop(customer);
                }
            }
        }

    Therefore, when the Properties window, for example, is selected, the Action will be disabled. (See the Zoomable example referred to in the link above for another example of this.) As you can see above, when the Action is invoked, a Droppable must be available (otherwise the Action would not have been enabled). The Droppable is obtained in the Action, and a new Customer object is passed to its "doDrop" method.

    The above in pictures - take note of the enablement of the toolbar button with the red dot, on the extreme left of the toolbar in the screenshots below: the JButton is only enabled if the relevant TopComponent is active, and when the Action is invoked, the user can enter a name, after which a new LabelWidget is created in the Scene.

    The source code of the above is here:

    http://java.net/projects/nb-api-samples/sources/api-samples/show/versions/7.3/misc/WidgetCreationFromAction

    Note: Showing this as an MVC example is slightly misleading because, depending on which model object ("Customer" or "Droppable") you're looking at, the V and the C are different. From the point of view of "Customer", the TopComponent is the View, while the Action is the Controller, since it determines when the M is displayed. However, from the point of view of "Droppable", the TopComponent is the Controller, since it determines when the Action, which is in this case the View, displays the presence of the M.

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache, and I get the following error when trying to access testing/index.html. The idea is that I would like to have, for every customer, a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist.

        Forbidden
        You don't have permission to access /index.html on this server.
        Apache/2.2.22 (Ubuntu) Server at testing Port 80

    Here are the steps that I followed:

    Step 1:

        sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 2:

        sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
        sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 3:

        sudo adduser $USER www-data

    Step 4:

        sudo a2enmod userdir

    Step 5:

        sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing

    I edited the file /etc/apache/sites-available/testing so it looks like this:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName testing
            DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Step 6: I edited hosts ("/etc/hosts") so it looks like this:

        127.0.0.1       localhost
        127.0.0.1       testing
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    Step 7:

        sudo a2ensite testing
        sudo service apache2 restart

    I searched for about 2 hours on the internet, but I can't figure out what went wrong; all the pages that I found follow the same steps as described above. I know there are similar questions on the internet, but the answer there is to change permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate, but I couldn't find the right answer. Thank you!

    PS. I asked this also on AskUbuntu but didn't get any answers, so I'm trying my luck here.

    Edit: There isn't much in the error log or the access log. On the access.log:

        ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        (the same internal dummy connection line repeats with successive timestamps)
        127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing"
        127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0"

    And the last line repeats for about 200 rows.

    On the error.log:

    1. These lines repeat from time to time:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations
        [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down
        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations

    2. And this is the predominant error (hundreds of lines):

        [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
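    One thing worth checking that the steps above do not cover: the Apache user (www-data) needs the execute bit on every directory along the path, not just on dist itself, and a missing execute bit on an ancestor such as /home/neagoe is a common cause of exactly this (13)Permission denied error. A small sketch (my own, not from the question; the path is taken from the post) that prints the mode of each ancestor, similar to what `namei -m <path>` reports:

        import os
        import stat

        # Path from the question; adjust as needed.
        path = "/home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist"

        while True:
            mode = os.stat(path).st_mode
            print(f"{stat.filemode(mode)}  {path}")  # 'other' needs at least --x on every dir
            parent = os.path.dirname(path)
            if parent == path:   # reached '/'
                break
            path = parent

    If any ancestor shows no x for group/other, granting it (e.g., chmod o+x on that directory) or moving the DocumentRoot out of /home typically resolves the 403.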

    Read the article

  • Right-Time Retail Part 2

    - by David Dorf
    This is part two of the three-part series. Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:"Times New Roman"; mso-bidi-theme-font:minor-bidi;} Right-Time Integration Of course these real-time enabling technologies are only as good as the systems that utilize them, and it only takes one bottleneck to slow everyone else down. What good is an immediate stock-out notification if the supply chain can’t react until tomorrow? Since being formed in 2006, Oracle Retail has been not only adding more integrations between systems, but also modernizing integrations for appropriate speed. Notice I tossed in the word “appropriate.” Not everything needs to be real-time – again, we’re talking about Right-Time Retail. The speed of data capture, analysis, and execution must be synchronized or you’re wasting effort. Unfortunately, there isn’t an enterprise-wide dial that you can crank-up for your estate. You’ll need to improve things piecemeal, with people and processes as limiting factors while choosing the appropriate types of integrations. There are three integration styles we see in the retail industry. First is batch. I know, the word “batch” just sounds slow, but this pattern is less about velocity and more about volume. When there are large amounts of data to be moved, you’ll want to use batch processes. Our technology of choice here is Oracle Data Integrator (ODI), which provides a fast version of Extract-Transform-Load (ETL). Instead of the three-step process, the load and transform steps are combined to save time. ODI is a key technology for moving data into Retail Analytics where we can apply science. Performing analytics on each sale as it occurs doesn’t make any sense, so we batch up a statistically significant amount and submit all at once. The second style is fire-and-forget. For some types of data, we want the data to arrive ASAP but immediacy is not necessary. Speed is less important than guaranteed delivery, so we use message-oriented middleware available in both Weblogic and the Oracle database. For example, Point-of-Service transactions are queued for delivery to Central Office at corporate. If the network is offline, those transactions remain in the queue and will be delivered when the network returns. Transactions cannot be lost and they must be delivered in order. (Ever tried processing a return before the sale?) To enhance the standard queues, we offer the Retail Integration Bus (RIB) to help the management and monitoring of fire-and-forget messaging in the enterprise. The third style is request-response and is most commonly implemented as Web services. This is a synchronous message where the sender waits for a response. In this situation, the volume of data is small, guaranteed delivery is not necessary, but speed is very important. 
Examples include the website checking inventory, a price lookup, or processing a credit card authorization. The Oracle Service Bus (OSB) typically handles the routing of such messages, and we’ve enhanced its abilities with the Retail Service Backbone (RSB). To better understand these integration patterns and where they apply within the retail enterprise, we’re providing the Retail Reference Library (RRL) at no charge to Oracle Retail customers. The library is composed of a large number of industry business processes, including those necessary to support Commerce Anywhere, as well as detailed architectural diagrams. These diagrams allow implementers to understand the systems involved in integrations and the specific data payloads. Furthermore, with our upcoming release we’ll be providing a new tool called the Retail Integration Console (RIC) that allows IT to monitor and manage integrations from a single point. Using RIC, retailers can quickly discern where integration activity is occurring, volume statistics, average response times, and errors. The dashboards provide the ability to dive down into the architecture documentation to gather information all the way down to the specific payload. Retailers that want real-time integrations will also need real-time monitoring of those integrations to ensure service-level agreements are maintained. Part 3 looks at marketing.
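    To make the fire-and-forget style concrete, here is a minimal sketch using the standard javax.jms API. It is illustrative only: the JNDI names and the XML payload are hypothetical placeholders, not actual Oracle Retail or RIB configuration.

    // Fire-and-forget sketch: the sender returns immediately; the broker owns delivery.
    // The JNDI names below are hypothetical, not Oracle Retail/RIB names.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class PosTransactionSender {
        public void send(String transactionXml) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory =
                    (ConnectionFactory) ctx.lookup("jms/PosConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/PosTransactionQueue");

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // PERSISTENT delivery is what buys the guaranteed-delivery behavior
                // described above: the broker stores the message until the consumer
                // is reachable, while the sender does not wait for a response.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage(transactionXml));
            } finally {
                connection.close();
            }
        }
    }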

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values. Specifically here, where we are joining two tables on two columns, one of which is ‘optional’, i.e. nullable. So something like this: a lookup where all the columns are equal, even when NULL. NULLs are a tricky thing to initially wrap your mind around. Statements like “NULL is not equal to NULL and neither is it not not equal to NULL, it’s NULL” can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy. Before we plod on, time to set up some data to demo against:

    Create table #SourceTable
    (
        Id integer not null,
        SubId integer null,
        AnotherCol char(255) not null
    )
    go
    create unique clustered index idxSourceTable on #SourceTable(id,subID)
    go
    with cteNums as
    (
        select top(1000) number
        from master..spt_values
        where type = 'P'
    )
    insert into #SourceTable
    select Num1.number, nullif(Num2.number,0), 'SomeJunk'
    from cteNums num1 cross join cteNums num2
    go
    Create table #LookupTable
    (
        Id integer not null,
        SubID integer null
    )
    go
    insert into #LookupTable
    Select top(100) id, subid
    from #SourceTable
    where subid is not null
    order by newid()
    go
    insert into #LookupTable
    Select top(3) id, subid
    from #SourceTable
    where subid is null
    order by newid()

    If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable. We now want to join one to the other.

    First attempt – Let's just join

    select *
    from #SourceTable
    join #LookupTable on #LookupTable.id = #SourceTable.id
                     and #LookupTable.SubID = #SourceTable.SubID

    OK, that’s a fail. We had 100 rows back; we didn’t correctly account for the 3 rows that have null values. Remember NULL <> NULL, and the join clause specifies SubID = SubID, which for those rows is not true.

    Second attempt – Let's deal with those pesky NULLs

    select *
    from #SourceTable
    join #LookupTable on #LookupTable.id = #SourceTable.id
                     and isnull(#LookupTable.SubID,0) = isnull(#SourceTable.SubID,0)

    OK, that’s the right result, well done, and 99.9% of the time that is where it’s left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems. But although that’s true, this is a relational database we are using here, not a procedural language. SQL is a declarative language; we are making a request to the engine to get the results we want. How we ask for them can make a ton of difference. Let's look at the plan for our second attempt, specifically the clustered index seek on #SourceTable. There are two predicates: the ‘seek predicate’ and the ‘predicate’. The ‘seek predicate’ describes how SQL Server has been able to use an index. Here, it has been able to navigate the index to resolve where ID = ID. So far so good, but what about the ‘predicate’ (aka residual probe)? This is a row-by-row operation: for each row found in the index matching the seek predicate, the leaf-level nodes are scanned and tested using this logical condition. In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable. This residual probe is quite a high overhead; if we can express our statement slightly differently, we can take full advantage of the index and make the test part of the ‘seek predicate’.

    Third attempt – X is null and Y is null

    So, let's state the query in a slightly different manner:

    select *
    from #SourceTable
    join #LookupTable on #LookupTable.id = #SourceTable.id
                     and ( #LookupTable.SubID = #SourceTable.SubID
                           or (#LookupTable.SubID is null and #SourceTable.SubId is null) )

    So it's slightly wordier and may not be as clear in its intent to the human reader (that is what comments are for), but the key point is that it is now clearer to the query optimizer what our intention is. Let's look at the plan for that query, again specifically the index seek operation on #SourceTable. No ‘predicate’, just a ‘seek predicate’ against the index to resolve both ID and SubID. A subtle difference that can be easily overlooked, but has it made a difference to the performance? Well, yes, a perhaps surprisingly large one. Clever query optimizer, well done. If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used. By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability your system will be in a much better position if you can.

    Read the article

  • Server HTTP Load times slow?

    - by cdog5000
    Hello, my server @ codemeh.com (HTTP server) seems to be randomly loading slowly. I cannot tell if it is just my forums (http://www.codemeh.com/forums/) that are loading slowly or if the WHOLE site is, since my forums are the largest thing on the site right now.

    load average: 0.02, 0.17, 0.20

    That is super low to my knowledge. I have tried the Google Page Analytics plug-in for Firefox to solve the problem, but nothing comes up that is VERY bad. If someone could investigate this for me, since I am very new at Apache and server configurations, thanks!

    (top):
    PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
    7493 www-data 15 0 98.2m 16m 9092 S 3 0.8 0:27.24 apache2
    26429 www-data 15 0 98.2m 15m 7392 S 3 0.7 0:03.45 apache2
    26477 www-data 17 0 98.2m 15m 7396 S 3 0.7 0:03.16 apache2
    1 root 15 0 2468 1384 1156 S 0 0.1 0:00.49 init
    1367 root 25 0 2564 816 660 S 0 0.0 0:00.00 xinetd
    1526 root 15 0 29576 5420 1976 S 0 0.3 1:02.69 fail2ban-server
    3703 root 15 0 13512 9312 1696 S 0 0.4 0:11.59 miniserv.pl
    3915 postfix 15 0 6056 1652 1320 S 0 0.1 0:00.00 pickup
    4010 root 15 0 4548 1296 972 S 0 0.1 0:37.36 ntpd
    7448 root 15 0 98528 26m 20m S 0 1.3 0:00.27 apache2
    7454 www-data 18 0 33580 2616 368 S 0 0.1 0:00.04 apache2
    7528 www-data 18 0 108m 24m 15m S 0 1.2 0:27.60 apache2
    7974 root 16 0 8700 2728 2164 S 0 0.1 0:00.08 sshd
    8123 cdog5000 15 0 8832 1596 896 S 0 0.1 0:00.00 sshd
    8126 cdog5000 18 0 4484 1716 1384 S 0 0.1 0:00.00 bash
    8141 cdog5000 15 0 2344 980 796 R 0 0.0 0:00.11 top
    13461 root 15 0 8700 2728 2164 S 0 0.1 0:00.07 sshd
    13567 cdog5000 18 0 8832 1492 896 S 0 0.1 0:00.33 sshd
    13569 cdog5000 18 0 4484 1728 1388 S 0 0.1 0:00.09 bash
    17983 root 15 0 4392 1268 988 S 0 0.1 0:00.00 su
    17987 root 15 0 4516 1752 1380 S 0 0.1 0:00.09 bash
    18081 www-data 15 0 98.2m 14m 6588 S 0 0.7 0:04.91 apache2
    20000 www-data 15 0 98.3m 15m 8040 S 0 0.8 0:02.45 apache2
    20019 www-data 15 0 98.2m 14m 6808 S 0 0.7 0:04.97 apache2
    30343 root 15 0 3964 1012 764 S 0 0.0 0:00.03 vsftpd
    30382 root 15 0 2304 908 716 S 0 0.0 0:00.62 cron
    30401 mysql 17 0 141m 17m 5416 S 0 0.9 1:02.20 mysqld
    30424 root 15 0 5472 912 504 S 0 0.0 0:00.04 sshd
    30473 syslog 15 0 1916 676 536 S 0 0.0 0:01.02 syslogd
    30611 amavis 15 0 33872 25m 2292 S 0 1.2 0:03.11 amavisd-new
    31890 amavis 18 0 34888 24m 1792 S 0 1.2 0:00.00 amavisd-new
    31891 amavis 18 0 34888 24m 1784 S 0 1.2 0:00.00 amavisd-new
    32397 clamav 18 0 104m 84m 1272 S 0 4.1 1:06.46 clamd
    32563 clamav 15 0 12832 5716 4440 S 0 0.3 0:01.29 freshclam
    32573 root 23 0 1892 456 372 S 0 0.0 0:00.00 courierlogger
    32575 root 18 0 2096 684 544 S 0 0.0 0:00.01 authdaemond
    32583 root 23 0 1892 360 284 S 0 0.0 0:00.00 courierlogger
    32584 root 24 0 2000 612 516 S 0 0.0 0:00.00 couriertcpd
    32598 root 23 0 1892 360 284 S 0 0.0 0:00.00 courierlogger
    32599 root 25 0 2000 612 516 S 0 0.0 0:00.00 couriertcpd
    32604 root 18 0 1892 460 372 S 0 0.0 0:00.00 courierlogger
    32605 root 18 0 2000 624 532 S 0 0.0 0:00.00 couriertcpd
    32607 root 18 0 2308 404 256 S 0 0.0 0:00.02 authdaemond
    32608 root 18 0 2096 260 116 S 0 0.0 0:00.03 authdaemond
    32609 root 15 0 2308 404 256 S 0 0.0 0:00.03 authdaemond
    32610 root 18 0 2096 260 116 S 0 0.0 0:00.02 authdaemond
    32612 root 18 0 2308 404 256 S 0 0.0 0:00.02 authdaemond
    32621 root 24 0 1892 364 284 S 0 0.0 0:00.00 courierlogger
    32622 root 25 0 2000 608 516 S 0 0.0 0:00.00 couriertcpd
    32633 root 15 0 105m 936 716 S 0 0.0 0:02.26 nscd
    32719 root 16 0 6252 1680 1344 S 0 0.1 0:01.24 master
    32738 postfix 15 0 6188 1776 1400 S 0 0.1 0:00.44 qmgr
    32758 postfix 15 0 6492 2564 1788 S 0 0.1 0:00.14 tlsmgr

    (/etc/apache2/sites-available/default):

    NameVirtualHost *
    <VirtualHost *>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/web1/web/
        <Directory /var/www/web1/web/>
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    </VirtualHost>

    I have fail2ban running and I don't have any firewall at this point in time that I know of. SMF is 2.0 RC4 and the Apache version is 2.2.14. I run a MySQL server on another box in the same DC (persistent connection). I installed eAccelerator today and it didn't help.

    Read the article

  • DNS no longer works after server reboot

    - by Burning the Codeigniter
    Strangely enough, when I reboot my Ubuntu 12.04 server, DNS no longer works, which makes my site unreachable via its domain. Normally DNS should be working after a reboot, but this doesn't happen anymore. I use nginx to serve content, and nginx is already configured to work with my domains. What are the typical practices I must follow after a reboot, and how can I solve this issue? I already have BIND, networking and resolvconf set to start when the server boots up. This is my output with dig:

    ; <<>> DiG 9.8.1-P1 <<>> mysite.com
    ;; global options: +cmd
    ;; connection timed out; no servers could be reached

    The zone file for the domain contains:

    $ttl 38400
    mysite.com. IN SOA ns1.mysite.com. webmaster.mysite.com. ( 1055026205 6H 1H 5D 20M )
    mysite.com. IN A xx.xx.xx.xx # Server IP
    *.mysite.com. IN A xx.xx.xx.xx # Server IP
    www.mysite.com. IN CNAME mysite.com.
    ns1.mysite.com. IN A xx.xx.xx.xx # Server 2nd IP
    ns2.mysite.com. IN A xx.xx.xx.xx # Server 3rd IP
    mysite.com. IN NS ns1.mysite.com.
    mysite.com. IN NS ns2.mysite.com.
    mail.mysite.com. IN MX 1 mysite.com.

    This is the contents of /etc/resolv.conf:

    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    nameserver 85.17.150.123
    nameserver 85.17.96.69
    nameserver 62.212.64.122
    search localdomain

    After using more dig commands, the outputs are:

    ; <<>> DiG 9.7.3-P3 <<>> @85.17.150.123 mysite.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 24847
    ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; WARNING: recursion requested but not available
    ;; QUESTION SECTION:
    ;mysite.com. IN A
    ;; Query time: 2145 msec
    ;; SERVER: 85.17.150.123#53(85.17.150.123)
    ;; WHEN: Mon Nov 5 16:31:32 2012
    ;; MSG SIZE rcvd: 30

    ; <<>> DiG 9.7.3-P3 <<>> @85.17.96.69 mysite.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 27879
    ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; WARNING: recursion requested but not available
    ;; QUESTION SECTION:
    ;mysite.com. IN A
    ;; Query time: 949 msec
    ;; SERVER: 85.17.96.69#53(85.17.96.69)
    ;; WHEN: Mon Nov 5 16:32:59 2012
    ;; MSG SIZE rcvd: 30

    ; <<>> DiG 9.7.3-P3 <<>> @62.212.64.122 mysite.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 29293
    ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; WARNING: recursion requested but not available
    ;; QUESTION SECTION:
    ;mysite.com. IN A
    ;; Query time: 825 msec
    ;; SERVER: 62.212.64.122#53(62.212.64.122)
    ;; WHEN: Mon Nov 5 16:33:39 2012
    ;; MSG SIZE rcvd: 30

    With Google DNS (8.8.8.8):

    ; <<>> DiG 9.7.3-P3 <<>> @8.8.8.8 mysite.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 38498
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;mysite.com. IN A
    ;; Query time: 3982 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Mon Nov 5 16:37:27 2012
    ;; MSG SIZE rcvd: 30

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code. To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model looks as follows, with Teams having Players, and Players having a position and statistics for each season they played. Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

    <MFL>
      <Position/>
      <Position/>
      <Position/>
      <Team>
        <Player>
          <Statistic/>
        </Player>
      </Team>
    </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node; no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id. So now that we have a simple data model laid out, I would like to generate two things based on it: a class for each entity, with properties corresponding to each entity property, and an IO class with methods to get data for each entity, either all instances, by Id, or by parent. Now my experience with code generation in the past has consisted of writing up little apps that use the CodeDom directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select "Add Code Generation Item". Just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated; now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (.tt) file. When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no intellisense, no nothing!
    That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue and I won't have to cast my mind back to the glory days of coding in VI (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor. Downloading and installing this was a breeze, and after doing so I got some color coding and intellisense while editing the tt files. If you will be doing any customizing of tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that tt file to do the kind of code generation that we want.

    Read the article

  • ComboBox Data Binding

    - by Geertjan
    Let's create a databound combobox, leveraging MVC in a desktop application. The result will be a combobox, provided by the NetBeans ChoiceView, that displays data retrieved from a database. What follows is not much different from the NetBeans Platform CRUD Application Tutorial, and you're advised to consult that document if anything that follows isn't clear enough. One kind of interesting thing about the instructions that follow is that they show that you're able to create an application where each element of the MVC architecture can be located within a separate module. Start by creating a new NetBeans Platform application named "MyApplication".

    Model. We're going to start by generating JPA entity classes from a database connection. In the New Project wizard, choose "Java Class Library". Click Next. Name the Java Class Library "MyEntities". Click Finish. Right-click the MyEntities project, choose New, and then select "Entity Classes from Database". Work through the wizard, selecting the tables of interest from your database, and naming the package "entities". Click Finish. Now a JPA entity is created for each of the selected tables. In the Project Properties dialog of the project, choose "Copy Dependent Libraries" in the Packaging panel. Build the project. In your project's "dist" folder (visible in the Files window), you'll now see a JAR, together with a "lib" folder that contains the JARs you'll need. In your NetBeans Platform application, create a module named "MyModel", with code name base "org.my.model". Right-click the project, choose Properties, and in the "Libraries" panel, click the Add Dependency button in the Wrapped JARs subtab to add all the JARs from the previous step to the module. Also include "derby-client.jar" or the equivalent driver for your database connection in the module.

    Controller. In your NetBeans Platform application, create a module named "MyControler", with code name base "org.my.controler". Right-click the module's Libraries node in the Projects window, and add a dependency on "Explorer & Property Sheet API". In the MyControler module, create a class with this content:

    package org.my.controler;

    import org.openide.explorer.ExplorerManager;

    public class MyUtils {
        static ExplorerManager controler;
        public static ExplorerManager getControler() {
            if (controler == null) {
                controler = new ExplorerManager();
            }
            return controler;
        }
    }

    View. In your NetBeans Platform application, create a module named "MyView", with code name base "org.my.view". Create a new Window Component, in "explorer" view, for example; let it open on startup, with class name prefix "MyView". Add dependencies on the Nodes API and on the Explorer & Property Sheet API. Also add dependencies on the "MyModel" module and the "MyControler" module. Before doing so, in the "MyModel" module, make the "entities" package and the "javax.persistence" packages public (in the Libraries panel of the Project Properties dialog) and make the one package that you have in the "MyControler" module public too.
    Define the top part of the MyViewTopComponent as follows:

    public final class MyViewTopComponent extends TopComponent implements ExplorerManager.Provider {

        ExplorerManager controler = MyUtils.getControler();

        public MyViewTopComponent() {
            initComponents();
            setName(Bundle.CTL_MyViewTopComponent());
            setToolTipText(Bundle.HINT_MyViewTopComponent());
            setLayout(new BoxLayout(this, BoxLayout.PAGE_AXIS));
            controler.setRootContext(new AbstractNode(Children.create(new ChildFactory<Customer>() {
                @Override
                protected boolean createKeys(List list) {
                    EntityManager entityManager = Persistence.
                            createEntityManagerFactory("MyEntitiesPU").createEntityManager();
                    Query query = entityManager.createNamedQuery("Customer.findAll");
                    list.addAll(query.getResultList());
                    return true;
                }
                @Override
                protected Node createNodeForKey(Customer key) {
                    Node customerNode = new AbstractNode(Children.LEAF, Lookups.singleton(key));
                    customerNode.setDisplayName(key.getName());
                    return customerNode;
                }
            }, true)));
            controler.addPropertyChangeListener(new PropertyChangeListener() {
                @Override
                public void propertyChange(PropertyChangeEvent evt) {
                    Customer selectedCustomer =
                            controler.getSelectedNodes()[0].getLookup().lookup(Customer.class);
                    StatusDisplayer.getDefault().setStatusText(selectedCustomer.getName());
                }
            });
            JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING));
            row1.add(new JLabel("Customers: "));
            row1.add(new ChoiceView());
            add(row1);
        }

        @Override
        public ExplorerManager getExplorerManager() {
            return controler;
        }
        ...
        ...
        ...

    Now run the application and you'll see the same as the image with which this blog entry started.

    Read the article

  • Django - no module named app

    - by Koran
    Hi, I have been trying to get an application written in Django working, but it is not working at all. I have been working on it for some time too, and it works perfectly on the dev server. But I am unable to get it into the production environment (Apache). My project name is apstat and the app name is basic. I try to access it as http://hostname/apstat but it shows the following error:

    MOD_PYTHON ERROR

    ProcessId: 6002
    Interpreter: 'domU-12-31-39-06-DD-F4.compute-1.internal'
    ServerName: 'domU-12-31-39-06-DD-F4.compute-1.internal'
    DocumentRoot: '/home/ubuntu/server/'
    URI: '/apstat/'
    Location: '/apstat'
    Directory: None
    Filename: '/home/ubuntu/server/apstat/'
    PathInfo: ''
    Phase: 'PythonHandler'
    Handler: 'django.core.handlers.modpython'

    Traceback (most recent call last):
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch
        default=default_handler, arg=req, silent=hlist.silent)
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1229, in _process_target
        result = _execute_target(config, req, object, arg)
      File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1128, in _execute_target
        result = object(arg)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 228, in handler
        return ModPythonHandler()(req)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 201, in __call__
        response = self.get_response(request)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 134, in get_response
        return self.handle_uncaught_exception(request, resolver, exc_info)
      File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 154, in handle_uncaught_exception
        return debug.technical_500_response(request, *exc_info)
      File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 40, in technical_500_response
        html = reporter.get_traceback_html()
      File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 114, in get_traceback_html
        return t.render(c)
      File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 178, in render
        return self.nodelist.render(context)
      File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 779, in render
        bits.append(self.render_node(node, context))
      File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 81, in render_node
        raise wrapped
    TemplateSyntaxError: Caught an exception while rendering: No module named basic

    Original Traceback (most recent call last):
      File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 71, in render_node
        result = node.render(context)
      File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 87, in render
        output = force_unicode(self.filter_expression.resolve(context))
      File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 572, in resolve
        new_obj = func(obj, *arg_vals)
      File "/usr/lib/pymodules/python2.6/django/template/defaultfilters.py", line 687, in date
        return format(value, arg)
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 269, in format
        return df.format(format_string)
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format
        pieces.append(force_unicode(getattr(self, piece)()))
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 175, in r
        return self.format('D, j M Y H:i:s O')
      File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format
        pieces.append(force_unicode(getattr(self, piece)()))
      File "/usr/lib/pymodules/python2.6/django/utils/encoding.py", line 71, in force_unicode
        s = unicode(s)
      File "/usr/lib/pymodules/python2.6/django/utils/functional.py", line 201, in __unicode_cast
        return self.__func(*self.__args, **self.__kw)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/__init__.py", line 62, in ugettext
        return real_ugettext(message)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 286, in ugettext
        return do_translate(message, 'ugettext')
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 276, in do_translate
        _default = translation(settings.LANGUAGE_CODE)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 194, in translation
        default_translation = _fetch(settings.LANGUAGE_CODE)
      File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 180, in _fetch
        app = import_module(appname)
      File "/usr/lib/pymodules/python2.6/django/utils/importlib.py", line 35, in import_module
        __import__(name)
    ImportError: No module named basic

    My settings.py is as follows:

    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        'apstat.basic',
        'django.contrib.admin',
    )

    If I remove apstat.basic, it goes through, but that is not a solution. Is it something I am doing in Apache? My Apache settings are:

    <VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /home/ubuntu/server/
        <Directory />
            Options None
            AllowOverride None
        </Directory>
        <Directory /home/ubuntu/server/apstat>
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
        <Location "/apstat">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE apstat.settings
            PythonOption django.root /home/ubuntu/server/
            PythonDebug On
            PythonPath "['/home/ubuntu/server/'] + sys.path"
        </Location>
    </VirtualHost>

    I have now sat for more than a day on this. If someone can help me out, it would be very nice.

    Read the article

  • jQuery Map Highlight - works fine at DOM ready but failed when loaded by AJAX

    - by Michael Mao
    Hi all: This is a uni assignment and I have already done some stuff. Please go to the password-protected directory on my server. Enter username "uts" and password "10479475", both without quotes, into the prompt and you shall be able to see the webpage. Basically, if you hover your mouse over the contents in the world map towards the upper-left corner, you can see the underlying area is "highlighted" by a gray region and a red border. This is done using a jQuery plugin, available here. This part works fine; however, after I use jQuery to load the specific continent map asynchronously, the newly loaded image does not work correctly. Tested under Firebug, I can see the plugin doesn't "like" the new image, because I cannot find the canvas or other auto-generated stuff which can be found around the world map. All the functionality is done in master.js; I believe you can just download a copy and check the code there. I do hope that I have followed the tutorials on the plugin's doc page, but I just cannot get through the final stage.

    Code used for the world map in HTML:

    <img id="worldmap" src="./img/world.gif" alt="world.gif" width="398" height="200" class="map" usemap="#worldmap"/>
    <map name="worldmap">
      <area class='continent' href="#" shape="poly" title="North_America" coords="1,39, 40,23, 123,13, 164,17, 159,40, 84,98, 64,111, 29,89" />
    </map>

    Code used for the world map in master.js:

    //when DOM is ready, do something
    $(document).ready(function() {
        $('.map').maphilight(); //call the map highlight main function
    });

    By contrast, code used for the specific continent map:

    //helper function to load specific continent map using AJAX
    function loadContinentMap(continent) {
        $('#continent-map-wrapper').children().remove(); //remove all children nodes first
        //inspiration taken from online : http://jqueryfordesigners.com/image-loading/
        $('#continent-map-wrapper').append("<div id='loader' class='loading'><div>");
        var img = new Image();
        // wrap our new image in jQuery, then:
        // once the image has loaded, execute this code
        $(img).load(function () {
            $(this).hide(); // set the image hidden by default
            // with the holding div #loader, apply:
            // remove the loading class (so no background spinner),
            // then insert our image
            $('#loader').removeClass('loading').append(this);
            // fade our image in to create a nice effect
            $(this).fadeIn();
        }).error(function () {
            // if there was an error loading the image, react accordingly
            // notify the user that the image could not be loaded
            $('#loader').removeClass('loading').append("<h1><div class='errormsg'>Loading image failed, please try again! If same error persists, please contact webmaster.</div></h1>");
        })
        //set a series of attributes to the img tag, these are for the map highlighting plugin.
        .attr('id', continent).attr('alt', '' + continent).attr('width', '576').attr('height', '300')
        .attr('usemap', '#city_' + continent).attr('class', 'citymap').attr('src', './img/' + continent + '.gif');
        // *finally*, set the src attribute of the new image to our image

        //After image is loaded, apply the map highlighting plugin function again.
        $('.citymap').maphilight();
        $('area.citymap').click(function() {
            alert($(this).attr('title') + ' is clicked!');
        });
    }

    Sorry about the messy code, haven't refactored it yet. I am wondering why the canvas disappears for the continent map. Did I do anything wrong? Any hint is much appreciated, and thanks for any suggestions in advance!

    Read the article

  • Getting 500 Error when trying to access Rails application through Apache2

    - by cojones
    Hey, I'm using Apache2 as a proxy and mongrel_cluster as the server for my Rails applications. When I try to access the site by typing in the URL I get a 500 "Internal Server Error", but when I try to access the website locally with "lynx http://localhost:8200" it works. This is my config:

    <Proxy balancer://sportfreundewitold_cluster>
        BalancerMember http://127.0.0.1:8200
        BalancerMember http://127.0.0.1:8201
    </Proxy>

    # httpd [example.org] dmn entry BEGIN.
    <VirtualHost x.x.x.x:80>
        <IfModule suexec_module>
            SuexecUserGroup vu2025 vu2025
        </IfModule>
        ServerAdmin [email protected]
        DocumentRoot /var/www/virtual/example.org/htdocs/current/public
        ServerName example.org
        ServerAlias www.example.org example.org *.example.org vu2025.admin.roughneck-media.de
        Alias /errors /var/www/virtual/example.org/errors/
        RedirectMatch permanent ^/ftp[\/]?$ http://admin.roughneck-media.de/ftp/
        RedirectMatch permanent ^/pma[\/]?$ http://admin.roughneck-media.de/pma/
        RedirectMatch permanent ^/webmail[\/]?$ http://admin.roughneck-media.de/webmail/
        RedirectMatch permanent ^/ispcp[\/]?$ http://admin.roughneck-media.de/
        ErrorDocument 401 /errors/401.html
        ErrorDocument 403 /errors/403.html
        ErrorDocument 404 /errors/404.html
        ErrorDocument 500 /errors/500.html
        ErrorDocument 503 /errors/503.html
        <IfModule mod_cband.c>
            CBandUser example.org
        </IfModule>
        # httpd awstats support BEGIN.
        # httpd awstats support END.
        # httpd dmn entry cgi support BEGIN.
        ScriptAlias /cgi-bin/ /var/www/virtual/example.org/cgi-bin/
        <Directory /var/www/virtual/example.org/cgi-bin>
            AllowOverride AuthConfig
            #Options ExecCGI
            Order allow,deny
            Allow from all
        </Directory>
        # httpd dmn entry cgi support END.
        <Directory /var/www/virtual/example.org/htdocs/current/public>
            # httpd dmn entry PHP support BEGIN.
            # httpd dmn entry PHP support END.
            Options -Indexes Includes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>
        # httpd dmn entry PHP2 support BEGIN.
        <IfModule mod_php5.c>
            php_admin_value open_basedir "/var/www/virtual/example.org/:/var/www/virtual/example.org/phptmp/:/usr/share/php/"
            php_admin_value upload_tmp_dir "/var/www/virtual/example.org/phptmp/"
            php_admin_value session.save_path "/var/www/virtual/example.org/phptmp/"
            php_admin_value sendmail_path '/usr/sbin/sendmail -f vu2025 -t -i'
        </IfModule>
        <IfModule mod_fastcgi.c>
            ScriptAlias /php5/ /var/www/fcgi/example.org/
            <Directory "/var/www/fcgi/example.org">
                AllowOverride None
                Options +ExecCGI -MultiViews -Indexes
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>
        <IfModule mod_fcgid.c>
            Include /etc/apache2/mods-available/fcgid_ispcp.conf
            <Directory /var/www/virtual/example.org/htdocs>
                FCGIWrapper /var/www/fcgi/example.org/php5-fcgi-starter .php
                Options +ExecCGI
            </Directory>
            <Directory "/var/www/fcgi/example.org">
                AllowOverride None
                Options +ExecCGI MultiViews -Indexes
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>
        # httpd dmn entry PHP2 support END.
        Include /etc/apache2/ispcp/example.org.conf

        RewriteEngine On

        # Make sure people go to www.myapp.com, not myapp.com
        RewriteCond %{HTTP_HOST} ^myapp\.com$ [NC]
        RewriteRule ^(.*)$ http://www.myapp.com$1 [R=301,L]
        # Yes, I've read no-www.com, but my site already has much Google-Fu on
        # www.blah.com. Feel free to comment this out.

        # Uncomment for rewrite debugging
        #RewriteLog logs/myapp_rewrite_log
        #RewriteLogLevel 9

        # Check for maintenance file and redirect all requests
        RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
        RewriteCond %{SCRIPT_FILENAME} !maintenance.html
        RewriteRule ^.*$ /system/maintenance.html [L]

        # Rewrite index to check for static
        RewriteRule ^/$ /index.html [QSA]

        # Rewrite to check for Rails cached page
        RewriteRule ^([^.]+)$ $1.html [QSA]

        # Redirect all non-static requests to cluster
        RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
        RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L]

        # Deflate
        AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \\bMSIE !no-gzip !gzip-only-text/html

        # Uncomment for deflate debugging
        #DeflateFilterNote Input input_info
        #DeflateFilterNote Output output_info
        #DeflateFilterNote Ratio ratio_info
        #LogFormat '"%r" %{output_info}n/%{input_info}n (%{ratio_info}n%%)' deflate
        #CustomLog logs/myapp_deflate_log deflate
    </VirtualHost>
    # httpd [example.org] dmn entry END.

    Does anyone know what could be wrong with it?

    Read the article

  • MDB listening a Topic in JBoss 5.1

    - by fool
    Hi, I have a topic configured correctly like this in JBoss 5.1:

    <mbean code="org.jboss.jms.server.destination.TopicService"
           name="jboss.messaging.destination:service=Topic,name=GreetingsTopic"
           xmbean-dd="xmdesc/Topic-xmbean.xml">
        <depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
        <depends>jboss.messaging:service=PostOffice</depends>
    </mbean>

    I can see the topic being started in the JBoss logs:

    23:08:40,990 INFO [TopicService] Topic[/topic/GreetingsTopic] started, fullSize=200000, pageSize=2000, downCacheSize=2000

    And I have this MDB:

    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jmx.Topic"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "topic/GreetingsTopic")
    })
    public class GreetingsClientWebMDB implements MessageListener {
        ...
        ...
    }

    When I deploy the MDB, I get a strange error:

    3:09:30,781 ERROR [JmsActivation] Unable to reconnect org.jboss.resource.adapter.jms.inflow.JmsActivationSpec@1a1d638(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter@1da5b5b destination=topic/GreetingsTopic destinationType=javax.jmx.Topic tx=true durable=false reconnect=10 provider=java:/DefaultJMSProvider user=null maxMessages=1 minSession=1 maxSession=15 keepAlive=60000 useDLQ=true DLQHandler=org.jboss.resource.adapter.jms.inflow.dlq.GenericDLQHandler DLQJndiName=queue/DLQ DLQUser=null DLQMaxResent=5)
    java.lang.ClassCastException: Object at 'topic/GreetingsTopic' in context {java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory, java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces:org.jboss.naming:org.jnp.interfaces} is not an instance of [class=javax.jms.Queue classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml} interfaces={interface=javax.jms.Destination classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml}}] object class is [class=org.jboss.jms.destination.JBossTopic classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml} interfaces={interface=javax.jms.Topic classloader=BaseClassLoader@91e143{vfsfile:/usr/local/jboss-5.1.0.GA/server/default/conf/jboss-service.xml}}]
        at org.jboss.util.naming.Util.checkObject(Util.java:338)
        at org.jboss.util.naming.Util.lookup(Util.java:223)
        at org.jboss.resource.adapter.jms.inflow.JmsActivation.setupDestination(JmsActivation.java:464)
        at org.jboss.resource.adapter.jms.inflow.JmsActivation.setup(JmsActivation.java:352)
        at org.jboss.resource.adapter.jms.inflow.JmsActivation.handleFailure(JmsActivation.java:292)
        at org.jboss.resource.adapter.jms.inflow.JmsActivation$SetupActivation.run(JmsActivation.java:733)
        at org.jboss.resource.work.WorkWrapper.execute(WorkWrapper.java:205)
        at org.jboss.util.threadpool.BasicTaskWrapper.run(BasicTaskWrapper.java:260)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

    I also tried to use a deployment descriptor instead of annotations, but with the same result. I really can't see the problem :(
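    For comparison, here is a minimal sketch of a topic-driven MDB using the standard EJB 3 annotations; note that the conventional destinationType value is the interface name javax.jms.Topic. The bean name below simply mirrors the question's topic and is otherwise illustrative.

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Minimal topic MDB sketch. destinationType is the javax.jms.Topic
    // interface name; a mistyped value there typically makes the container
    // fall back to treating the destination as a javax.jms.Queue, which is
    // consistent with the ClassCastException in the log above.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "destination",
                                  propertyValue = "topic/GreetingsTopic")
    })
    public class GreetingsTopicListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    // Handle the published greeting.
                    System.out.println("Received: " + ((TextMessage) message).getText());
                }
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }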

    Read the article

  • Custom DataAnnotation attribute with datastore access in ASP.NET MVC 2

    - by mare
    I have my application designed with the Repository pattern implemented, and my code is prepared for optional dependency injection in the future, if we need to support another datastore. I want to create a custom validation attribute for my content objects. This attribute should perform some kind of datastore lookup. For instance, I need my content to have unique slugs. To check whether a slug already exists, I want to use a custom DataAnnotation attribute in my base content object (instead of manually checking if a slug exists each time in my controller's Insert actions). The attribute logic would do the validation. So far I have come up with this:

    public class UniqueSlugAttribute : ValidationAttribute
    {
        private readonly IContentRepository _repository;

        public UniqueSlugAttribute(ContentType contentType)
        {
            _repository = new XmlContentRepository(contentType);
        }

        public override bool IsValid(object value)
        {
            if (string.IsNullOrWhiteSpace(value.ToString()))
            {
                return false;
            }
            string slug = value.ToString();
            if (_repository.IsUniqueSlug(slug))
                return true;
            return false;
        }
    }

    Part of my base content class:

    ...
    [DataMember]
    public ContentType ContentType1 { get; set; }

    [DataMember]
    [Required(ErrorMessageResourceType = typeof(Localize), ErrorMessageResourceName = "Validation_SlugIsBlank")]
    [UniqueSlug(ContentType1)]
    public string Slug
    {
        get { return _slug; }
        set
        {
            if (!string.IsNullOrEmpty(value))
                _slug = Utility.RemoveIllegalCharacters(value);
        }
    }
    ...

    There's an error on the line [UniqueSlug(ContentType1)] saying: "An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type." Let me explain: I need to provide the ContentType1 parameter to the constructor of the UniqueSlug class because I use it in my data provider. It is actually the same error that appears if you try to do this on the built-in Required attribute:

    [Required(ErrorMessageResourceType = typeof(Localize), ErrorMessageResourceName = Resources.Localize.SlugRequired)]

    It does not allow us to set it to dynamic content. In the first case, ContentType1 only becomes known at runtime; in the second case, Resources.Localize.SlugRequired also only becomes known at runtime (because the culture settings are assigned at runtime). This is really annoying and makes so many things and implementation scenarios impossible. So, my first question is: how do I get rid of this error? The second question I have is whether you think I should redesign my validation code in any way.

    Read the article

  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that:

    - it (the ID) should persist once the client software is installed on the target computer, and should continue to persist if the software is re-installed on the same computer and the same OS installation,
    - it should not change if the hardware configuration is modified in most ways (except changing the motherboard),
    - when a hard drive with the client software installed is cloned to another computer with an identical hardware configuration (or one as similar as possible), the client software should be aware of that change.

    A little bit of explanation and some back-story: this question is basically the age-old question that also touches the topic of software copy protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :)

    I'm working on client-server software that is supposed to work in a local network. One of the problems I have to solve is to identify each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client, and retain and enforce those attributes during the deployment lifetime of a specific client. While I was looking for a solution, I was aware of the following: the Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications, and disk imaging software copies along all Volume IDs (tied to each partition when formatted) as well as custom, uniquely generated IDs created during the installation process, during first run, or in any other way that is strictly software in nature and stored in the registry or on the hard drive, so it's very easy to confuse the two.

    The obvious choice for this kind of problem would be to find out the BIOS identifiers (not 100% sure if these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning, and that can't be changed (at least not by using some user-space program). Everything else fails as either not reliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes). Am I missing something obvious here?

    A sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task I have to accomplish... Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can be even completely software-based and non-unique at any given moment) and tells a client to come up with a different ID during the handshake if a duplicate ID is provided upon connection; a sketch of this follows below. That approach, unfortunately, doesn't play nicely with the requirement to tie attributes to a specific client during its lifetime.
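    To illustrate the handshake idea from the last paragraph, here is a minimal server-side sketch in Java. It is only a sketch of the negotiation logic: the class and method names are hypothetical, and persistence and transport details are deliberately ignored.

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical server-side registry for the handshake approach:
    // if a connecting client presents an ID that is already in use
    // (e.g. because of a cloned disk), the server issues a fresh ID
    // that the client must adopt.
    public class ClientRegistry {
        private final Map<String, ClientInfo> activeClients = new ConcurrentHashMap<>();

        /** Returns the ID the client must use from now on. */
        public synchronized String register(String proposedId, ClientInfo info) {
            String id = proposedId;
            while (id == null || activeClients.containsKey(id)) {
                id = UUID.randomUUID().toString(); // duplicate detected: issue a new ID
            }
            activeClients.put(id, info);
            return id;
        }

        public synchronized void unregister(String id) {
            activeClients.remove(id);
        }

        public static class ClientInfo {
            final String hostName;
            public ClientInfo(String hostName) { this.hostName = hostName; }
        }
    }

    As the question notes, this resolves duplicates but cannot by itself tell which of two clones is the "original", so attributes tied to the old ID would still need a separate reconciliation step.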

    Read the article

  • C# 4.0 'dynamic' and foreach statement

    - by ControlFlow
    Not long ago I discovered that the new dynamic keyword doesn't work well with C#'s foreach statement:

    using System;

    sealed class Foo
    {
        public struct FooEnumerator
        {
            int value;
            public bool MoveNext() { return true; }
            public int Current { get { return value++; } }
        }

        public FooEnumerator GetEnumerator()
        {
            return new FooEnumerator();
        }

        static void Main()
        {
            foreach (int x in new Foo())
            {
                Console.WriteLine(x);
                if (x >= 100) break;
            }
            foreach (int x in (dynamic)new Foo()) // :)
            {
                Console.WriteLine(x);
                if (x >= 100) break;
            }
        }
    }

    I expected that iterating over a dynamic variable should work exactly as if the type of the collection variable were known at compile time. Instead, I discovered that the second loop is actually compiled as if it looked like this:

    foreach (object x in (IEnumerable) /* dynamic cast */ (object) new Foo()) { ... }

    and every access to the x variable results in a dynamic lookup/cast, so C# ignores that I specified the correct type of x in the foreach statement - that was a bit surprising for me... Also, the C# compiler completely ignores that a collection from a dynamically typed variable may implement the IEnumerable<T> interface! The full foreach statement behavior is described in the C# 4.0 specification, article 8.8.4 "The foreach statement". But... it's perfectly possible to implement the same behavior at runtime! It would be possible to add an extra CSharpBinderFlags.ForEachCast flag, correct the emitted code to look like:

    foreach (int x in (IEnumerable<int>) /* dynamic cast with the CSharpBinderFlags.ForEachCast flag */ (object) new Foo()) { ... }

    and add some extra logic to CSharpConvertBinder: wrap IEnumerable collections and IEnumerators in IEnumerable<T>/IEnumerator<T>, and wrap collections that don't implement IEnumerable<T>/IEnumerator<T> so that they do implement these interfaces. So today the foreach statement iterates over dynamic completely differently from how it iterates over a statically known collection variable, and completely ignores the type information specified by the user. All of that results in different iteration behavior (IEnumerable<T>-implementing collections are iterated as if they only implemented IEnumerable) and a more than 150x slowdown when iterating over dynamic. A simple fix yields much better performance:

    foreach (int x in (IEnumerable<int>) dynamicVariable) {

    But why should I have to write code like this? It's very nice to see that sometimes C# 4.0 dynamic works exactly the same as if the type were known at compile time, but it's very sad to see that dynamic works completely differently where IT COULD work the same as statically typed code. So my question is: why does foreach over dynamic work differently from foreach over anything else?

    Read the article

  • Jini : single server with multiple clients

    - by user200340
    Hi all, I have a question about how to let multiple clients access a single file located on the server side while keeping the file consistent. I have a simple PhoneBook server-client Jini program running at the moment, and the server only provides some getter functions to clients, such as getName(String number) and getNumber(String name), from a PhoneBook class (serializable); the phonebook data is stored in a text file (phonebook.txt) at the moment. I have tried to implement some functions allowing clients to write new records into the phonebook.txt file. If the name in the record being written already exists, an integer is appended to the written record. For example, if the existing phonebook.txt contains

    ....
    John 01-01010101
    ....

    and the record being written is "John 01-12345678", then "John_1 01-12345678" will be written into phonebook.txt. However, if I start with two clients A and B (on the same machine using localhost), and A tries to write "John 01-11111111" while B tries to write "John 01-22222222", the earlier record is overwritten by the later one. So there must be something I did completely wrong. My client and server code are just like the Jini HelloWorld example.

    My server-side code does this:
    1. LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. registrations are saved into a HashTable;
    4. for every discovered lookup service, I use the registrar to register the ServiceItem; the ServiceItem contains a null attributeSets, a null serviceId, and a service.

    The client code has:
    1. LookupDiscovery with parameter new String[]{""};
    2. a DiscoveryListener for the LookupDiscovery;
    3. a ServiceTemplate with a null attributeSets, a null serviceId, and a type, where the type is the interface class;
    4. for each found ServiceRegistrar, if it can find the ServiceTemplate being looked for, the returned Object is cast into the type of the interface class.

    I have tried to google for more details, and I found that JavaSpaces could be the piece I missed. But I am still not sure about it (I have only been using Jini for a very short time). So any help would be greatly appreciated.
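    One way to read the symptom: two clients invoke the write operation concurrently, each reads the file, and the second write clobbers the first. As a minimal sketch (assuming a single server-side service object behind the Jini proxy; the class and method names here are hypothetical), serializing the read-modify-write on the server avoids such lost updates:

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Hypothetical server-side sketch: all clients go through one instance,
    // so synchronizing the read-modify-write prevents one client's update
    // from overwriting another's.
    public class PhoneBookService {
        private final PhoneBook phoneBook;   // in-memory state, loaded once at startup
        private final String fileName;

        public PhoneBookService(PhoneBook phoneBook, String fileName) {
            this.phoneBook = phoneBook;
            this.fileName = fileName;
        }

        public synchronized void addRecord(String name, String number) throws IOException {
            String unique = name;
            int suffix = 1;
            // getNumber(...) is from the question; put(...) is a hypothetical mutator.
            while (phoneBook.getNumber(unique) != null) { // name taken: append _1, _2, ...
                unique = name + "_" + suffix++;
            }
            phoneBook.put(unique, number);
            // Append under the same lock, so the file matches the in-memory state.
            try (PrintWriter out = new PrintWriter(new FileWriter(fileName, true))) {
                out.println(unique + " " + number);
            }
        }
    }

    Note that synchronized only helps if every client call lands on the same JVM object; if each lookup hands clients their own copy of a serializable service object, the shared state itself has to live in one place, which is what a remote (RMI-backed) service or JavaSpaces would provide.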

    Read the article

  • JNDI / Classpath problem in glassfish

    - by Michael Borgwardt
    I am in the process of converting a large J2EE app from EJB 2 to EJB 3 (all stateless session beans, using GlassFish 2.1.1), and I am running out of ideas. The first EJB I converted (let's call it Foo) ran without major problems (it was the only one in its ejb-module and I could completely replace the deployment descriptor with annotations) and the app ran fine. But after converting the second one (let's call it Bar, one of several in a different ejb-module) there is a weird combination of problems:

    - The app deploys without errors (nothing in the logs either).
    - There is an error when looking up Bar via JNDI.
    - When looking at the JNDI tree in the GlassFish admin console, Bar is not present at all.

    Then when I look at the logs, I see this (Foo is the correct name of the remote interface of the first converted, previously working EJB):

    Caused by: javax.naming.NamingException: ejb ref resolution error for remote business interface Foo [Root exception is java.lang.ClassNotFoundException: Foo]
        at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:425)
        at com.sun.ejb.containers.RemoteBusinessObjectFactory.getObjectInstance(RemoteBusinessObjectFactory.java:74)
        at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304)
        at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:414)
        at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:603)
        at javax.naming.InitialContext.list(InitialContext.java:395)
        at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106)
        at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231)
        ... 68 more
    Caused by: java.lang.ClassNotFoundException: XXX
        at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1578)
        at com.sun.ejb.EJBUtils.getBusinessIntfClassLoader(EJBUtils.java:679)
        at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:348)
        ... 75 more

    This is followed by more exceptions for all the entries that (like Foo) do appear in the JNDI tree. These look like this:

    Caused by: javax.naming.NotContextException: BarHome cannot be listed
        at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:607)
        at javax.naming.InitialContext.list(InitialContext.java:395)
        at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106)
        at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231)
        ... 68 more

    However, there is no exception for Bar; it does not appear in the log at all except for one entry during deployment. The other EJBs in the same module do appear, as does Foo. Any ideas what could cause this or how to diagnose it further? The beans are pretty straightforward:

    @Stateless(name = "Foo")
    @RolesAllowed("FOOUSER")
    @TransactionAttribute(TransactionAttributeType.SUPPORTS)
    public class FooImpl extends BaseBean implements Foo {

    I'm also having some problems with the deployment descriptor for Bar (I'd like to eliminate it, but GlassFish doesn't seem to like having a bean appear only in sun-ejb-jar.xml, or having some beans in a module declared in the descriptor and others using only annotations), but I can't see how that could cause the ClassNotFoundException on Foo. Is there a way to see the classpath that GlassFish is actually using?
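    For reference, a minimal EJB 3 stateless bean with a remote business interface usually looks like the sketch below; in particular, the interface itself carries @Remote and must be visible to the client's classloader, and on GlassFish v2 the default global JNDI name is the interface's fully qualified name. This is a generic illustration, not the poster's actual code.

    import javax.ejb.Remote;
    import javax.ejb.Stateless;

    // Greeter.java: the remote business interface must be annotated (or
    // declared in the descriptor) as @Remote.
    @Remote
    public interface Greeter {
        String greet(String name);
    }

    // GreeterBean.java: the bean class implements the remote interface.
    @Stateless(name = "Greeter")
    public class GreeterBean implements Greeter {
        @Override
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // A client would then typically look it up via:
    //   Greeter g = (Greeter) new InitialContext().lookup(Greeter.class.getName());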

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >