Search Results

Search found 8589 results on 344 pages for 'pre production'.


  • B2B communication using IBM MQ

    - by Dheeraj Kumar M
    Oracle B2B 11g provides the out-of-the-box ability to connect to IBM MQ to exchange messages. This support is provided via the JMS offering of Oracle B2B, and it adds to the existing stack of B2B communication capabilities with trading partners. There are two ways of connecting to IBM MQ using B2B:
    1. Credential-based connectivity
    2. .bindings-based connectivity
    As a pre-requisite to connect to IBM MQ, the following libraries must be provided on the classpath:
    a. com.ibm.mqjms.jar
    b. dhbcore.jar
    c. com.ibm.mq.jar
    d. com.ibm.mq.jmqi.jar
    e. mqcontext.jar
    f. com.ibm.mq.pcf.jar
    g. com.ibm.mq.commonservices.jar
    h. com.ibm.mq.headers.jar
    i. fscontext.jar
    j. jms.jar
    Add the above jars to the domain library directory, usually located at $DOMAIN_DIR/lib. Jars in this directory are picked up and added dynamically to the end of the server classpath at server startup, e.g. /user_projects/domains//lib/. Alternatively, the above jars can also be added as part of setDomainEnv.sh.
    Credential-based connectivity:
    Outbound: configure the trading partner delivery channel to use the "Generic JMS" protocol.
    Inbound: configure the internal delivery channel to use the "Generic JMS" protocol with the following details:
        Destination Name: MQ Queue Name
        Connection Factory: MQ Queue Manager Name
        Destination Provider: java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=<host>:<QM Listen port>/<MQ Channel Name>;
        User Name: MQ User Name
        Password: MQ password
    .bindings-based connectivity:
    As a pre-requisite, get or generate the .bindings file on the MQ server; this can be done by the MQ administrator. Then set the following values in the respective delivery channel for outbound/inbound:
        Destination Name: MQ Queue Name
        Connection Factory: MQ Queue Manager Name
        Destination Provider: java.naming.factory.initial=com.ibm.mq.jms.context.WMQInitialContextFactory;java.naming.provider.url=file:///<location of .bindings file>;
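    For readers who want to sanity-check the same JNDI settings outside of B2B, a minimal standalone Java sketch follows (jms.jar from the list above supplies the javax.jms API). The factory class and the two provider URL formats are taken from the post itself; the host, channel, queue manager, queue name, credentials, and file path are hypothetical placeholders.
        import java.util.Hashtable;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.Queue;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class MqJndiSketch {
            public static void main(String[] args) throws Exception {
                Hashtable<String, String> env = new Hashtable<>();
                env.put(Context.INITIAL_CONTEXT_FACTORY,
                        "com.ibm.mq.jms.context.WMQInitialContextFactory");
                // Credential-based: <host>:<QM listen port>/<MQ channel name>
                env.put(Context.PROVIDER_URL, "mqhost:1414/SYSTEM.DEF.SVRCONN");
                // .bindings-based (per the post): point the provider URL at the
                // location of the .bindings file instead, e.g.:
                // env.put(Context.PROVIDER_URL, "file:///opt/mq/bindings");

                Context ctx = new InitialContext(env);
                // Connection Factory = MQ queue manager name; Destination = MQ queue name
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("QM1");
                Queue queue = (Queue) ctx.lookup("B2B.IN.QUEUE");

                Connection conn = cf.createConnection("mqUser", "mqPassword");
                conn.start();
                // ... create a session and send/receive against 'queue', then clean up
                conn.close();
                ctx.close();
            }
        }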

    Read the article

  • Oracle Introduces a New Line of Defense for Databases

    - by roxana.bradescu
    Today at the 2011 RSA Conference, we announced the immediate availability of our new Oracle Database Firewall, the latest addition to a comprehensive portfolio of database security solutions. Oracle Database Firewall is a network-based software solution that monitors database traffic and can detect and block SQL injection and other attacks before they reach Oracle and non-Oracle databases. According to the 2010 Verizon Data Breach Investigations Report, SQL injection attacks against databases are responsible for 89% of all breached data. SQL injection is a technique for controlling responses from the database server through the application; the attack exploits the inherent trust between the application layer and the back-end database. Previously, the only safeguard organizations had against SQL injection attacks was a complete overhaul of their application code – obviously a very costly, complex, and often impossible undertaking for most organizations. Enter the new Oracle Database Firewall. It can help prevent SQL injection attacks by establishing a defensive perimeter around your databases. Oracle Database Firewall uses innovative SQL grammar analysis to inspect database traffic against pre-defined policies. Normal, expected traffic is allowed to pass (and can optionally be logged to demonstrate regulatory compliance), ensuring no false positives or disruption to your business. SQL statements that are explicitly forbidden, as well as unknown SQL statements, can be passed, logged, alerted on, blocked, or substituted with pre-defined SQL statements. Being able to substitute an unknown, potentially harmful SQL statement with a harmless one is especially powerful, since it foils an attack while allowing the application to operate normally, and it prevents DoS attacks. So, if you're at RSA, stop by our booth or attend the session with Steve Moyle, Oracle Database Firewall CTO. Or if you want to learn more immediately, please watch our on-demand webcast and download the new Oracle Database Firewall Resource Kit with everything you need to get started today.
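    The announcement contains no code, but the attack class is easy to illustrate. A minimal JDBC sketch (table, column, and method names are invented); parameterized queries are the application-side counterpart to the network-side perimeter described above:
        import java.sql.*;

        public class InjectionSketch {
            // Vulnerable: input such as  ' OR '1'='1  is parsed as SQL grammar,
            // silently changing the query's meaning.
            static ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
                Statement stmt = conn.createStatement(); // caller must close
                return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
            }

            // Safer: a parameterized query keeps the input as data, not grammar.
            static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT * FROM users WHERE name = ?");
                ps.setString(1, name);
                return ps.executeQuery();
            }
        }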

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #007

    - by pinaldave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of listing every article, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.
    2006
    Find Stored Procedure Related to Table in Database – Search in All Stored Procedure
    In 2006 I wrote a small script which helps users find all the Stored Procedures (SP) related to one or more specific tables. This was quite a popular script; however, in SQL Server 2012 the same can be achieved using the new DMV sys.sql_expression_dependencies. I recently blogged about it in Find Referenced or Referencing Object in SQL Server using sys.sql_expression_dependencies.
    2007
    SQL SERVER – Versions, CodeNames, Year of Release
    1993 – SQL Server 4.21 for Windows NT
    1995 – SQL Server 6.0, codenamed SQL95
    1996 – SQL Server 6.5, codenamed Hydra
    1999 – SQL Server 7.0, codenamed Sphinx
    1999 – SQL Server 7.0 OLAP, codenamed Plato
    2000 – SQL Server 2000 32-bit, codenamed Shiloh (version 8.0)
    2003 – SQL Server 2000 64-bit, codenamed Liberty
    2005 – SQL Server 2005, codenamed Yukon (version 9.0)
    2008 – SQL Server 2008, codenamed Katmai (version 10.0)
    2012 – SQL Server 2012, codenamed Denali (version 11.0)
    Search String in Stored Procedure
    Searching for a string in stored procedures is one of the most frequent tasks developers do. They might be searching for a table, a view, or any other detail. I have written a script to do the same in SQL Server 2000 and SQL Server 2005. This blog post is worth bookmarking. There is an alternative way to do the same as well; here is the example.
    2008
    SQL SERVER – Refresh Database Using T-SQL
    NO! Some questions have a single answer: NO! You may want to read the question in the original blog post. I had a great time saying No!
    SQL SERVER – Delete Backup History – Cleanup Backup History
    SQL Server stores the history of every backup ever taken in the msdb database. Often the older history is no longer required. The following stored procedure can be executed with a parameter which specifies how many days of history to keep; in the example, 30 is passed to keep one month of history.
    2009
    Stored Procedure are Compiled on First Run – SP taking Longer to Run First Time
    Is a stored procedure pre-compiled? Why does a stored procedure take longer to run the first time? This is a very common question, often discussed by developers and DBAs. There is an absolutely definite answer, but the question has been discussed forever. There is a misconception that stored procedures are pre-compiled. They are not pre-compiled, but compiled during the first run; for every subsequent run the compiled plan is reused, so in effect they behave as pre-compiled. Read the entire article for an example and demonstration.
    Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes
    This is one of the most important performance tuning lessons on my blog. I suggest you spend time this weekend reading it and let me know what you think about the concepts I have demonstrated in the four-part series. Part 1 | Part 2 | Part 3 | Part 4
    Seek Predicate is the operation that describes the b-tree portion of the Seek. Predicate is the operation that describes the additional filter using non-key columns. Based on the description, it is very clear that Seek Predicate is better than Predicate, as it searches indexes, whereas with Predicate the search is on non-key columns – which implies that the search is on the data in the page files itself.
    Policy Based Management – Create, Evaluate and Fix Policies
    This article covers one of the most spectacular features of SQL Server – policy-based management – and how configuring SQL Server with the policy-based management architecture can make a powerful difference. Policy-based management is loaded with several advantages. It can help you implement various policies for reliable configuration of the system. It also provides additional administration assistance to DBAs and helps them effortlessly manage various tasks of SQL Server across the enterprise.
    2010
    Recycle Error Log – Create New Log file without Server Restart
    Once I observed a DBA restarting SQL Server when he needed a new error log file. This was both funny and sad at the same time. There is no need to restart the server to create a new log file or recycle the log file. You can run sp_cycle_errorlog and achieve the same result.
    Get Database Backup History for a Single Database
    Simple but effective script!
    Reducing CXPACKET Wait Stats for High Transactional Database
    The subject is very complex and I have done my best to simplify the concept. In simpler words: when a parallel operation is created for a SQL query, there are multiple threads for a single query, and each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat: the threads which finished first have to wait for the slower thread to finish, and the wait recorded by a specific completed thread is called the CXPACKET Wait Stat.
    Information Related to DATETIME and DATETIME2
    There is quite a lot of confusion around DATETIME and DATETIME2, and DATETIME2 is also one of the most underutilized datatypes in SQL Server. In this blog post I have written a follow-up to my earlier datetime series where I clarify a few of the concepts related to datetime. Difference Between GETDATE and SYSDATETIME | Difference Between DATETIME and DATETIME2 – WITH GETDATE | Difference Between DATETIME and DATETIME2
    2011
    Introduction to CUME_DIST – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic function CUME_DIST(), which provides the cumulative distribution value. It is very difficult to explain this in words, so I attempt a small example to explain the function. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use that for experiments.
    Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions FIRST_VALUE() and LAST_VALUE(), which return the first and last value from a list. It is very difficult to explain this in words, so I attempt a brief example. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use that for experiment purposes.
    OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions Introduced in SQL Server 2012 – ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    “Don’t you think there is bug in your first example where FIRST_VALUE is remain same but the LAST_VALUE is changing every line. I think the LAST_VALUE should be the highest value in the windows or set of result.”
    Puzzle – Functions FIRST_VALUE and LAST_VALUE with OVER clause and ORDER BY
    You can see that rows 2, 3, 4, and 5 have the same SalesOrderID = 43667. The FIRST_VALUE is 78 and the LAST_VALUE is 77. Now, if these functions were working on the maximum and minimum values, they should have given 77 and 80 respectively, instead of 78 and 77. Also, the FIRST_VALUE is greater than the LAST_VALUE of 77. Why? Explain in detail.
    Introduction to LEAD and LAG – Analytic Functions Introduced in SQL Server 2012
    SQL Server 2012 introduces the new analytic functions LEAD() and LAG(), which access data from a subsequent row (for LEAD) and a previous row (for LAG) in the same result set without the use of a self-join. It is very difficult to explain this in words, so I attempt a small example. Instead of creating a new table, I use the AdventureWorks sample database, as most developers use that for experiments.
    A Real Story of Book Getting ‘Out of Stock’ to A 25% Discount Story Available
    Our book was out of stock within 48 hours of arriving in stock! We got a call from the online store with a request for more copies within 12 hours. But we had printed only as many as we had sent them; there were no extra copies. We finally talked to the printer to get more copies. However, due to festivals and holidays, the copies could not be shipped to the online retailer for two days. We knew for sure that they were going to be out of the book for 48 hours. This is the story of how we overcame that situation!
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
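    A quick way to see the window-frame subtlety behind the FIRST_VALUE/LAST_VALUE puzzle above: with an ORDER BY but no explicit frame, LAST_VALUE is evaluated over the default frame, which ends at the current row, so it appears to change on every line; an explicit ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING yields the true last value of the partition. A minimal JDBC sketch (connection details are hypothetical; the query targets the AdventureWorks sample database used in the posts):
        import java.sql.*;

        public class LastValueSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical connection string; any SQL Server with AdventureWorks works.
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks;user=sa;password=secret");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT SalesOrderID, OrderQty, " +
                         // Default frame (ends at the current row): appears to change per row.
                         "  LAST_VALUE(OrderQty) OVER (PARTITION BY SalesOrderID " +
                         "    ORDER BY SalesOrderDetailID) AS lv_default, " +
                         // Explicit full frame: the true last value of the partition.
                         "  LAST_VALUE(OrderQty) OVER (PARTITION BY SalesOrderID " +
                         "    ORDER BY SalesOrderDetailID " +
                         "    ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS lv_full " +
                         "FROM Sales.SalesOrderDetail WHERE SalesOrderID = 43667")) {
                    while (rs.next()) {
                        System.out.printf("%d  qty=%d  default=%d  full=%d%n",
                                rs.getInt(1), rs.getInt(2), rs.getInt(3), rs.getInt(4));
                    }
                }
            }
        }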

    Read the article

  • ArchBeat Link-o-Rama for August 2, 2013

    - by OTN ArchBeat
    Podcast: Data Warehousing and Oracle Data Integrator - Part 2 Part two of the discussion about Data Warehousing and Oracle Data Integrator focuses on how data warehousing is changing and the forces driving that change. Panelists for this discussion are Uli Bethke, Oracle ACE Director Cameron Lackpour, Oracle ACE Director (and guest producer) Gurcan Orhan, and Michael Rainey.
    Case Management In-Depth: Cases & Case Activities Part 1 – Activity Scope | Mark Foster FMW solution architect Mark Foster kicks off a new series with a look at the decisions made on the scope of BPM process case activities.
    Video: Quick Intro to WebLogic Maven Plugin 12.1.2 | Mark Nelson This YouTube video by FMW solution architect Mark Nelson offers a quick introduction to the basics of installing and using the new Oracle WebLogic 12.1.2 Maven Plugin.
    Running the Managed Coherence Servers Example in WebLogic Server 12c | Tim Middleton FMW solution architect Tim Middleton shares the technical details of the new Managed Coherence Servers feature and outlines how you can run the sample application available with a WebLogic Server 12.1.2 install.
    What’s wrong with how we develop and deliver SOA Applications today? | Mark Nelson "When we arrive at the go-live day, we have a lot of fear and uncertainty," says solution architect Mark Nelson of the typical SOA practice. "We have no idea if the system is going to work in production. We have never tested it under a production-like load, and we have not really tested it for performance, longevity, etc."
    OTN Latin America Tour 2013 | Kai Yu Oracle ACE Director Kai Yu shares the session abstracts from his participation in the 2013 Oracle Technology Network Latin America conference tour, which made its way through OUG conferences in Ecuador, Guatemala, Panama, and Costa Rica.
    Webcast: Latest Security Innovations in Oracle Database 12c Oracle Database 12c includes more new security capabilities than any other release in Oracle history! In this webcast, Roxana Bradescu (Director, Oracle Database Security Product Management) will discuss these capabilities and answer your questions. (Registration required.)
    Thought for the Day "The main goal in life career-wise should always be to try to get paid to simply be yourself." — Kevin Smith (Born August 2, 1970) Source: brainyquote.com

    Read the article

  • Unit and Integration testing: How can it become a reflex

    - by LordOfThePigs
    All the programmers on my team are familiar with unit testing and integration testing. We have all worked with it and written tests with it, and some of us have even felt an improved sense of trust in our own code. However, for some reason, writing unit/integration tests has not become a reflex for any of the members of the team. None of us actually feels bad when not writing unit tests at the same time as the actual code. As a result, our codebase is mostly uncovered by unit tests, and projects enter production untested. The problem with that, of course, is that once your projects are in production and already working well, it is virtually impossible to obtain the time and/or budget to add unit/integration testing. The members of my team and I are already familiar with the value of unit testing (1, 2), but that doesn't seem to help bring unit testing into our natural workflow. In my experience, making unit tests and/or a target coverage mandatory just results in poor-quality tests and slows down team members, simply because there is no self-generated motivation to produce these tests. And as soon as the pressure eases, unit tests are not written any more. My question is the following: are there any methods that you have experimented with that help build momentum inside the team, leading to people naturally wanting to create and maintain those tests?

    Read the article

  • links for 2011-02-04

    - by Bob Rhubart
    Oracle WebCenter Suite - Giving Users a Modern Experience: Webcast Q&A (Oracle Enterprise 2.0 Blog) Kellsey Ruppel shares a summary of the viewer Q&A from the recent Oracle WebCenter Suite webcast. (tags: oracle otn enterprise2.0 webcenter)
    Oracle Fusion Middleware Security: Oracle Access Manager 11g Academy: The Policy Model (Part 1) Brian Eidelman kicks off a series of posts covering Oracle Access Manager. (tags: oracle otn fusionmiddleware security)
    The Tom Kyte Blog: A short podcast... Oracle senior technical architect Tom Kyte shares information on a series of upcoming live, in-person events in which he will participate. (tags: oracle otn ioug)
    Oracle and AIIM - Putting Enterprise 2.0 to Work (Oracle Enterprise 2.0 Blog) Brian Dirking shares a recap of the recent online Enterprise 2.0 presentation by Andy MacMillan (Oracle) and Doug Miles (AIIM). (tags: oracle otn enterprise2.0)
    Arun Gupta: WebLogic Developer/Production Web Profile, Full Java EE 6 Platform - Chat Transcript and Slides from OTN Virtual Developer Day Arun Gupta shares chat transcripts and more from the recent OTN Virtual Developer Day focused on WebLogic. (tags: weblogic java)
    Andrejus Baranovskis's Blog: How to Install Oracle ECM 11g PS3 - Domain Configuration Hint Concise instructions from Oracle ACE Director Andrejus Baranovski. (tags: oracle otn oracleace enterprise2.0 weblogic)
    Oracle BI EE 11g & Oracle ADF - Part 1 - Understanding Security Integration Rittman Mead's Venkatakrishnan J explores "how much Oracle ADF or the Oracle Fusion Middleware has influenced most of the features in BI EE 11g." (tags: oracle oracleace businessintelligence obiee)
    Gone With the Wind: Where Have All the Composites Gone? SOA author Antony Reynolds solves a mystery. (tags: oracle otn soa)
    Playing with Oracle 11gR2, OEL 5.6 and VirtualBox 4.0.2 (1st Part) "This installation should never be used for Production or Development purposes. This installation was created for educational purpose only, and is extremely helpful to learn and understand how Oracle works if you do not have access to a traditional hardware resource." - Oracle ACE Director Francisco Munoz Alvarez (tags: oracle otn virtualbox virtualization)

    Read the article

  • Engineering as a Service

    - by jgelhaus
    Oracle Exadata Database Machine is known for great compute performance, and over the past few years it has also become known as a great platform for any type of Oracle Database workload, from data warehousing to online transaction processing (OLTP). But now organizations are turning to Oracle Exadata for business efficiencies and private cloud solutions – for consolidation and database as a service (DBaaS).
    University of Minnesota
    For an inside look at how DBaaS is working in the real world, it's worth checking into the University of Minnesota's database hotel. With more than 50,000 students, the University of Minnesota in Minneapolis is one of the largest universities in the United States. The university's centralized IT group not only has to support all those students but also must provide support and services to more than 40 departments and colleges within the university. They have two Exadata Database Machine X2-2 half-rack systems from Oracle, with four database nodes each and roughly 30 terabytes of usable disk space per system. The university is using Oracle Real Application Clusters (Oracle RAC) for high availability and the Data Guard feature of Oracle Database, Enterprise Edition, for disaster recovery capabilities. The deployment has been live in production since May 2011.
    Overhead Door
    When it comes to overhead, revolving, sliding, or other specialty residential and commercial doors, Overhead Door is the worldwide leader. But when they needed to open doors with their customers through a better, faster, and more agile IT infrastructure, Overhead Door turned to Oracle and Oracle Exadata. Oracle Exadata Database Machine plays an important part in Overhead Door's IT and business strategy. The organization has two Exadata Database Machine X2-2s deployed, one in production and one in development and testing.
    Read the full Oracle Magazine article: Engineering as a Service

    Read the article

  • I can't uninstall Ubuntu software

    - by cunix
    root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help
      python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger
      libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools
    Use 'apt-get autoremove' to remove them.
    The following packages will be REMOVED:
      fern-wifi-cracker
    0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
    After this operation, 3,514kB disk space will be freed.
    Do you want to continue [Y/n]? y
    (Reading database ... 167661 files and directories currently installed.)
    Removing fern-wifi-cracker ...
    dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error
    dpkg: error processing fern-wifi-cracker (--remove):
     subprocess installed pre-removal script returned error exit status 2
    Errors were encountered while processing:
     fern-wifi-cracker
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    How do I uninstall it?

    Read the article

  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA:
    - channels are built once; they are like water channels with a current
    - you "drop" lookahead symbols in the right places (sources) of the channel, and they propagate with the "current"
    - when a symbol propagates, there are no barriers (the only things sufficient for propagation are the presence of a channel and its direction/current); i.e. a lookahead cannot just die out of the blue
    Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source when the DFA state does not include the source NFA state? I assume not -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p.301), you will see there are items without eof in the lookahead. The grammar for this DFA is:
    S -> E
    E -> E - T
    E -> T
    T -> ( E )
    T -> n
    So how was it computed? Where was eof dropped, and on what condition?
    Update
    It is a textual pdf, so here are the two interesting states (in the DFA; # is eof):
    State 1:
    S -> • E [#]
    E -> • E - T [# -]
    E -> • T [# -]
    T -> • n [# -]
    T -> • ( E ) [# -]
    State 6:
    T -> ( • E ) [# - )]
    E -> • E - T [- )]
    E -> • T [- )]
    T -> • n [- )]
    T -> • ( E ) [- )]
    The arc from 1 to 6 is labeled (.
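    Not an answer from the thread, but the propagation mechanics can be made concrete. Below is a minimal, self-contained Java sketch of fixpoint propagation over channels for exactly this grammar, modeling only the channels relevant to states 1 and 6; the item labels and the split into "copy channels" versus "generate-only drops" are my own shorthand for the textbook construction. The point it demonstrates: a prediction channel copies the source item's lookaheads only when the text after the predicted nonterminal can derive the empty string; otherwise it only drops FIRST of that text at the target. That is why # survives into the kernel item T -> ( • E ) of state 6 but never reaches that state's predicted E and T items.
        import java.util.*;

        public class Channels {
            record Edge(String from, String to) {}

            static void seed(Map<String, Set<String>> la, String item, String sym) {
                la.computeIfAbsent(item, k -> new HashSet<>()).add(sym);
            }

            public static void main(String[] args) {
                // Copy channels: lookaheads flow through unchanged. They exist for
                // goto transitions (the dot advances over a symbol) and for
                // predictions where the text after the predicted nonterminal is
                // nullable. "@6" marks items in the state reached over '('.
                List<Edge> copy = List.of(
                    new Edge("S->.E",     "E->.E-T"),    // predict E, beta = eps
                    new Edge("S->.E",     "E->.T"),
                    new Edge("E->.T",     "T->.n"),      // predict T, beta = eps
                    new Edge("E->.T",     "T->.(E)"),
                    new Edge("E->.T@6",   "T->.n@6"),
                    new Edge("E->.T@6",   "T->.(E)@6"),
                    new Edge("T->.(E)",   "T->(.E)@6"),  // goto over '(': always copies
                    new Edge("T->.(E)@6", "T->(.E)@6"));
                // Generate-only drops: the predictions from E->.E-T (beta = "-T")
                // and from T->(.E)@6 (beta = ")") are NOT copy channels because
                // beta is not nullable; they only drop FIRST(beta) at the targets.
                Map<String, Set<String>> la = new HashMap<>();
                seed(la, "S->.E", "#");                  // eof enters at the start item
                for (String t : List.of("E->.E-T", "E->.T")) seed(la, t, "-");
                for (String t : List.of("E->.E-T@6", "E->.T@6")) { seed(la, t, "-"); seed(la, t, ")"); }

                boolean changed = true;                  // naive fixpoint iteration
                while (changed) {
                    changed = false;
                    for (Edge e : copy) {
                        Set<String> dst = la.computeIfAbsent(e.to(), k -> new HashSet<>());
                        changed |= dst.addAll(la.getOrDefault(e.from(), Set.of()));
                    }
                }
                // Prints state-1 items with [#, -] and state-6 items with [-, )],
                // matching fig. 9.34: # only survives into T->(.E)@6.
                new TreeMap<>(la).forEach((item, set) -> System.out.println(item + "  " + set));
            }
        }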

    Read the article

  • Variant Management – Which Approach Fits My Product?

    - by C. Chadwick
    Jürgen Kunz – Director Product Development – ORACLE Deutschland B.V. & Co. KG
    Introduction
    In a difficult economic environment, it is important for companies to understand customer requirements in detail and to address them in their products. Customer-specific products, however, usually cause increased costs. Variant management helps to find the combination of standard components and custom components which best balances the customer's product requirements against product costs. Depending on the type of product, different approaches to variant management apply. For example, the automotive product "car", or electronic/high-tech products like a "computer", with a pre-defined set of options to be combined into the individual configuration (so-called "Assembled-to-Order" products), require a different approach than products in heavy machinery, which are (at least partially) engineered in a customer-specific way (so-called "Engineered-to-Order" products). This article discusses different approaches to variant management. Starting with the simple Bill of Material (BOM), it presents three different approaches, all of which are provided by Agile PLM.
    Single level BOM and Variant BOM
    The single level BOM is the basic form of the BOM. The product structure is defined using assemblies and single parts, so a particular product is represented by a fixed product structure. As soon as you have to manage product variants, the single level BOM is no longer sufficient; a variant BOM is needed. The variant BOM is sometimes referred to as a 150% BOM, since it contains more parts and assemblies than are actually needed to assemble the (final) product – about 150% of the parts. You can evolve the variant BOM from the single level BOM by replacing single nodes with a placeholder node. The placeholder represents the possible variants of a part or assembly. Product structure nodes which are part of every product are so-called "must-have" parts; "optional" parts can be omitted in the final product. Additional attributes allow limiting the quantity of parts/assemblies which can be assigned at a certain position in the variant BOM. Figure 1 shows the variant BOM of Agile PLM.
    Figure 1 Variant BOM in Agile PLM
    During the instantiation of the variant BOM, the placeholders are replaced by specific variants of the parts and assemblies. The selection of the desired or appropriate variants is either done step by step by the user or by applying pre-defined configuration rules. As a result of the instantiation, an independent BOM is created (Figure 2).
    Figure 2 Instantiated BOM in Agile PLM
    This kind of variant BOM can be used for "Assembled-to-Order"-type products as well as for "Engineered-to-Order"-type products. For "Assembled-to-Order"-type products, the instantiation is typically done automatically with pre-defined configuration rules. For "Engineered-to-Order"-type products, at least part of the product is selected manually to make use of customized parts/assemblies that have been engineered according to the specific customer requirements.
    Template BOM
    The Template BOM, another type of variant BOM, is used for "Engineered-to-Order"-type products. The engineer works in a flexible environment which allows him to build the most creative solutions. At the same time, the engineer shall be guided to re-use existing solutions, and it shall be assured that product variants of the same product family share the same base structure. The template BOM defines the basic structure of products belonging to the same product family. Let's take a gearbox as an example. The customer-specific configuration of the gearbox is influenced by several parameters (e.g. rpm range, transmitted torque), which are defined in the customer's requirements document. Figure 3 shows part of a Template BOM (yellow) and its relation to the product family hierarchy (blue).
    Figure 3 Template BOM
    Every component of the Template BOM has links to the variants that have been engineered so far for that component (depending on the level in the Template BOM, these are product variants, assembly variants or single part variants). This library of solutions, the so-called solution space, can be used by the engineers to build new product variants. In the best case, the engineer selects an existing solution variant, such as the gearbox shown in Figure 3. When the existing variants do not fulfill the specific requirements, a new variant is engineered. This new variant must be compliant with the given Template BOM: if we look at the gearbox in Figure 3, it must consist of a transmission housing, a connecting plate, a set of gears and a planetary transmission – presuming that all components are must-have components. The new variant enhances the solution space and is automatically available for re-use in future variants. The result of the instantiation of the Template BOM is a stand-alone BOM which represents the customer-specific product variant.
    Modular BOM
    The concept of the modular BOM was invented in the automotive industry. Passenger cars are "Assembled-to-Order" products. The customer first selects the specific equipment of the car (the so-called specifications) – for instance engine, audio equipment, rims, color. Based on this information, the required parts are determined and the customer-specific car is assembled. Certain combinations of specifications are not available to the customer, either because they are not feasible from a technical perspective (e.g. a convertible with a sun roof) or because the combination will not be offered for marketing reasons (e.g. steel rims with a sports-line car). The modular BOM (yellow structure in Figure 4) is defined in the context of a specific product family (in the sample it is the product family "Speedstar"). It is the same modular BOM for the different types of cars in the product family (e.g. sedan, station wagon). The assemblies or single parts of the car (blue nodes in Figure 4) are assigned at the leaf level of the modular BOM. The assignment of assemblies and parts to the modular BOM is enriched with a configuration rule (purple elements in Figure 4). The configuration rule defines the conditions for using a specific assembly or single part, and it is valid in the context of a type of car (green elements in Figure 4). Color-specific parts are assigned to the color-independent parts via additional configuration rules (grey elements in Figure 4). The configuration rules use Boolean operators to connect the specifications. Additional consistency rules (constraints) may be used to define invalid combinations of specifications (so-called exclusions); furthermore, consistency rules may be used to add specifications to the set of specifications. For instance, it is important that a car with a diesel engine is always built using the high-capacity battery.
    Figure 4 Modular BOM
    The calculation of the car configuration consists of several steps. First, the consistency rules (constraints) are applied; as a result, specifications might be added automatically. The second step determines the assemblies and single parts for the complete structure of the modular BOM by evaluating the configuration rules in the context of the current type of car. The evaluation of the rules for one component in the modular BOM might result in several rules being fulfilled; in this case, the most specific rule (typically the longest rule) wins. Thanks to this approach, it is possible to add a specific variant to the modular BOM without the need to change any other configuration rules. As a result, the whole set of configuration rules is easy to maintain. Finally, the color-specific assemblies and parts are determined and the configuration is completed.
    Figure 5 Calculated Car Configuration
    The result of the car configuration is shown in Figure 5: the list of assemblies and single parts (blue components in Figure 5) which are required to build the customer-specific car.
    Summary
    There are different approaches to variant management; three of them have been presented in this article. At the end of the day, it is the type of the product which decides the best approach. For "Assembled-to-Order"-type products, it is very likely that you can define the configuration rules and calculate the product variant automatically. Products of type "Engineered-to-Order", however, need to be engineered; nevertheless, in the majority of cases part of the product structure can be generated automatically, in a similar way to "Assembled-to-Order"-type products. That said, it is important to first analyze the product portfolio in order to define the best approach to variant management.
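    The rule-evaluation step lends itself to a compact illustration. Below is a minimal Java sketch of the "most specific rule wins" evaluation for a single leaf position of a modular BOM; all class, part, and specification names are invented for this example.
        import java.util.*;

        public class ModularBom {
            // A configuration rule: the part applies when all required
            // specifications are present in the chosen specification set.
            record Rule(Set<String> requiredSpecs, String part) {
                boolean matches(Set<String> specs) { return specs.containsAll(requiredSpecs); }
            }

            // "Most specific rule wins": among the matching rules, pick the one
            // naming the most specifications (a stand-in for "the longest rule").
            static Optional<String> resolve(List<Rule> candidates, Set<String> specs) {
                return candidates.stream()
                        .filter(r -> r.matches(specs))
                        .max(Comparator.comparingInt((Rule r) -> r.requiredSpecs().size()))
                        .map(Rule::part);
            }

            public static void main(String[] args) {
                // Leaf position "battery" in the modular BOM, two guarded candidates.
                List<Rule> battery = List.of(
                    new Rule(Set.of(), "standard-battery"),
                    new Rule(Set.of("diesel"), "high-capacity-battery"));
                // "diesel" is present here; in a full engine, a consistency rule
                // could have added it to the specification set automatically.
                Set<String> specs = Set.of("diesel", "sunroof");
                System.out.println(resolve(battery, specs)); // Optional[high-capacity-battery]
            }
        }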

    Read the article

  • What's a good Game development platform for a platformer game with these characteristics?

    - by Joe
    Yes, I know, the best way to make an indie game is to learn to code. I've got some scripting experience, but I want to do worldbuilding with already-existing tools (and the communities surrounding those tools), and I've been really impressed with games like An Untitled Story that were made with pre-packaged toolsets at their core, like Game Maker. :) So I'm planning to make my game using either Game Maker or something like it. The basic parameters of my planned game:
    - 2D platformer.
    - Physics/speed akin to Sonic the Hedgehog.
    - Large, non-linear world, flowing as seamlessly as possible -- think Super Metroid, but without the forced screen transitions.
    The first two points have me leaning toward Game Maker -- plenty of 2D platformers have been made with it, and there are serviceable, openly available Sonic-the-Hedgehog-style physics engines for it that could be adapted to my needs with minimal muss and fuss. But the third makes me antsy -- from what limited information I've heard, Game Maker has problems with large levels/boards/screens/whateveryoucallthem, thus necessitating transitions between screens. I want to avoid that if at all possible -- it would, I believe, fundamentally alter the flow of the game. I understand that, generally speaking, the more you have loaded into memory the more things are going to chug (especially for a one-size-fits-all game development platform that isn't a model of efficient coding), but I'm hoping there are systems that can unload objects that are sufficiently far offscreen and thus better preserve seamlessness. Any thoughts, people? :) The sooner I can get a basic pre-fab physics engine and world-building program up and running, the sooner I can start prototyping areas and generally tooling around. Should I be looking at Game Maker, or elsewhere? (My current plan is to more-or-less build the game prototype-style, then worry about art and sound at the very end, once the damn thing is playable.)
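    For what it's worth, the "unload objects that are sufficiently far offscreen" idea is an engine-agnostic pattern; here is a minimal Java sketch of distance-based activation around the camera (all names are invented), independent of whatever mechanism a given toolset exposes for it:
        import java.util.ArrayList;
        import java.util.List;

        public class ActivationSketch {
            static class GameObject {
                double x, y;
                boolean active;           // only active objects get simulated/drawn
                GameObject(double x, double y) { this.x = x; this.y = y; }
            }

            // Each frame: keep only objects within 'radius' of the camera active,
            // so a large seamless world never pays for everything at once.
            static void updateActivation(List<GameObject> all,
                                         double camX, double camY, double radius) {
                double r2 = radius * radius;
                for (GameObject o : all) {
                    double dx = o.x - camX, dy = o.y - camY;
                    o.active = dx * dx + dy * dy <= r2;
                }
            }

            public static void main(String[] args) {
                List<GameObject> world = new ArrayList<>(List.of(
                        new GameObject(0, 0), new GameObject(5000, 40)));
                updateActivation(world, 10, 10, 1024);
                world.forEach(o -> System.out.println(o.x + "," + o.y + " active=" + o.active));
            }
        }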

    Read the article

  • Java EE 6: How to get module name and app name

    - by user12798506
    Java EE 6 defines pre-defined JNDI names through which a component can discover the name of its own module and application [1]: looking up "java:module/ModuleName" returns the module name, and looking up "java:app/AppName" returns the application name.
    InitialContext ctx = new InitialContext();
    String moduleName = (String) ctx.lookup("java:module/ModuleName"); // the module name
    String appName = (String) ctx.lookup("java:app/AppName"); // the application name
    The same values can also be injected into a field with the @Resource annotation:
    @Resource(lookup="java:module/ModuleName")
    String moduleName; // the module name is injected
    @Resource(lookup="java:app/AppName")
    String appName; // the application name is injected
    When web modules and EJB modules are packaged inside an EAR, they share the same AppName. The lookups were verified on GlassFish V3 (3.1.2.2), WebLogic 12c (12.1.1), and JBoss AS 7 (7.1.1), which all return the expected AppName and ModuleName. On Apache TomEE (1.5.0), a Java EE 6 Web Profile implementation, the ModuleName comes back in the form "localhost/<web module name>", which differs from the other servers. Apache Tomcat 7.0 supports Servlet 3.0 but does not provide these JNDI entries (note: Tomcat by itself is not a Java EE application server).
    [1] The Java EE 6 specification (JSR 316) defines them as follows (pp.122-123):
    EE.5.15 Application Name and Module Name References
    A component may access the name of the current application using the pre-defined JNDI name java:app/AppName. A component may access the name of the current module using the pre-defined JNDI name java:module/ModuleName. Both of these names are represented by String objects.

    Read the article

  • please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I downloaded a tarball and extracted it. Now I run "./configure", and it searches in pre-defined directories, from what I understand, for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then I run "make", which compiles the source code and hard-codes the locations of the libraries? Am I correct? I do not really understand whether libraries are files with pre-defined paths or whether the OS somehow gives access to the libraries through system calls. Another example: I compiled something on my computer, then moved it to a remote server. The executable needs the MySQL libraries to work; the server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to know where it tries to locate this file, or to give it another path? I can't compile it on the server since it has no compiler, and I don't have root access to install packages. My last question: in the sequence "./configure", "make", "make install", is the "make install" command the only one that actually puts files outside the directory in which these files reside? If, for example, the software will be installed in /usr/local/, is the "make install" command the only one that will require "sudo" before it? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system, "make" compiles the source code according to this makefile, and "make install" moves the files to their appropriate location. I know this has been very long; I thank anyone who had the patience to read my question :)

    Read the article

  • SOA & Application Grid Specialization – Education Implementation Assessment - Step 4 of 6

    - by Jürgen Kress
    In our first step to become SOA Specialized & Application Grid Specialized, we highlighted the OMM system to register your opportunities. In our second step we featured marketing activities to create your reference cases and run joint marketing campaigns. In the third step we focused on the competence center assessments: SOA Sales assessment & SOA Pre-Sales assessment & Support assessment / Application Grid Sales assessment & Application Grid Pre-Sales assessment & Support assessment. In the fourth step we focus on the education implementation assessment criteria:
    · Oracle Application Grid Certified Implementation Specialist
    · Oracle Service-Oriented Architecture Certified Implementation Specialist
    Bootcamp training steps (optional):
    · Login to Oracle Partner Network (for login support, contact the Partner Business Centers)
    · Attend a SOA or Application Grid bootcamp to learn the product hands-on
    · Find a training close to your location in the local training calendar
    Pearsonvue steps:
    · Go to http://www.pearsonvue.com/Oracle/
    · Create a web account (this can take up to 24 hours); if you need your OPN Company ID, please contact the Partner Business Centers
    · Register for and attend the Oracle Service-Oriented Architecture Certified Implementation Specialist (1Z1-451) or Oracle Application Grid Certified Implementation Specialist (1Z1-523) exam at a training center close to you. The Application Grid Specialization is in its beta phase, therefore we are giving away free vouchers; please contact Jürgen Kress if you would like one.
    · Submit your successful exam
    If you need an Oracle Partner Network account, please contact our Partner Business Centers. For more information on Specialization, please visit our OPN Specialized Webcast Series and become a member of our SOA Partner Community; for registration please visit www.oracle.com/goto/ema/soa
    Jürgen Kress, SOA Partner Adoption EMEA
    Thanks for your efforts to become Specialized!
    Technorati Tags: soa specialization

    Read the article

  • NDC Oslo

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2013/06/14/153136.aspx
    2013 has been a hectic year for conference presentations so far. NDC in Oslo was the 6th conference I have attended, and my session there was my 11th conference presentation this year. I had been meaning to make the short trip over from Stockholm to NDC for a few years, and this was the first time I made it. I had heard a lot of great things about the event, and was impressed with the location, the sessions, and most of all the atmosphere around the event booths and during the party on Thursday evening. The session I was delivering was my "Grid Computing with 256 Windows Azure Worker Roles & Kinect" demo, which I have delivered at many events over the past 12 months. The demo went fine. I'm always a little nervous when I try to scale out the application to 256 worker roles; it almost always works well and the application will scale in minutes, but very occasionally there can be a longer delay due to the provisioning process in the Windows Azure data centers. This would not be an issue for many scenarios, but when standing on stage in front of a room full of developers you really want things to run smoothly. A number of people have suggested that I should pre-provision an environment so that it is guaranteed to be there when I run the demo during a session. For me the aim has always been to show the rapid scalability of cloud-based platforms live on stage. Pre-provisioning an environment may make for a more reliable demo, but to me that would be cheating, and not half as much fun!

    Read the article

  • Release/Change management - best approach

    - by Bob Rivers
    I asked this question a year ago on StackOverflow and never got a good answer. Since Programmers seems to be a better place to ask it, I'll give it a try... What is the best way to handle release management? More specifically, what would be the best way to release packages? For example, assume that you have a relatively stable system, a good quality assurance process (QA), etc. How do you prefer to release new versions? Let's assume that we are talking about a mid-to-large "centralized" web system (no clients), developed in-house. This system can be considered "vital" to corporate operations. I have a tendency to prefer releasing packages at regular intervals, no greater than 1 to 3 months apart. During this period, I include fixes and improvements in the package, and deploy to the production environment only once. But I've seen some people who prefer to put small changes into production with a greater frequency. Their claim is that by doing so, it is easier to identify bugs that have slipped through the QA process: in a package with 10 changes versus another with only 1, it is much easier to know what caused the problem in the package with just one change... What is your opinion?

    Read the article

  • Make the sound louder in Lubuntu

    - by Andrew
    I have a Toshiba r835 running Lubuntu 11.10. Turning the volume slider up all the way doesn't give very loud sound. I've tried typing alsamixer in a terminal and turning up all the levels there to maximum, but the speakers are still fairly quiet. Is there a simple way to increase maximum volume in software? I understand that there are physical limits to the sound the laptop's speakers can produce, but I suspect my maximum volume is limited by software.
    EDIT: This is exactly the type of solution I'm looking for. However, it doesn't work for me. What I did:
    sudo pico /etc/asound.conf
    This file does not exist, so I create a new one, containing:
    pcm.!default {
        type plug
        slave.pcm "softvol"
    }
    pcm.softvol {
        type softvol
        slave {
            pcm "dmix"
        }
        control {
            name "Pre-Amp"
            card 0
        }
        min_dB -5.0
        max_dB 20.0
        resolution 6
    }
    I reboot the machine, and type alsamixer. I use my left/right arrow keys to inspect the various volume options. I expect to see a new option, called Pre-Amp, but I don't see one. This fix seems to work for other people. Why doesn't this fix work for me?

    Read the article

  • How to force remove a package if dpkg removal script fails?

    - by fodon
    I'm trying to remove a package whose /etc/init.d/disco-master file I deleted (in an attempt to remove the package manually). I want to remove the disco-master package. How do I do this now?
    This is what happens when I do sudo apt-get remove disco-master:
    removing disco-master ...
    invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
    dpkg: error processing disco-master (--remove):
     subprocess installed pre-removal script returned error exit status 100
    Errors were encountered while processing:
     disco-master
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    When I do sudo apt-get install --reinstall disco-master I get the following:
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     disco-master : Depends: disco-node (= 0.4.2+nmu1) but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    When I do sudo apt-get -f install I get this:
    Unpacking disco-node (from .../disco-node_0.4.2+nmu1_amd64.deb) ...
    dpkg: error processing /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb (--unpack):
     trying to overwrite '/usr/lib/disco/master/ebin/disco.app', which is also in package disco-master 0.4.1
    No apport report written because MaxReports is reached already
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    When I run sudo apt-get remove disco-node I get the following:
    Package disco-node is not installed, so not removed
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     disco-master : Depends: disco-node (= 0.4.1) but it is not going to be installed
                    Depends: python-disco (= 0.4.1) but 0.4.2+nmu1 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
    When I did sudo dpkg -P --force-all disco-master I got:
    Removing disco-master ...
    invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
    dpkg: error processing disco-master (--purge):
     subprocess installed pre-removal script returned error exit status 100
    Errors were encountered while processing:
     disco-master

    Read the article

  • Disabling shutdown command for all users, even root - consequences?

    - by Rich
    I would like to disable the shutdown command for all users, even root, on an Ubuntu Server installation. The reason I want to do this is to ensure that I don't get into the habit of shutting down the machine this way, as I SSH into a lot of production machines at the same time as this one, and I don't want to accidentally shut down one of the other machines by typing the command into the wrong window. The server I want to disable shutdown on only runs inside VirtualBox on my Windows desktop, and I only use it for local testing, so it is not a problem if I can't shut it down from the command line. I have already mitigated the problem a bit by ensuring I have a different password on the VirtualBox image, but obviously if I am within the sudo 'window' on one of the production machines, I could still accidentally shut it down. My questions are: How do I disable the shutdown command? If I do disable the shutdown command, are there any consequences I should be aware of? Most specifically, will it disable support for ACPI shutdown, which is the equivalent of pressing the power button on a physical machine? Could it affect other generic applications? For information, I just use this VirtualBox image for trying out shell scripts, running Tomcat and Java, and that kind of thing.

    Read the article

  • using Moniker.com's nameservers

    - by user7519
    I have a VPS with A2Hosting for which I need to upgrade the OS. However, they've changed their VPS packages and forced me to order a new one. I went with an "unmanaged" package and have only just realised that they do not provide any DNS service at all, not even nameservers. Support tells me that "since your domain is not hosted with us, but with Moniker, you would not be able to use these nameservers. Your domain registrar should have a set of default nameservers that you can use, then create an A record to point to" my IP address. Moniker does provide for using their nameservers, but I'm confused about which "pre-defined zone configuration" to use. The options are:
    - Domain Parking
    - Domain Parking with Email Forwarding
    - URL and Email Forwarding
    - URL Forwarding
    - URL Forwarding & CoolHandle Email
    I just want to use their nameservers and then create A & MX records pointing to the VPS. What do they mean by forwarding? I get the feeling it's a service that I don't want. Or is it that I need to have a pre-defined zone only temporarily, and THEN set the A & MX records? Which of these should I choose?

    Read the article

  • JDK bug migration: bugs.sun.com now backed by JIRA

    - by darcy
    The JDK bug migration from a Sun legacy system to JIRA has reached another planned milestone: the data displayed on bugs.sun.com is now backed by JIRA rather than by the legacy system. Besides maintaining the URLs to old bugs, bugs filed since the migration to JIRA are now visible too. The basic information presented about a bug is the same as before, but reformatted and using JIRA terminology: Instead of a "category", a bug now has a "component / subcomponent" classification. As outlined previously, part of the migration effort was reclassifying bugs according to a new classification scheme; I'll write more about the new scheme in a subsequent blog post. Instead of a list of JDK versions a bug is "reported against," there is a list of "affected versions." The names of the JDK versions have largely been regularized; code names like "tiger" and "mantis" have been replaced by the release numbers like "5.0" and "1.4.2". Instead of "release fixed," there are now "Fixed Versions." The legacy system had many fields that could hold a sequence of text entries, including "Description," "Workaround", and "Evaluation." JIRA instead only has two analogous fields labeled as "Description" and a unified stream of "Comments." Nearly coincident with switching to JIRA, we also enabled an agent which automatically updates a JIRA issue in response to pushes into JDK-related Hg repositories. These comments include the changeset URL, the user making the push, and a time stamp. These comments are first added when a fix is pushed to a team integration repository and then added again when the fix is pushed into the master repository for a release. We're still in early days of production usage of JIRA for JDK bug tracking, but the transition to production went smoothly and over 1,000 new issues have already been filed. Many other facets of the migration are still in the works, including hosting new incidents filed at bugs.sun.com in a tailored incidents project in JIRA.

    Read the article

  • DPKG errors after upgrade to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly. No matter what I do – cleaning the apt cache, manual installs using dpkg, etc. – I just can't get them to install. What is happening here, and how do I fix this? If I had thought 12.10 would be this much of a hassle, I would never have upgraded... Here is a sampling of the output from "apt-get -f install":
    Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
    /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
    dpkg: warning: subprocess old pre-removal script returned error exit status 2
    dpkg: trying script from the new package instead ...
    /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
    dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack):
     subprocess new pre-removal script returned error exit status 2
    /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
     /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
     /var/cache/apt/archives/pcmciautils_018-8_i386.deb
     /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
     /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
     /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
     /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    It is also just these 6 packages; no other packages have given me this kind of trouble – well, so far. It was just 5, but then I got an update for unity, and now unity-common has been added to the troublemakers, which prevents me from further upgrading the actual unity package, as this package is a dependency.

    Read the article

  • Google+ Platform Office Hours for June 13th, 2012

    Here are the show notes for this week's office hours. This week was devoted to your questions and our answers. We covered a wide breadth of topics.
    0:43 - Introductions
    2:54 - About Tabletop Forge's KickStarter - goo.gl
    10:00 - Can I run multiple Hangout Apps at the same time?
    12:28 - Is Google looking into adding more powerful Hangout moderation controls?
    13:47 - How do you use Hangout Apps with Hangouts on Air? - +Fraser Cain's tips and tricks for Hangouts on Air: goo.gl
    23:40 - I have an Android game. How do I port it to the Hangouts API?
    27:57 - Pre-hangout Apps, Hangouts on Air pre-rolls, scheduling hangouts and other ways to help viewers find your Hangouts on Air
    33:55 - How do I bookmark useful Google+ posts with Google+?
    38:13 - Can you add a host ID field to the Hangouts API? When will the overlay garbage collection improve?
    40:17 - Hand movement tracking as part of the Hangouts API
    Thanks to everyone who joined the hangout and asked questions on Google+!

    Read the article

  • SQL SERVER – Caption the Cartoon Contest – Last 2 Days

    - by pinaldave
    A developer's life is very interesting: we often want to start the day early at the job so we can go home early. However, that day never comes, as the life of the developer is always about working late hours. If the developer goes to the office early, there is a good chance that his co-workers will come in late. Additionally, I am confident that there will always be something urgent for developers or DBAs to solve right at the time they are ready to go home. This is the life of developers! Here is the interesting story of a DBA who was about to go home. He had to take his girlfriend to a movie and dinner in 30 minutes. However, his manager asked him to fix the performance-related issues with their production server. In a normal case, he would have had only two choices: a) job or b) girlfriend. Well, our super-hero DBA decided to use efficient tools and improved the performance of the production server in merely 30 minutes. When he was done, his manager was absolutely surprised by the efficiency and accuracy of his work, and asked him the following question - Here is the contest: you need to guess the answer of our super-hero DBA. If you guess the answer correctly, you may win a Star Wars R2-D2 inflatable remote-controlled device. Additionally, if you download DB Optimizer before Dec 8, 2012, you will be eligible for a USD 25 Amazon Gift Card (there are a total of 10 such awards). Please do not leave comments in this thread – to participate in the contest, please leave a comment on the original contest page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • BI-Applications Special Price Promotion for Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    Partners should keep in mind the “Midsize Market” pricing promotion for BI Applications solution packages, with reduced minimums applicable to Oracle's Business Intelligence products and a pre-approved 50% discount. Partners additionally get their normal e-business reseller discount. This now makes it very attractive to offer the pre-built BI Applications – such as Manufacturing Analytics, Financial Analytics, Procurement and Spend Analytics, Project Analytics, and Human Resources Analytics – both to customers newly implementing Oracle ERP and to the many existing Oracle ERP (E-Business Suite, PeopleSoft, and JDE) customers. To answer any questions, to get the partner document with further details of this offer, or to work with us on our local sales campaigns targeting existing ERP customers, please send your query to [email protected] or [email protected], or discuss it with your local Oracle Sales or Channel representative for Applications to Midsize Enterprises. This promotion is ONLY for end customers whose organisations have an annual revenue (or public sector budget) below $500 million and who are based in Europe, the Middle East or Africa. For more information, see the original article, “New fy13 BI-Applications Price Promotion for MIDSIZE CUSTOMERS”, and send your query to [email protected].

    Read the article
