Search Results

Search found 1190 results on 48 pages for 'catalog'.


  • Magento - use an alternate "price.phtml" (in addition to the existing one)

    - by sdek
    I am looking for a way to have an alternate template/catalog/product/price.phtml used in one specific location, and to continue using the existing price.phtml file in all other locations. To explain further, I need to display the regular price, and then another special price right below it - but only on the product page (for the main product being displayed). This special price is not a price that can be calculated by the catalog price rules, so I wrote my own module to do the calculation.

    So, everywhere that I am displaying prices I want to display with the regular ol' template/catalog/product/price.phtml file... but for the product page (the main product - not the related, upsells, etc.) I want to use my own custom template/catalog/product/price-custom.phtml template file. Can anybody help? Normally I just look in the layout XML files (for example catalog.xml) to find these types of things, but price.phtml is kinda special - it isn't that simple. And for the life of me I can't figure out if there is an easy way to swap it out conditionally on the page being viewed. I am aware that I can just update price.phtml to always print out this extra price, and then use CSS to hide the price everywhere, but I would rather not do that if possible. (Also you may want to know that I only have simple products.)
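    A minimal layout sketch of one possible approach (not from the question): price rendering goes through addPriceBlockType on the product blocks, so a theme's local.xml could re-register the template for simple products on the product-view block only; product.info is the stock block name from catalog.xml, and price-custom.phtml is the hypothetical custom template mentioned above.

        <?xml version="1.0"?>
        <!-- app/design/frontend/<package>/<theme>/layout/local.xml (sketch) -->
        <layout>
            <catalog_product_view>
                <!-- product.info is only the main product block on the product page,
                     so related/upsell/list blocks keep the default price.phtml -->
                <reference name="product.info">
                    <action method="addPriceBlockType">
                        <type>simple</type>
                        <block>catalog/product_price</block>
                        <template>catalog/product/price-custom.phtml</template>
                    </action>
                </reference>
            </catalog_product_view>
        </layout>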

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in bacula I am getting Input/output errors. I am just getting started in trying to set this up: Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ... Sending label command for Volume "ACJ332" Slot 1 ... 3307 Issuing autochanger "unload slot 8, drive 0" command. 3304 Issuing autochanger "load slot 1, drive 0" command. 3305 Autochanger "load slot 1, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ332", Slot 1 successfully created. Sending label command for Volume "ACJ331" Slot 2 ... 3307 Issuing autochanger "unload slot 1, drive 0" command. 3304 Issuing autochanger "load slot 2, drive 0" command. 3305 Autochanger "load slot 2, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ331" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ331", Slot 2 successfully created. Sending label command for Volume "ACJ328" Slot 3 ... 3307 Issuing autochanger "unload slot 2, drive 0" command. 3304 Issuing autochanger "load slot 3, drive 0" command. 3305 Autochanger "load slot 3, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ328" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ328", Slot 3 successfully created. Sending label command for Volume "ACJ329" Slot 4 ... 3307 Issuing autochanger "unload slot 3, drive 0" command. 3304 Issuing autochanger "load slot 4, drive 0" command. 3305 Autochanger "load slot 4, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ329" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ329", Slot 4 successfully created. Sending label command for Volume "ACJ335" Slot 5 ... 3307 Issuing autochanger "unload slot 4, drive 0" command. 3304 Issuing autochanger "load slot 5, drive 0" command. 3305 Autochanger "load slot 5, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ335" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ335", Slot 5 successfully created. Sending label command for Volume "ACJ334" Slot 6 ... 3307 Issuing autochanger "unload slot 5, drive 0" command. 3304 Issuing autochanger "load slot 6, drive 0" command. 3305 Autochanger "load slot 6, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ334" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ334", Slot 6 successfully created. Sending label command for Volume "ACJ333" Slot 7 ... 3307 Issuing autochanger "unload slot 6, drive 0" command. 3304 Issuing autochanger "load slot 7, drive 0" command. 3305 Autochanger "load slot 7, drive 0", status is OK. block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error. 3000 OK label. 
VolBytes=64512 DVD=0 Volume="ACJ333" Device="ULTRIUM-HH4" (/dev/st0) Catalog record for Volume "ACJ333", Slot 7 successfully created. Sending label command for Volume "ACJ330" Slot 8 ... 3307 Issuing autochanger "unload slot 7, drive 0" command.

    Bacula-dir:

        # Definition of file storage device
        Storage {
          Name = TapeDevice        # Do not use "localhost" here
          Address = ny-back01....  # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "..."
          Device = ULTRIUM-HH4
          Media Type = LTO-4
          Media Type = File
          Autochanger = Yes
        }

    Bacula-sd:

        Autochanger {
          Name = StorageLoader1U
          Device = ULTRIUM-HH4
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg5
        }

        Device {
          Name = ULTRIUM-HH4
          Media Type = LTO-4
          Archive Device = /dev/st0
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RemovableMedia = yes;
          RandomAccess = no;
          AutoChanger = yes;
          RandomAccess = no;
        }

    Does anyone know what this means / why I am getting this?
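    One way to narrow down whether the Input/output error comes from the drive itself rather than from the Bacula configuration is to exercise the device with the standalone tools; a hedged troubleshooting sketch (device paths are the ones from the post, the storage daemon config path is an assumption):

        # check drive status and try a rewind outside of Bacula
        mt -f /dev/st0 status
        mt -f /dev/st0 rewind

        # drive the changer directly through the same device the SD uses
        mtx -f /dev/sg5 status
        mtx -f /dev/sg5 load 1 0

        # let Bacula's own btape utility run its read/write tests against the device
        # (assumes the SD config lives at /etc/bacula/bacula-sd.conf)
        btape -c /etc/bacula/bacula-sd.conf /dev/st0
        # then at the btape prompt, run: test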

    Read the article

  • CompositionHost.Initialize() cannot be executed twice

    - by Khoa
    I am currently trying to integrate MEF and PRISM to work with each other. Everything is working fine so far. Now I would like to use MEF runtime module discovery (DeploymentCatalog), which will be used to download XAPs from a server directory and then plug them into one of the Regions inside my main UI. I am using UnityBootStrapper, and inside this class I also integrated the MEF container. This sample application is based on Glenn Block's post (http://codebetter.com/glennblock/2010/01/03/mef-and-prism-exploration-mef-module-loading/). The following code is used to initialize the CompositionContainer inside my Bootstrapper:

        // This is the host catalog which contains all parts of running assembly.
        var catalog = GetHostCatalog();

        // Create MEF container which initial catalog
        var container = new CompositionContainer(catalog);

        // here we explicitly map a part to make it available on imports elsewhere, using
        // Unity to resolve the export so dependencies are resolved
        // We do this because region manager is third-party ... therefore, we need to
        // export explicitly because the implementation doesn't have its own [export] tag
        container.ComposeExportedValue<IRegionManager>(Container.Resolve<IRegionManager>());
        container.ComposeExportedValue<IEventAggregator>(Container.Resolve<IEventAggregator>());

        // Obtain CatalogService as a singleton
        // All dynamic modules will use this service to add its parts.
        Container.RegisterInstance<ICatalogService>(new CatalogService(catalog));

        // Initialize the container
        CompositionHost.Initialize(container);

    Now I have another class called DeploymentCatalogService which is used to download the XAP from the server. The current problem I am facing is that inside the DeploymentCatalogService Initialize method, CompositionHost tries to initialize its container with an aggregateCatalog again:

        _aggregateCatalog = new AggregateCatalog();
        _aggregateCatalog.Catalogs.Add(new DeploymentCatalog());
        CompositionHost.Initialize(_aggregateCatalog);

    This causes an exception stating that the container has already been initialized. Is there a way to use the existing container and update it with the new aggregateCatalog? Hope this is not too confusing. Please be nice, I am still new to MEF. Cheers,
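    One pattern that avoids the second Initialize call (a sketch, not a verified fix): hand the container a single AggregateCatalog at bootstrap time and let DeploymentCatalogService add its DeploymentCatalogs to that same AggregateCatalog later; recomposition then picks up the new parts without touching CompositionHost again. Names such as GetHostCatalog and Container (Unity) are taken from the code above; xapUri is hypothetical.

        // Bootstrapper: initialize CompositionHost exactly once, around an AggregateCatalog.
        var aggregateCatalog = new AggregateCatalog();
        aggregateCatalog.Catalogs.Add(GetHostCatalog());   // parts of the running assembly

        var container = new CompositionContainer(aggregateCatalog);
        container.ComposeExportedValue<IRegionManager>(Container.Resolve<IRegionManager>());
        container.ComposeExportedValue<IEventAggregator>(Container.Resolve<IEventAggregator>());
        CompositionHost.Initialize(container);

        // Keep the aggregate catalog reachable (e.g. behind ICatalogService) so that
        // DeploymentCatalogService can later do something along these lines:
        //
        //     var deploymentCatalog = new DeploymentCatalog(xapUri);  // xapUri: URI of the XAP on the server
        //     aggregateCatalog.Catalogs.Add(deploymentCatalog);       // no second Initialize needed
        //     deploymentCatalog.DownloadAsync();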

    Read the article

  • Accommodating Multiple DLL Versions

    - by shadeseeker
    I have an application that uses a Microsoft DLL (Microsoft.ComponentStudio.ComponentPlatformImplementation.dll) which is used for OS deployment and accessing the catalog files. Version 6.0.0.0 is specific to the Windows Server 2008 catalog files. The newer version 6.1.0.0 is specific to Windows Server 2008 R2 catalog files. Attempting to access a catalog file with the incorrect version results in an exception. My application (VB.NET using VS2005) needs to be able to access either version of these catalogs - I'd be happy with two executables (one for each catalog version), but obviously I don't want to maintain two sets of source code. Specifying both sets of DLLs in the project references is not possible as the DLL names are identical. I'd rather not have to manually add and remove the DLL references each time I want to do a build. As far as I know the interfaces, etc. are effectively identical between the two. I've read a few articles here and elsewhere about bindingRedirect, Assembly.Load, etc. but none seem to be bearing fruit. Any guidance on the best path to follow would be greatly appreciated. Thanks.
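    Purely as a reference for the bindingRedirect route mentioned above, a minimal app.config sketch (this assumes both DLL versions are strong-named and interface-compatible; the publicKeyToken is a placeholder, not the real token):

        <configuration>
          <runtime>
            <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <!-- redirect the version compiled against to the one present at runtime -->
                <assemblyIdentity name="Microsoft.ComponentStudio.ComponentPlatformImplementation"
                                  publicKeyToken="PUT_TOKEN_HERE" culture="neutral" />
                <bindingRedirect oldVersion="6.0.0.0" newVersion="6.1.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
        </configuration>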

    Read the article

  • C#: Storing Instances of Objects in a Hashtable

    - by DaGambit
    Hi, I tried filling a Hashtable in the following way:

        ResearchCourse resCourse = new ResearchCourse(); // Class Instance
        resCourse.CID = "RC1000";
        resCourse.CName = "Rocket Science";

        TaughtCourse tauCourse = new TaughtCourse(); // Class Instance
        tauCourse.CID = "TC1000";
        tauCourse.CName = "Marketing";

        Hashtable catalog = new Hashtable();
        catalog.Add("1", "resCourse.CID");
        catalog.Add("2", "tauCourse.CID");

        foreach (DictionaryEntry de in catalog)
        {
            Console.WriteLine("{0}, {1}", de.Key, de.Value);
        }

    Output result to console was:

        1, resCourse.CID
        2, tauCourse.CID

    Expected result:

        1, RC1000
        2, TC1000

    What am I misunderstanding about Hashtables? What is an easy way for the Hashtable to store the class instance and its values?
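    A short sketch reusing the names above: the Add calls store the literal strings "resCourse.CID" / "tauCourse.CID", not the property values, so pass the value (or the whole instance) instead:

        Hashtable catalog = new Hashtable();
        catalog.Add("1", resCourse.CID);   // stores the value "RC1000"
        catalog.Add("2", tauCourse);       // stores the TaughtCourse instance itself

        foreach (DictionaryEntry de in catalog)
        {
            // Cast back when the value is a stored instance to reach its members.
            TaughtCourse course = de.Value as TaughtCourse;
            Console.WriteLine("{0}, {1}", de.Key, course != null ? course.CID : de.Value);
        }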

    Read the article

  • Problem with index server talking to remote server names with dashes or dots in them

    - by Aim Kai
    Hi, I am having a problem accessing a remote Index Server catalog. The name of the server has a dash in it, so I put the index catalog name as, e.g., num.num.num.num\name of catalog or an-example-server. I get the following error when using an OLE DB data connection to pull results from the index: "Format of the initialization string does not conform to specification starting at index 39". I tried putting single quotes and &quot; around it with no luck - anyone have an idea? P.S. This is a Microsoft Index Server question!

    Read the article

  • Is it wrong for a context (right click) menu to be the only way a user can perform a certain task?

    - by Eric
    I'd like to know if it ever makes sense to provide some functionality in a piece of software that is only available to the user through a context (right click) menu. It seems that in most software I've worked with the right click menu is always used as a quick way to get to features that are otherwise available from other buttons or menus. Below is a screen shot of the UI I'm developing. The tree view on the right shows the user's library of catalogs. Users can create new catalogs, or add and remove existing catalogs to and from their library. Catalogs in their library can then be opened or closed, or set to read-only. The screen shot shows the context menu I've created for the browser. Some commands can be executed independently from any specific catalog (New, Add). Yet the other commands must be applied to a specifically selected catalog (Close, Open, Remove, ReadOnly, Refresh, Clean UP, Rename). Currently the "Catalog" menu at the top of the window looks identical to this context menu. Yet I think this may be confusing to the users as the tree view which shows the currently selected catalog may not always be visible. The user may have switched to the Search or Filters tab, or the left pane may be hidden entirely. However, I'm hesitant to change the UI so that the commands that depends on a specifically selected catalog are only available through the context menu.

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you could easily find them back, yet they can be left inside the complete entity model altogether so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views which are located in the same database. It then might be convenient to have a single project to manage the complete target database, yet have the entity models separate of each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks. Situation one: grouping entities in a single model. This situation is common for entity models which are dense, so many relationships exist between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped together into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has for each sub-group a schema, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism which is what this situation is all about: we group the elements for visual purposes, it has no real meaning for the model nor the code generated. Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the target schema an element is in which is being reverse engineered, as the group it will be in. This is convenient if you already have categorized tables/views in schemas, like which is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements which relational model data should be retrieved, which we can later on use to reverse engineer to an entity model. The first step after specifying which database server connect to is to select these elements. below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them. After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. We select in the catalog explorer the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu, we select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top to mark them all to be added to the project. 
As you can see LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen so we'll simply click Add to Project. This gives the following result:   (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped. Just to see how dense this model is, I've expanded the relationships of Employee: As you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base, however it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models and for each model a separate code base. Situation two: grouping entities in separate models within the same project. To get rid of the entities to see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To switch LLBLGen Pro to see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will see it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases. To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu: Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected the code generator has simply generated two code bases, one for Person and one for Production: The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class. //... using System.Xml.Serialization; using AdventureWorks.Person; using AdventureWorks.Person.HelperClasses; using AdventureWorks.Person.FactoryClasses; using AdventureWorks.Person.RelationClasses; using SD.LLBLGen.Pro.ORMSupportClasses; namespace AdventureWorks.Person.EntityClasses { //... /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary> [Serializable] public partial class AddressEntity : CommonEntityBase //... 
The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data. Conclusion LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Programming MVC2 is out with code

    The sample code for my latest book, Programming ASP.NET MVC (covers version 2 and 2010), is available via the book's catalog page at the Microsoft Press site run by O'Reilly. Click the Examples link to get to it: http://oreilly.com/catalog/9780735627147/...

    Read the article

  • Hot off the press : Latest Release of Oracle Enterprise Manager 12c (R4)

    - by Pankaj
    Read more here about the press release: Oracle Delivers Latest Release of Oracle Enterprise Manager 12c: Richer Service Catalog for Database and Middleware as a Service; Enhanced Database and Middleware Management Help Drive Enterprise-Scale Private Cloud Adoption. In the coming weeks, I will be covering the latest topics like:

    - DbaaS Service Catalog incorporating High Availability and Disaster Recovery
    - New Rapid Start kit
    - Other new features

    Stay tuned!

    Read the article

  • Is TortoiseSVN really this buggy?

    - by John Isaacks
    I have been using TortoiseSVN for a couple of weeks now. I get errors very often; almost everything I do creates an error. This is with repositories on the internet, locally on my machine, or on a machine on the network. So I started to keep track. Some examples are below.

        12/31/2010 Can't move 'C:\Users\jisaacks\Desktop\my branch test.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\my branch test.svn\entries': The file or directory is corrupted and unreadable.
        01/04/2011 Commit failed (details follow): Server sent unexpected return value (405 Method Not Allowed) in response to MKCOL request for '/svn/kranichs-svn/!svn/wrk/b316f15e-0869-4644-9c53-87aa0103506b/branches'
        01/06/2011 Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\vendors.svn\entries': The file or directory is corrupted and unreadable.
        01/06/2011 Can't move 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts.svn\tmp\entries' to 'C:\Users\jisaacks\Desktop\DVD Catalog\cake\tests\test_app\views\layouts.svn\entries': The file or directory is corrupted and unreadable.
        01/06/2011 Commit failed (details follow): attempt to write a readonly database attempt to write a readonly database

    That last one about the read-only database happens every time I commit. Say I am working on the head revision (7) in a working copy. I make a change and commit it. It gives me this error. But if I look at the log, it tells me that there is now a revision 8 (the commit I just made) but I am still on revision 7. So I need to run update to be on the current revision that I just committed. I hope I explained that clearly. Anyway, with all these errors I wonder: is TSVN just this unstable? Does everyone have these issues, or is it just me? If it's just me, what could I be doing wrong?

    Read the article

  • Full-text search locks up database - error 0x8001010e

    - by Stewart May
    Hi, we have a full-text catalog that is populated via a job every 15 minutes like so:

        ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION

    We have encountered a problem where the database containing this catalog locks up. There are a couple of scenarios: we either see the job execute and the process hang with a wait type of UNKNOWN TOKEN, or we see another process hang with a wait type of MSSEARCH. Once this happens the job continues to run but informs us that the request to start a full-text index population is ignored because a population is currently active. Looking in the full-text log files we see the following error each time these problems occur:

        2010-04-21 08:15:00.76 spid21s The full-text catalog health monitor reported a failure for full-text catalog "XXXFullTextCatalog" (5) in database "YYY" (14). Reason code: 0. Error: 0x8001010e(The application called an interface that was marshalled for a different thread.). The system will restart any in-progress population from the previous checkpoint. If this message occurs frequently, consult SQL Server Books Online for troubleshooting assistance. This is an informational message only. No user action is required.

    The only solution is to restart the SQL Server service and then the full-text service. This is now occurring on a daily basis, so any help would be appreciated.
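    Not from the post, but a couple of catalog-level checks that can be useful while the hang is happening, using the catalog name from the log above (a diagnostic sketch only):

        -- Population status of the catalog (0 = idle, 1 = full population in progress, ...)
        SELECT FULLTEXTCATALOGPROPERTY('XXXFullTextCatalog', 'PopulateStatus') AS PopulateStatus,
               FULLTEXTCATALOGPROPERTY('XXXFullTextCatalog', 'ItemCount')      AS ItemCount;

        -- Requests currently waiting on MSSEARCH, to see what the hung session is doing
        SELECT session_id, wait_type, wait_time, command
        FROM sys.dm_exec_requests
        WHERE wait_type = 'MSSEARCH';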

    Read the article

  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
    Hi clerks,

    Offloading the backup operation to another host using disk cloning could really improve the performance on highly busy databases (24x7, zero downtime and all this stuff...). There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies.

    Assumptions:

    ASM diskgroup names:
        +data_${db_name} : asm data diskgroup
        +fra_${db_name}  : asm fra diskgroup

    EMC TimeFinder sync group names:
        rac_${DB_NAME}_data_tf : data group
        rac_${DB_NAME}_fra_tf  : fra group

    There are two scripts, one located on the production box (bck_database.sh) and the other one on the backup server node (bck_database_mirror.sh). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names, I guess; anyway, let me know if you want some help.

    #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    NAME ###     bck_database.sh ### ###    DESCRIPTION ###     Database backup on third mirror ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###    Oracle            28/01/10             - Creacion ###   V_DATE=`/bin/date +%Y%m%d_%H%M%S` V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1     ADMIN_DIR=`dirname $0` . ${ADMIN_DIR}/setenv_instance.sh -- This script should set the instance vars like Oracle Home, Sid, db_name ... if [ $? -ne 0 ] then   echo "Error when setting the environment."   exit 1 fi   echo "${V_DATE} ####################################################" echo "Executing database backup: ${DB_NAME}" echo "####################################################################"   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm data diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt if [ $? -ne 0 ] then   echo "Error when sync asm data diskgroups"   exit 2 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm data disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm data diskgroups"   exit 3 fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm fra diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt if [ $?
-ne 0 ] then   echo "Error when sync asm fra diskgroups"   exit 4 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm fra disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm fra diskgroups"   exit 5 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "ASM sync sucessfully completed!" echo "####################################################################"     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to BEGIN BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter system archive log current;   alter database ${DB_NAME} begin backup; ! if [ $? -ne 0 ] then   echo "Error when updating database status to BEGIN backup"   exit 6 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm data disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm data disks"   exit 7 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to END BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter database ${DB_NAME} end backup;   alter system archive log current; ! if [ $? -ne 0 ] then   echo "Error when updating database status to END backup"   exit 8 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Generating controlfile copies...." echo "####################################################################" rman<<-! connect target / run { allocate channel ch1 type DISK; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl'; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl'; } ! if [ $? -ne 0 ] then   echo "Error generating controlfile copies"   exit 9 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Resync RMAN catalog ....." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} resync catalog; ! if [ $? -ne 0 ] then   echo "Error when resyncing RMAN catalog"   exit 10 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm fra disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm fra disks"   exit 11 fi     echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..." ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh if [ $? 
-ne 0 ] then   echo "Error, when remote executing the backup "   exit 12 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Cleaning the archived redo logs already copied to tape ..." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} run { resync catalog; delete noprompt archivelog all backed up 1 times to device type sbt; } ! if [ $? -ne 0 ] then   echo "Error when cleaning the archived redo logs"   exit 13 fi echo "${V_DATE} ####################################################" echo "Backup sucessfully executed!!" echo "####################################################################" exit 0   ------------------------------------------------------------------------------ ------------------------** BACKUP SERVER NODE ** ----------------------------- ------------------------------------------------------------------------------   #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    ###    NAME ###     bck_database_mirror.sh ### ###    DESCRIPTION ###      Backup @ backup server ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###      Oracle                    28/01/10     - Creacion         V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ASM instance ..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_asm.sh -- This script is supposed to start the ASM instance in the backup server   if [ $? -ne 0 ]   then     echo "Error when tying to start ASM instance."     exit 1   fi       . ${V_ADMIN_DIR}/setenv_asm.sh -- This script is supposed to set the env. variables of the ASM instance   if [ $? -ne 0 ]   then     echo "Error when setting the ASM environment"     exit 1   fi       V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "The asm diskgroups/disks dettected are the following ..."   echo "####################################################################"     sqlplus /nolog <<-!     whenever sqlerror exit 1     connect / as sysdba     whenever sqlerror exit     SET LINES 200     COL PATH FORMAT A25     SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER; !       V_ADMIN_DIR=`dirname $0`   . ${V_ADMIN_DIR}/setenv_instance.sh -- This script is supposed to set the env. variables of the database instance   if [ $? -ne 0 ]   then     echo "Error when setting the database instance environment"     exit 1   fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ${DB_NAME} in MOUNT mode..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_instance_mount.sh -- This script is supposed to do a startup mount   if [ $? -ne 0 ]   then     echo "Error starting  ${DB_NAME} in MOUNT mode"     exit 1   fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Executing RMAN backup..."   echo "####################################################################"   rman<<-!   
connect target /   connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}   run {   allocate channel ch1 type 'SBT_TAPE' parms'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; -- TDPO Media Library   crosscheck archivelog all;   backup tag BCK_CONTROLFILE_ST_${DB_NAME}   format 'ctl_%d_%s__%p_%t'   controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';   backup tag BCK_DATAFILE_ST_${DB_NAME} full   format 'db_%d_%s_%p_%t'database;   backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;   release channel ch1;   } !   if [ $? -ne 0 ]   then     echo "Error executing the RMAN backup"     exit 1   fi     ${V_ADMIN_DIR}/stop_instance_immediate.sh -- This script is supposed to do a shutdown immediate of the database instance   ${ADMIN_DIR}/stop_asm_immediate.sh -- This script is supposed to do a shutdown immediate of the ASM instance   exit 0     fi   Hope it helps someone! --L

    Read the article

  • Security in OBIEE 11g, Part 2

    - by Rob Reynolds
    Continuing the series on OBIEE 11g, our guest blogger this week is Pravin Janardanam. Here is Part 2 of his overview of Security in OBIEE 11g. OBIEE 11g Security Overview, Part 2 by Pravin Janardanam In my previous blog on Security, I discussed the OBIEE 11g changes regarding Authentication mechanism, RPD protection and encryption. This blog will include a discussion about OBIEE 11g Authorization and other Security aspects. Authorization: Authorization in 10g was achieved using a combination of Users, Groups and association of privileges and object permissions to users and Groups. Two keys changes to Authorization in OBIEE 11g are: Application Roles Policies / Permission Groups Application Roles are introduced in OBIEE 11g. An application role is specific to the application. They can be mapped to other application roles defined in the same application scope and also to enterprise users or groups, and they are used in authorization decisions. Application roles in 11g take the place of Groups in 10g within OBIEE application. In OBIEE 10g, any changes to corporate LDAP groups require a corresponding change to Groups and their permission assignment. In OBIEE 11g, Application roles provide insulation between permission definitions and corporate LDAP Groups. Permissions are defined at Application Role level and changes to LDAP groups just require a reassignment of the Group to the Application Roles. Permissions and privileges are assigned to Application Roles and users in OBIEE 11g compared to Groups and Users in 10g. The diagram below shows the relationship between users, groups and application roles. Note that the Groups shown in the diagram refer to LDAP Groups (WebLogic Groups by default) and not OBIEE application Groups. The following screenshot compares the permission windows from Admin tool in 10g vs 11g. Note that the Groups in the OBIEE 10g are replaced with Application Roles in OBIEE 11g. The same is applicable to OBIEE web catalog objects.    The default Application Roles available after OBIEE 11g installation are BIAdministrator, BISystem, BIConsumer and BIAuthor. Application policies are the authorization policies that an application relies upon for controlling access to its resources. An Application Role is defined by the Application Policy. The following screenshot shows the policies defined for BIAdministrator and BISystem Roles. Note that the permission for impersonation is granted to BISystem Role. In OBIEE 10g, the permission to manage repositories and Impersonation were assigned to “Administrators” group with no control to separate these permissions in the Administrators group. Hence user “Administrator” also had the permission to impersonate. In OBI11g, BIAdministrator does not have the permission to impersonate. This gives more flexibility to have multiple users perform different administrative functions. Application Roles, Policies, association of Policies to application roles and association of users and groups to application roles are managed using Fusion Middleware Enterprise Manager (FMW EM). They reside in the policy store, identified by the system-jazn-data.xml file. The screenshots below show where they are created and managed in FMW EM. The following screenshot shows the assignment of WebLogic Groups to Application Roles. The following screenshot shows the assignment of Permissions to Application Roles (Application Policies). Note: Object level permission association to Applications Roles resides in the RPD for repository objects. 
    Permissions and privileges for web catalog objects reside in the OBIEE Web Catalog. Wherever Groups were used in the web catalog and the RPD, they have been replaced with Application Roles in OBIEE 11g. Following are the tools used in OBIEE 11g security administration:

    - Users and Groups are managed in the Oracle WebLogic Administration Console (by default). If WebLogic is integrated with other LDAP products, then Users and Groups need to be managed using the interface provided by the respective LDAP vendor – new in OBIEE 11g.
    - Application Roles and Application Policies are managed in Oracle Enterprise Manager - Fusion Middleware Control – new in OBIEE 11g.
    - Repository object permissions are managed in the OBIEE Administration tool – same as 10g, but the assignment is to Application Roles instead of Groups.
    - Presentation Services Catalog permissions and privileges are managed in the OBI Application administration page – same as 10g, but the assignment is to Application Roles instead of Groups.

    Credential Store: the Credential Store is a single consolidated service provider to store and manage application credentials securely. The credential store contains credentials that are either user-supplied or system-generated. The credential store in OBIEE 10g is file based and is managed using the cryptotools utility. In 11g, the credential store can be managed directly from the FMW Enterprise Manager and is stored in the cwallet.sso file. By default, the Credential Store stores passwords for deployed RPDs, BI Publisher data sources and the BISystem user. In addition, the credential store can be LDAP based, but only Oracle Internet Directory is supported right now. As you can see, OBIEE security is integrated with the Oracle Fusion Middleware security architecture. This provides a common security framework for all components of Business Intelligence and Fusion Middleware applications.

    Read the article

  • Puppet master/agent basic setup

    - by lewap
    I'm trying to set up a basic Puppet agent/master use case with an agent server and a master. I've set up two servers with puppet and puppet master respectively. After the following setup on both servers:

        puppet master --no-daemonize --verbose
        puppet agent --test
        puppet cert --list    (to get the list)
        puppet cert --sign    (to sign it)
        puppet agent --test

    I get the message:

        err: Could not retrieve catalog from remote server: hostname was not match with the server certificate
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: hostname was not match with the server certificate

    What do I need to do in order to get the agent/master to be able to talk to each other?
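    For what it's worth, the usual cause of "hostname was not match with the server certificate" is that the agent contacts the master under a name that is not in the master's certificate. A sketch of the two common fixes (host names below are placeholders, not from the post):

        # /etc/puppet/puppet.conf on the agent: point 'server' at the exact
        # certname the master's certificate was issued for.
        [agent]
        server = puppetmaster.example.com

        # Or, on the master, regenerate its certificate with the extra names
        # the agents actually use, then re-sign the agents.
        [master]
        dns_alt_names = puppet, puppetmaster, puppetmaster.example.com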

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
          include template::common
          include template::puppetagent
          include template::lamp
          User::Create
          sudo::conf { 'joe':
            priority => 60,
            content  => 'joe ALL=(ALL) NOPASSWD: ALL',
            require  => User::Create['joe'],
          }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
          include myfirewall
          Firewall
          Firewall
          class { 'apache': }
          class { 'apache::mod::php': }
          class { 'apache::mod::ssl': }
          class { 'mysql::server': }
        }

    It looks like Server Fault markup is getting garbled on the Puppet realize statements; the User::Create and Firewall lines are just realizing a user and 2 firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and it is the same MD5 as on the server. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?

    Read the article

  • Does somebody know why the sectors of the IBM floppy disk are numbered 1 to 8 (and not 0 to 7)?

    - by Olivier Briand
    I am now programming on an 8-bit Z80 computer with CP/M 2.2 (as a hobby), and the floppy disk format is IBM: 40 tracks, 8 sectors per track, 512 bytes per sector. Free space is 154 KB on each face of the disk. Why are the sectors indexed 1 to 8 (and not zero to seven, as is usually seen with computers)? The catalog of the floppy disk is on track 1 (sectors 1 to 4, 64 entries). I'm wondering why the catalog is not on track zero. Is track zero reserved for a system (as tracks 0 & 1 are reserved for the system on a CP/M floppy disk, and the catalog is on track 2)?

    Read the article

  • How to copy child nodes to another xml document?

    - by Alex
    Below is my XML.

    XML1:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <CATALOG>
          <CD>
            <TITLE>1</TITLE>
            <ARTIST>Bob Dylan</ARTIST>
            <COUNTRY>USA</COUNTRY>
            <COMPANY>Columbia</COMPANY>
            <PRICE>10.90</PRICE>
            <YEAR>1985</YEAR>
          </CD>
          <CD>
            <TITLE>2</TITLE>
            <ARTIST>Bonnie Tyler</ARTIST>
            <COUNTRY>UK</COUNTRY>
            <COMPANY>CBS Records</COMPANY>
            <PRICE>9.90</PRICE>
            <YEAR>1988</YEAR>
          </CD>
        </CATALOG>

    XML2:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <CATALOG>
          <CD>
            <TITLE>3</TITLE>
            <ARTIST>Dolly Parton</ARTIST>
            <COUNTRY>USA</COUNTRY>
            <COMPANY>RCA</COMPANY>
            <PRICE>9.90</PRICE>
            <YEAR>1982</YEAR>
          </CD>
        </CATALOG>

    I need output like this:

        <?xml version="1.0" encoding="ISO-8859-1" ?>
        <CATALOG>
          <CD>
            <TITLE>1</TITLE>
            <ARTIST>Bob Dylan</ARTIST>
            <COUNTRY>USA</COUNTRY>
            <COMPANY>Columbia</COMPANY>
            <PRICE>10.90</PRICE>
            <YEAR>1985</YEAR>
          </CD>
          <CD>
            <TITLE>2</TITLE>
            <ARTIST>Bonnie Tyler</ARTIST>
            <COUNTRY>UK</COUNTRY>
            <COMPANY>CBS Records</COMPANY>
            <PRICE>9.90</PRICE>
            <YEAR>1988</YEAR>
          </CD>
          <CD>
            <TITLE>3</TITLE>
            <ARTIST>Dolly Parton</ARTIST>
            <COUNTRY>USA</COUNTRY>
            <COMPANY>RCA</COMPANY>
            <PRICE>9.90</PRICE>
            <YEAR>1982</YEAR>
          </CD>
        </CATALOG>

    How do I write this in classic ASP?
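    A minimal classic ASP (VBScript/MSXML) sketch of one way to do it, assuming the two documents live in files named xml1.xml and xml2.xml (the file names are hypothetical); MSXML allows a cloned node from one document to be appended into another:

        <%
        Dim doc1, doc2, cd
        Set doc1 = Server.CreateObject("MSXML2.DOMDocument.6.0")
        Set doc2 = Server.CreateObject("MSXML2.DOMDocument.6.0")
        doc1.async = False
        doc2.async = False
        doc1.Load Server.MapPath("xml1.xml")
        doc2.Load Server.MapPath("xml2.xml")

        ' Append a copy of every <CD> from the second catalog to the first catalog's root.
        For Each cd In doc2.documentElement.selectNodes("CD")
            doc1.documentElement.appendChild cd.cloneNode(True)
        Next

        doc1.Save Server.MapPath("merged.xml")   ' or Response.Write doc1.xml
        %>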

    Read the article

  • The Open Data Protocol

    - by Bobby Diaz
    Well, day 2 of the MIX10 conference did not disappoint.  The keynote speakers introduced the preview release of IE9, which looks really cool and quick, and Visual Studio 2010 RC that is scheduled to RTM on April 12th.  It seemed to have a lot of improvements aimed at making developers more productive.  Here are the current links to these two offerings: Internet Explorer 9 – Platform Preview Visual Studio 2010 and .NET 4 – Release Candidate While both of these were interesting, the demos that really blew me away today centered around the work being done with The Open Data Protocol, or OData for short!  OData is a recommended standard being pushed by Microsoft that uses a REST based interface to interact with various types of data in a uniform manner.  Data producers then provide the data to consumer in either ATOM or JSON formats as requested by the client application. The OData SDK contains client and server libraries for many of the popular languages in use today, including .NET, Java, PHP, Objective C and JavaScript, so you consume or even produce your own OData services.  More information can be found using the following links: OData.org How to navigate an OData compliant service Query Functions (WCF Data Services) Netflix has made available one of the first live OData services by exposing their entire movie catalog.  You can browse and query using URLs similar to the following: http://odata.netflix.com/ http://odata.netflix.com/Catalog/Genres('Horror')/CatalogTitles http://odata.netflix.com/Catalog/CatalogTitles?$filter=startswith(Title/Regular,%20'Star%20Wars')&$orderby=Title/Regular So now I just need to find an excuse reason to start using OData in a real project! Enjoy!
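    As a tiny illustration of the protocol (not from the post), the Netflix feed above could be pulled with nothing more than an HTTP GET while the service was live; a hedged C# sketch using one of the query URLs already shown:

        using System;
        using System.Net;

        class ODataPeek
        {
            static void Main()
            {
                // Same query as above: Star Wars titles, ordered by title.
                var url = "http://odata.netflix.com/Catalog/CatalogTitles" +
                          "?$filter=startswith(Title/Regular,'Star Wars')&$orderby=Title/Regular";

                using (var client = new WebClient())
                {
                    // The response is ATOM by default; JSON can be requested instead
                    // where the service supports it.
                    Console.WriteLine(client.DownloadString(url));
                }
            }
        }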

    Read the article

  • Source-control your BI Publisher reports

    - by Dmitry Nefedkin
    Version control systems (VCS) like Subversion, Git and others have been widely adopted and have become the must-have tool in any software development project. Source artifacts are checked out, modified, checked in, and all the history of changes is tracked by the VCS. But what if the development tool stores the source/configuration artifacts not on your laptop's hard drive, but in some shared repository instead? Well, we definitely need a way to export/import our artifacts from/to this repository.

    The Oracle BI Publisher report development approach is based on such a shared repository model (catalog), and starting from BI Publisher 11.1.1.5 Oracle ships the Catalog Utility, which can be used to export/import the reports from the command line. To start using the BI Publisher Catalog Utility you should:

    - Go to the file system of the server where the BI Publisher binaries have been installed and locate the following file: <MW_HOME>/Oracle_BI1/clients/bipublisher/BIPCatalogUtil.zip
    - Copy the file to your local filesystem and unzip it. I will refer to this unzipped directory as <BIP_CLIENT_DIR> below.
    - If you do not want to pass the BI Publisher server URL, username and password during each invocation, modify the corresponding values inside <BIP_CLIENT_DIR>/config/xmlp-client-config.xml.
    - Open a terminal window and go to <BIP_CLIENT_DIR>/bin.
    - Make sure that the following environment variables are set: JAVA_HOME, ORACLE_HOME.
    - Now it's time to run the utility: if you are on Linux, just run BIPCatalogUtil.sh and pass the parameters according to the utility documentation. If you are on MS Windows, the bad news is that the command script for MS Windows is missing, and support.oracle.com note 1333726.1 says that a temporary solution is to "create a .cmd file by setting up a classpath and copying the same commands from the .sh script". The good news is that I've created this script already; please download it from GitHub.

    Hope you will find this utility useful during your day-to-day BI Publisher development.
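    A hypothetical Linux run-through of those steps (the paths are placeholders, and the actual export/import parameters are documented with the utility, so they are not guessed at here):

        # unpack the client that was copied from the server
        unzip BIPCatalogUtil.zip -d ~/bip-client
        export BIP_CLIENT_DIR=~/bip-client

        # the utility expects these to be set
        export JAVA_HOME=/usr/java/latest
        export ORACLE_HOME=/u01/app/oracle/middleware/Oracle_BI1

        # server URL / credentials can be pre-set in config/xmlp-client-config.xml
        cd "$BIP_CLIENT_DIR/bin"
        ./BIPCatalogUtil.sh   # pass export/import parameters per the documentation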

    Read the article

  • Service Catalogs for Database as a Service

    - by B R Clouse
    At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog.  The session was well-attended which would have surprised me several months ago when I started researching this topic.  At that time, I thought of service catalogs as something trivial which could be explained in a few simple slides.  But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects.  And when the room filled up, my new point of view was confirmed. In case you missed the session, or were able to attend but would like more details, I've posted a white paper that covers the topics from the session, and more.  We start with an overview of the components of a service catalog: And then look at several customer case studies of service catalogs for DBaaS.  Synthesizing those examples, we summarize the main options for defining the service categories and their levels.  We end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database Services. The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.

    Read the article

  • Magento My Account Layout XML Problem

    - by Remy
    Hi there, I'm having issues getting the customer.xml layout file to work properly for the customer's "my account" pages. The navigation links and the previously ordered items that are usually on the left hand side of the page won't show up on the page, but if I change the reference name to "content" in the xml file, it shows up (except it's obviously then on the right hand side). I've checked the template it's referencing (2columns-left.phtml), and the getChildHtml('left') is there in the correct position. The block that's causing the problem: <customer_account> <!-- Mage_Customer --> <reference name="root"> <action method="setTemplate"><template>page/2columns-left.phtml</template></action> </reference> <reference name="left"> <action method="unsetChild"><name>catalog.navigation.all</name></action> <action method="unsetChild"><name>callout.sendcard</name></action> <action method="unsetChild"><name>callout.specialorder</name></action> <block type="customer/account_navigation" name="customer_account_navigation" before="-" template="customer/account/navigation.phtml"> <action method="addLink" translate="label" module="customer"><name>account</name><path>customer/account/</path><label>Account Dashboard</label></action> <action method="addLink" translate="label" module="customer"><name>account_edit</name><path>customer/account/edit/</path><label>Account Information</label></action> <action method="addLink" translate="label" module="customer"><name>address_book</name><path>customer/address/</path><label>Address Book</label></action> </block> <block type="sales/reorder_sidebar" name="sale.reorder.sidebar" as="reorder" template="sales/reorder/sidebar.phtml"/> <remove name="tags_popular"/> </reference> </customer_account> This was basically copied straight over from another one of our sites where this works 100%. I've tried everything I can think of (changing the name of the reference in both the template and the layout xml, for example) to no avail. The templates that the layout is referencing are obviously working because they do show up when put into the "content" area. This installation of magento is version 1.3.1.1. I appreciate any advice you have to give me... *Update: I tried changing the reference to "global_messages", and it doesn't show there either. It only seems to work in the "content" section.* Update 2: These are the results of using the "showLayout=page" query string on the page when used with Alan Storm's very handy debugging module (which you'll find in his answer below). 
<?xml version="1.0"?> <layout><block type="page/html" name="root" output="toHtml" template="page/3columns.phtml"> <block type="page/html_head" name="head" as="head"> <action method="addJs"> <script>prototype/prototype.js</script> </action> <action method="addJs"> <script>prototype/validation.js</script> </action> <action method="addJs"> <script>paypoint/validation.js</script> </action> <action method="addJs"> <script>scriptaculous/builder.js</script> </action> <action method="addJs"> <script>scriptaculous/effects.js</script> </action> <action method="addJs"> <script>scriptaculous/dragdrop.js</script> </action> <action method="addJs"> <script>scriptaculous/controls.js</script> </action> <action method="addJs"> <script>scriptaculous/slider.js</script> </action> <action method="addJs"> <script>varien/js.js</script> </action> <action method="addJs"> <script>varien/form.js</script> </action> <action method="addJs"> <script>varien/menu.js</script> </action> <action method="addJs"> <script>mage/translate.js</script> </action> <action method="addJs"> <script>mage/cookies.js</script> </action> <action method="addCss"> <stylesheet>css/reset.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/boxes.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/clears.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/menu.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/calendar-blue.css</stylesheet> </action> <action method="addCss"> <stylesheet>css/styles.css</stylesheet> </action> <action method="addItem"> <type>skin_css</type> <name>css/iestyles.css</name> <params/> <if>IE</if> </action> <action method="addItem"> <type>skin_css</type> <name>css/ie7.css</name> <params/> <if>IE 7</if> </action> <action method="addItem"> <type>skin_css</type> <name>css/ie7minus.css</name> <params/> <if>lt IE 7</if> </action> <action method="addItem"> <type>js</type> <name>lib/ds-sleight.js</name> <params/> <if>lt IE 7</if> </action> <action method="addItem"> <type>js</type> <name>varien/iehover-fix.js</name> <params/> <if>lt IE 7</if> </action> <action method="addCss"> <stylesheet>css/print.css</stylesheet> <params>media="print"</params> </action> </block> <block type="page/html_header" name="header" as="header"> <block type="page/template_links" name="top.links" as="topLinks"/> <block type="page/switch" name="store_language" as="store_language" template="page/switch/languages.phtml"/> <block type="core/template" name="top.nav" template="page/html/top.nav.phtml"/> </block> <block type="core/messages" name="global_messages" as="global_messages"/> <block type="core/messages" name="messages" as="messages"/> <block type="core/text_list" name="content" as="content"/> <block type="core/text_list" name="right" as="right"/> <block type="page/html_footer" name="footer" as="footer" template="page/html/footer.phtml"/> <block type="core/text_list" name="before_body_end" as="before_body_end"/> </block> <block type="core/profiler" output="toHtml"/> <reference name="top.links"> <action method="addLink" translate="label title" module="customer"> <label>My Account</label> <url helper="customer/getAccountUrl"/> <title>My Account</title> <prepare/> <urlParams/> <position>10</position> </action> </reference> <reference name="root"> <action method="setTemplate"> <template>page/2columns-left.phtml</template> </action> </reference> <reference name="top.menu"> <block type="catalog/navigation" name="catalog.topnav" template="catalog/navigation/top.phtml"/> </reference> <reference 
name="footer_links"> <action method="addLink" translate="label title" module="catalog" ifconfig="catalog/seo/site_map"> <label>Site Map</label> <url helper="catalog/map/getCategoryUrl"/> <title>Site Map</title> </action> </reference> <reference name="footer_links"> <action method="addLink" translate="label title" module="catalogsearch" ifconfig="catalog/seo/search_terms"> <label>Search Terms</label> <url helper="catalogsearch/getSearchTermUrl"/> <title>Search Terms</title> </action> <action method="addLink" translate="label title" module="catalogsearch"> <label>Advanced Search</label> <url helper="catalogsearch/getAdvancedSearchUrl"/> <title>Advanced Search</title> </action> </reference> <reference name="top.links"> <block type="checkout/links" name="checkout_cart_link"> <action method="addCartLink"/> <action method="addCheckoutLink"/> </block> </reference> <reference name="footer"> <block type="cms/block" name="cms_footer_links" before="footer_links"> <action method="setBlockId"> <block_id>footer_links</block_id> </action> </block> </reference> <reference name="left"> <block type="tag/popular" name="tags_popular" template="tag/popular.phtm" ignore="1"> <action method="setTemplate"> <template>tag/popular.phtml</template> </action> </block> </reference> <reference name="left"> </reference> <reference name="before_body_end"> <block type="googleanalytics/ga" name="google_analytics" as="google_analytics"/> </reference> <reference name="footer_links"> <action method="addLink" translate="label title" module="contacts" ifconfig="contacts/contacts/enabled"> <label>Contact Us</label> <url>contact-us</url> <title>Contact Us</title> <prepare>true</prepare> </action> </reference> <reference name="footer_links"> <action method="addLink" translate="label title" module="rss" ifconfig="rss/config/active"> <label>RSS</label> <url>rss</url> <title>RSS testing</title> <prepare>true</prepare> <urlParams/> <position/> <li/> <a>class="link-feed"</a> </action> </reference> <reference name="wishlist_sidebar"> <action method="addPriceBlockType"> <type>bundle</type> <block>bundle/catalog_product_price</block> <template>bundle/catalog/product/price.phtml</template> </action> </reference> <reference name="cart_sidebar"> <action method="addItemRender"> <type>bundle</type> <block>bundle/checkout_cart_item_renderer</block> <template>checkout/cart/sidebar/default.phtml</template> </action> </reference> <reference name="root"> <action method="setTemplate"> <template>page/2columns-left.phtml</template> </action> </reference> <reference name="left"> <action method="unsetChild"> <name>catalog.navigation.all</name> </action> <action method="unsetChild"> <name>callout.sendcard</name> </action> <action method="unsetChild"> <name>callout.specialorder</name> </action> <block type="customer/account_navigation" name="customer_account_navigation" before="-" template="customer/account/navigation.phtml"> <action method="addLink" translate="label" module="customer"> <name>account</name> <path>customer/account/</path> <label>Account Dashboard</label> </action> <action method="addLink" translate="label" module="customer"> <name>account_edit</name> <path>customer/account/edit/</path> <label>Account Information</label> </action> <action method="addLink" translate="label" module="customer"> <name>address_book</name> <path>customer/address/</path> <label>Address Book</label> </action> </block> <block type="sales/reorder_sidebar" name="sale.reorder.sidebar" as="reorder" template="sales/reorder/sidebar.phtml"/> <remove name="tags_popular"/> 
</reference> <reference name="customer_account_navigation"> <action method="addLink" translate="label" module="sales"> <name>orders</name> <path>sales/order/history/</path> <label>My Orders</label> </action> </reference> <reference name="customer_account_navigation"> <action method="addLink" translate="label" module="tag"> <name>tags</name> <path>tag/customer/</path> <label>My Tags</label> </action> </reference> <reference name="customer_account_navigation"> <action method="addLink" translate="label" module="newsletter"> <name>newsletter</name> <path>newsletter/manage/</path> <label>Newsletter Subscriptions</label> </action> </reference> <reference name="cart_sidebar"> <action method="addItemRender"> <type>bundle</type> <block>bundle/checkout_cart_item_renderer</block> <template>checkout/cart/sidebar/default.phtml</template> </action> </reference> <update handle="customer_account"/> <reference name="content"> <block type="customer/account_dashboard" name="customer_account_dashboard" template="customer/account/dashboard.phtml"> <block type="customer/account_dashboard_hello" name="customer_account_dashboard_hello" as="hello" template="customer/account/dashboard/hello.phtml"/> <block type="core/template" name="customer_account_dashboard_top" as="top"/> <block type="customer/account_dashboard_info" name="customer_account_dashboard_info" as="info" template="customer/account/dashboard/info.phtml"/> <block type="customer/account_dashboard_newsletter" name="customer_account_dashboard_newsletter" as="newsletter" template="customer/account/dashboard/newsletter.phtml"/> <block type="customer/account_dashboard_address" name="customer_account_dashboard_address" as="address" template="customer/account/dashboard/address.phtml"/> <block type="core/template" name="customer_account_dashboard_info1" as="info1"/> <block type="core/template" name="customer_account_dashboard_info2" as="info2"/> </block> </reference> <reference name="right"> <action method="unsetChild"> <name>catalog_compare_sidebar</name> </action> </reference> <reference name="customer_account_dashboard"> <action method="unsetChild"> <name>top</name> </action> <block type="sales/order_recent" name="customer_account_dashboard_top" as="top" template="sales/order/recent.phtml"/> </reference> <reference name="right"> <action method="unsetChild"> <name>right.poll</name> </action> </reference> <reference name="customer_account_dashboard"> <action method="unsetChild"> <name>customer_account_dashboard_info2</name> </action> <block type="tag/customer_recent" name="customer_account_dashboard_info2" as="info2" template="tag/customer/recent.phtml"/> </reference> <reference name="right"> <action method="unsetChild"> <name>right.newsletter</name> </action> </reference> <reference name="top.links"> <action method="addLink" translate="label title" module="customer"> <label>Log Out</label> <url helper="customer/getLogoutUrl"/> <title>Log Out</title> <prepare/> <urlParams/> <position>100</position> </action> </reference></layout>
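    One thing stands out in that dump (an observation, not a confirmed diagnosis): the root "page/html" block declares "global_messages", "messages", "content" and "right" as children, but never declares a core/text_list block named "left". If the structural "left" block doesn't exist, every <reference name="left"> in customer.xml silently does nothing, which would explain why the same markup works when pointed at "content". In a stock Magento 1.3 theme that declaration lives in page.xml, inside the root block of the <default> handle; a minimal sketch of the line that appears to be missing (assuming the theme's page.xml otherwise follows the default layout, which is worth verifying):

        <!-- page.xml, <default> handle, inside the root "page/html" block -->
        <block type="core/text_list" name="left" as="left"/>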

    Read the article

  • working on lists in python

    - by owca
    I'm trying to make a small modification to the django-lfs project that will allow me to deactivate products with no stock. Unfortunately I'm just beginning to learn Python, so I have big trouble with its syntax. Here's what I'm trying to do: I'm using the method 'has_variants', which returns true if the product has any variants. Then I build a list of the variants for this product. Next, for every product in this list (I've called it 'set') I check its stock and set a bool variable 'inactive' to true if the product has no stock and to false if there is any. Finally, if 'inactive' is true, I set self.active to False. The code fails on the line with set[] = s. How do I correct it?

        def deactivate(self):
            """If there are no stocks, deactivate the product.
            Used in last step of checkout.
            """
            if self.has_variants():
                for s in self.variants.filter(active=True):
                    set[] = s
                for var in set:
                    if var.get_stock_amount() == 0:
                        inactive = True
                    else:
                        inactive = False
            else:
                if self.get_stock_amount() == 0:
                    inactive = True
            if inactive:
                self.active = False
            return 0

    error log:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/home/purplecow/rails/purpledev/site-packages/django/core/management/base.py", line 213, in execute
            translation.activate('en-us')
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/__init__.py", line 73, in activate
            return real_activate(language)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/__init__.py", line 43, in delayed_loader
            return g['real_%s' % caller](*args, **kwargs)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/trans_real.py", line 205, in activate
            _active[currentThread()] = translation(language)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/trans_real.py", line 194, in translation
            default_translation = _fetch(settings.LANGUAGE_CODE)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/translation/trans_real.py", line 180, in _fetch
            app = import_module(appname)
          File "/home/purplecow/rails/purpledev/site-packages/django/utils/importlib.py", line 35, in import_module
            __import__(name)
          File "/home/purplecow/rails/purpledev/lfs/caching/__init__.py", line 1, in <module>
            from listeners import *
          File "/home/purplecow/rails/purpledev/lfs/caching/listeners.py", line 10, in <module>
            from lfs.cart.models import Cart
          File "/home/purplecow/rails/purpledev/lfs/cart/models.py", line 8, in <module>
            from lfs.catalog.models import Product
          File "/home/purplecow/rails/purpledev/lfs/catalog/__init__.py", line 1, in <module>
            from listeners import *
          File "/home/purplecow/rails/purpledev/lfs/catalog/listeners.py", line 5, in <module>
            from lfs.catalog.models import PropertyGroup
          File "/home/purplecow/rails/purpledev/lfs/catalog/models.py", line 589
            set[] = s
                ^
        SyntaxError: invalid syntax
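    For what it's worth, a minimal sketch of how the loop could be written without the invalid set[] = s line, assuming the intent is to deactivate the product only when it (or every one of its active variants) is out of stock. The names has_variants, variants, get_stock_amount and active come from the question itself; the save() call is an assumption about how the change would be persisted:

        def deactivate(self):
            """Deactivate the product if neither it nor any active variant has stock."""
            if self.has_variants():
                variant_list = list(self.variants.filter(active=True))  # plain list of variants
                # inactive only when every active variant is out of stock
                inactive = all(v.get_stock_amount() == 0 for v in variant_list)
            else:
                inactive = self.get_stock_amount() == 0

            if inactive:
                self.active = False
                self.save()  # assumption: persist the flag like any other Django model change
            return 0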

    Read the article

  • In Netbeans, how do I avoid wsimport rebuilding web service clients every build?

    - by gustafc
    I'm on a project where we use NetBeans (6.8). We use several different web services, which we have added as web service references, and NetBeans auto-generates the Ant wsimport scripts for us. Very handy, with one drawback: the web service clients are recompiled every time Ant is invoked. This slows down the build process considerably and has caused the number of sword-related injuries, maimings and deaths to skyrocket. Normally, I'd fix this by changing the wsimport element from

        <wsimport sourcedestdir="${build.generated.dir}/jax-wsCache/PonyService"
                  destdir="${build.generated.dir}/jax-wsCache/PonyService"
                  wsdl="${wsdl-PonyService}"
                  catalog="catalog.xml"
                  verbose="true"/>

    to

        <wsimport sourcedestdir="${build.generated.dir}/jax-wsCache/PonyService"
                  destdir="${build.generated.dir}/jax-wsCache/PonyService"
                  wsdl="${wsdl-PonyService}"
                  catalog="catalog.xml"
                  verbose="true">
            <produces dir="${build.generated.dir}/jax-wsCache/PonyService"/>
        </wsimport>

    But I can't, because this part of the Ant script is auto-generated. If I right-click the PonyService web service reference and select Edit Web Service Attributes ⇒ wsimport options, I can add attributes to the wsimport element, but not child elements. So: how do I add the produces child element to wsimport other than by hacking the auto-generated Ant script? Or, more generally: how do I make the NetBeans-generated wsimport not recompile the web service clients every time I build?
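    One angle that may be worth exploring (a sketch only, not a confirmed NetBeans recipe): with Ant's <import>, a target defined in the importing build file overrides a same-named target from the imported file, and the project's own build.xml is what pulls in the generated scripts. The generated wsimport target could therefore be copied into build.xml and extended with the produces element there, leaving the generated files untouched. The target name and dependency wiring below are assumptions — the real ones have to be copied from the generated script (e.g. nbproject/jaxws-build.xml):

        <!-- In build.xml: a same-named target here wins over the imported, generated one.
             "wsimport-client-PonyService" and its depends/taskdef setup are assumptions;
             copy the actual target from the generated script and only add <produces>. -->
        <target name="wsimport-client-PonyService" depends="wsimport-init">
            <wsimport sourcedestdir="${build.generated.dir}/jax-wsCache/PonyService"
                      destdir="${build.generated.dir}/jax-wsCache/PonyService"
                      wsdl="${wsdl-PonyService}"
                      catalog="catalog.xml"
                      verbose="true">
                <!-- lets the task skip regeneration when the output tree is up to date -->
                <produces dir="${build.generated.dir}/jax-wsCache/PonyService"/>
            </wsimport>
        </target>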

    Read the article

  • Quick MEF + SL4 question

    - by Tom Allen
    I'm working on an app in the Silverlight 4 RC and I'm taking the opportunity to learn MEF for handling plugin controls. I've got it working in a pretty basic manner, but it's not exactly tidy and I know there is a better way of importing multiple XAPs. Essentially, in the App.xaml of my host app, I've got the following telling MEF to load my XAPs:

        AggregateCatalog catalog = new AggregateCatalog();
        DeploymentCatalog c1 = new DeploymentCatalog(new Uri("TestPlugInA.xap", UriKind.Relative));
        DeploymentCatalog c2 = new DeploymentCatalog(new Uri("TestPlugInB.xap", UriKind.Relative));
        catalog.Catalogs.Add(c1);
        catalog.Catalogs.Add(c2);
        CompositionHost.Initialize(catalog);
        c1.DownloadAsync();
        c2.DownloadAsync();

    I'm sure I'm not using the AggregateCatalog fully here, and I need to be able to load any XAPs that might be in the directory, not just hardcoded URIs, obviously... Also, in the MainPage.xaml.cs in the host I have the following collection, which MEF puts the plugins into:

        [ImportMany(AllowRecomposition = true)]
        public ObservableCollection<IPlugInApp> PlugIns { get; set; }

    Again, this works, but I'm pretty sure I'm using ImportMany incorrectly... Finally, the MainPage.xaml.cs file implements IPartImportsSatisfiedNotification, and I have the following for handling the plugins once they're loaded:

        public void OnImportsSatisfied()
        {
            sp.Children.Clear();
            foreach (IPlugInApp plugIn in PlugIns)
            {
                if (plugIn != null)
                    sp.Children.Add(plugIn.GetUserControl());
            }
        }

    This works, but it seems filthy that it runs n times (n being the number of XAPs to load). I'm having to call sp.Children.Clear() because if I don't, when loading the two plugins, my stack panel is populated as follows:

        TestPlugIn A
        TestPlugIn A
        TestPlugIn B

    I'm clearly missing something here. Can anyone point out what I should be doing? Thanks!
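    A small sketch of one way to avoid the hardcoded catalog pairs, using only the calls already shown in the snippets above; the list of xap names is a stand-in for however the host would discover its plugins (a config file, an initParam, a service call), which the sketch does not cover:

        // using System.ComponentModel.Composition.Hosting;  (AggregateCatalog, DeploymentCatalog, CompositionHost)
        // using System.Collections.Generic;
        var catalog = new AggregateCatalog();
        var deploymentCatalogs = new List<DeploymentCatalog>();

        // Stand-in list: in practice this would be discovered rather than hardcoded.
        foreach (var xapName in new[] { "TestPlugInA.xap", "TestPlugInB.xap" })
        {
            var dc = new DeploymentCatalog(new Uri(xapName, UriKind.Relative));
            deploymentCatalogs.Add(dc);
            catalog.Catalogs.Add(dc);   // register before initialising composition
        }

        CompositionHost.Initialize(catalog);

        // Start the downloads last, mirroring the ordering in the original snippet;
        // recomposition fires as each xap arrives, which is why OnImportsSatisfied runs n times.
        foreach (var dc in deploymentCatalogs)
        {
            dc.DownloadAsync();
        }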

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >