Search Results

Search found 9371 results on 375 pages for 'existing'.

Page 62/375 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • The case of the phantom ADF developer (and other yarns)

    - by Chris Muir
    A few years of ADF experience means I see common mistakes made by different developers, some I regularly make myself. This post is designed to help beginners to Oracle JDeveloper Application Development Framework (ADF) avoid a common ADF pitfall, the case of the phantom ADF developer [add Scooby-Doo music here].

ADF Business Components - triggers, default table values and INSTEAD OF views

Oracle's JDeveloper tutorials help with the A-B-Cs of ADF development, typically built on the nice 'n safe demo schemas provided with the Oracle database, such as the HR demo schema. However it's not too long until ADF beginners, having built up some confidence from learning with the tutorials and vanilla demo schemas, start building ADF Business Components based upon their own existing database schema objects. This is where unexpected problems can sneak in.

The crime

Developers may encounter a surprising error at runtime when editing a record they just created or updated and committed to the database, based on their own existing tables, namely the error:

JBO-25014: Another user has changed the row with primary key oracle.jbo.Key[x]

...where x is the primary key value of the row at hand. In a production environment with multiple users this error may be legitimate: one of the other users has updated the row since you queried it. Yet in a development environment this error is just plain confusing. If developers are isolated in their own database, creating and editing records they know other users can't possibly be working with, or all the other developers have gone home for the day, how is this error possible? There are no other users. It must be the phantom ADF developer! [insert dramatic music here]

The following picture is what you'll see in the Business Component Browser, and you'll receive a similar error message via an ADF Faces page:

A false conclusion

What can possibly cause this issue if it isn't our phantom ADF developer? Doesn't ADF BC implement record locking, locking database records when the row is modified in the ADF middle-tier by a user? How could our phantom ADF developer even take out a lock if this is the case? Maybe ADF has a bug, maybe ADF isn't implementing record locking at all? Shouldn't we see the error "JBO-26030: Failed to lock the record, another user holds the lock" as we attempt to modify the record? Why do we see JBO-25014?

Let's verify that ADF is in fact issuing the correct SQL LOCK-FOR-UPDATE statement to the database. First we need to verify ADF's locking strategy. It is determined by the Application Module's jbo.locking.mode property. The default (as of JDev 11.1.1.4.0, if memory serves me correctly) and recommended value is optimistic, and the other valid value is pessimistic.

Next we need a mechanism to check that ADF is issuing the LOCK statements to the database. We could ask the DBAs to monitor locks with OEM, but optimally we'd rather not involve overworked DBAs in this process, so instead we can use the ADF runtime setting -Djbo.debugoutput=console. At runtime this option turns on instrumentation within the ADF BC layer, which, among a lot of extra detail displayed in the log window, will show the actual SQL statements issued to the database, including the LOCK statement we're looking to confirm.
Setting our locking mode to pessimistic, opening the Business Components Browser or a JSF page allowing us to edit a record, say the CHARGEABLE field within a BOOKINGS record where BOOKING_NO = 1206, upon editing the record we see, among others, the following log entries:

[421] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[422] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[423] Where binding param 1: 1206

As can be seen on line 422, a LOCK-FOR-UPDATE is indeed issued to the database. Later when we commit the record we see:

[441] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[442] OracleSQLBuilder Executing, Lock 1 DML on: BOOKINGS (Update)
[443] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[444] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[445] Update binding param 1: N
[446] Where binding param 2: 1206
[447] BookingsView1 notify COMMIT ...
[448] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[449] EntityCache close prepared statement

...and as a result the changes are saved to the database, and the lock is released.

Let's see what happens when we use the optimistic locking mode, this time to change a BOOKINGS record's CHARGEABLE column again. As soon as we edit the record we see little activity in the logs, nothing to indicate any SQL statement, let alone a LOCK, has been taken out on the row. However when we save our record by issuing a commit, the following is recorded in the logs:

[509] OracleSQLBuilder: SAVEPOINT 'BO_SP'
[510] OracleSQLBuilder Executing doEntitySelect on: BOOKINGS (true)
[511] Built select: 'SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings'
[512] Executing LOCK...SELECT BOOKING_NO, EVENT_NO, RESOURCE_CODE, CHARGEABLE, MADE_BY, QUANTITY, COST, STATUS, COMMENTS FROM BOOKINGS Bookings WHERE BOOKING_NO=:1 FOR UPDATE NOWAIT
[513] Where binding param 1: 1205
[514] OracleSQLBuilder Executing, Lock 2 DML on: BOOKINGS (Update)
[515] UPDATE buf Bookings>#u SQLStmtBufLen: 210, actual=62
[516] UPDATE BOOKINGS Bookings SET CHARGEABLE=:1 WHERE BOOKING_NO=:2
[517] Update binding param 1: Y
[518] Where binding param 2: 1205
[519] BookingsView1 notify COMMIT ...
[520] _LOCAL_VIEW_USAGE_model_Bookings_ResourceTypesView1 notify COMMIT ...
[521] EntityCache close prepared statement

Again, even though we're seeing the mid-tier delay the LOCK statement until commit time, it is in fact occurring on line 512, and released as part of the commit issued on line 519. Therefore with either optimistic or pessimistic locking a lock is indeed issued.

Our conclusion at this point must be that, unless there's the unlikely case that the LOCK statement is never really hitting the database, or the even less likely case that the database has a bug, ADF does in fact take out a lock on the record before allowing the current user to update it. So there's no way our phantom ADF developer could even modify the record if he tried without at least someone receiving a lock error. Hmm, we can only conclude the locking mode is a red herring and not the true cause of our problem.

Who is the phantom?

At this point we'll need to conclude that the error message "JBO-25014: Another user has changed" is somehow legitimate, even though we don't understand yet what's causing it.
This leads onto two further questions: how does ADF know another user has changed the row, and what's been changed anyway?

To answer the first question, the Fusion Guide's section 4.10.11, "How to Protect Against Losing Simultaneous Updated Data", which details the Entity Object Change-Indicator property, gives us the answer: "At runtime the framework provides automatic "lost update" detection for entity objects to ensure that a user cannot unknowingly modify data that another user has updated and committed in the meantime. Typically, this check is performed by comparing the original values of each persistent entity attribute against the corresponding current column values in the database at the time the underlying row is locked. Before updating a row, the entity object verifies that the row to be updated is still consistent with the current state of the database."

The guide further suggests how to make this solution more efficient: "You can make the lost update detection more efficient by identifying any attributes of your entity whose values you know will be updated whenever the entity is modified. Typical candidates include a version number column or an updated date column in the row... To detect whether the row has been modified since the user queried it in the most efficient way, select the Change Indicator option to compare only the change-indicator attribute values."

We now know that ADF BC doesn't use the locking mechanism at all to protect the current user against updates, but rather it keeps a copy of the original record fetched, separate from the user-changed version of the record, and it compares the original record against the one in the database when the lock is taken out. If the values don't match, be it via the default compare-all-columns behaviour or the more efficient Change Indicator mechanism, ADF BC will throw the JBO-25014 error.

This leaves one last question. Now we know the mechanism under which ADF identifies a changed row, but what we don't know is what's changed and who changed it.

The real culprit

What's changed? We know the record in the mid-tier has been changed by the user, however ADF doesn't use the changed record in the mid-tier to compare to the database record, but rather a copy of the original record before it was changed. This leaves us to conclude the database record has changed, but how and by whom? There are three potential causes.

Database triggers

A database trigger, among other uses, can be configured to fire PL/SQL code on a database table insert, update or delete. In particular, on an insert or update the trigger can override the value assigned to a particular column. The trigger execution is actioned by the database on behalf of the user initiating the insert or update action. The reason this causes our ADF-specific issue is that when we insert or update a record in the database via ADF, ADF keeps a copy of the record written to the database. However the cached record is instantly out of date, as the database triggers have modified the record that was actually written to the database. Thus when we update the record we just inserted or updated for a second time, ADF compares its original copy of the record to that in the database, and it detects the record has been changed, giving us JBO-25014. This is probably the most common cause of this problem.

Default values

A second reason this issue can occur is another database feature: default column values.
When creating a database table the schema designer can define default values for specific columns. For example a CREATED_DATE column could default to SYSDATE, or a flag column to Y or N. Default values are only used by the database when a user inserts a new record and no value is supplied for the specific column; in this case the database will populate the column with the default value. As per the database trigger section, it then becomes apparent why ADF chokes on this feature, though it can only specifically occur in an insert-commit-update-commit scenario, not the update-commit-update-commit scenario.

INSTEAD OF trigger views

I must admit I haven't double-checked this scenario but it seems plausible: the Oracle database's INSTEAD OF trigger view (sometimes simply referred to as an instead-of view). A view in the database is based on a query and, depending on the query's complexity, may support insert, update and delete functionality to a limited degree. In order to support fully insertable, updatable and deletable views, Oracle introduced the INSTEAD OF trigger, which gives the view designer the ability to define not only the view query, but also a set of programmatic PL/SQL triggers where the developer can define their own logic for inserts, updates and deletes. While this provides the database programmer a very powerful feature, it can cause issues for our ADF application. On inserting or updating a record in the instead-of view, the data that goes in is not necessarily the data that comes out when ADF compares the records, as the view developer has the option to do practically anything with the incoming data, including throwing it away or pushing it to tables which aren't used by the view's underlying query for fetching the data.

Readers are at this point reminded that this article is specifically about how the JBO-25014 error occurs in the context of one developer on an isolated database. The article is not considering how the error occurs in a production environment where there are multiple users who can cause this error in a legitimate fashion. Assuming none of the above features are the cause of the problem, and optimistic locking is turned on (this error is not possible if pessimistic locking is the default mode *and* none of the previous causes are possible), JBO-25014 is quite feasible in a production ADF application if two users modify the same record.

At this point, under project timeline pressure, the obvious fix for developers is to drop both the database triggers and the default values from the underlying tables. However we must be careful that these legacy constructs aren't used and assumed to be in place by other legacy systems. Dropping the database triggers or default values that an existing Oracle Forms application assumes and requires to be in place could cause unexpected behaviour and bugs in the Forms application. Proficient software engineers would recognize such a change may require a partial or full regression test of the existing legacy system, a potentially costly and time-consuming exercise, which is not ideal.

Solving the mystery once and for all

Luckily ADF has built-in functionality to deal with this issue, which is not a surprise, as Oracle, the author of ADF, also built the database and is fully aware of the Oracle database's feature set: at the Entity Object attribute level, the Refresh After Insert and Refresh After Update properties.
Simply selecting these instructs ADF BC, after inserting or updating a record in the database, to expect the database to modify the said attributes, and to read a copy of the changed attributes back into its cached mid-tier record. Thus the next time the developer modifies the current record, the comparison between the mid-tier record and the database record matches, and "JBO-25014: Another user has changed" is no longer an issue. [Post edit: as Oracle's Steven Davelaar correctly points out in the comment below, the above solution will not work for INSTEAD OF trigger views, as it relies on the SQL RETURNING clause, which is incompatible with this type of view.] Alternatively you can set the Change Indicator on one of the attributes. This will work as long as the corresponding column in the database itself isn't inadvertently updated. In turn you're possibly just masking the issue rather than solving it, because if another developer later turns the Change Indicator off, the original issue will return.
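To make the two most common culprits concrete, here is a minimal, hedged PL/SQL sketch against the article's BOOKINGS example table; the column and trigger names are illustrative assumptions, not taken from the article's schema:

    -- A default value the database applies when the insert supplies no value for the column
    ALTER TABLE bookings ADD (created_date DATE DEFAULT SYSDATE, last_updated DATE);

    -- A trigger that silently overrides a column value on every insert or update,
    -- so the row in the database no longer matches the copy ADF BC cached when it
    -- issued its own INSERT or UPDATE
    CREATE OR REPLACE TRIGGER bookings_biu
    BEFORE INSERT OR UPDATE ON bookings
    FOR EACH ROW
    BEGIN
      :NEW.last_updated := SYSDATE;
    END;
    /

On the next edit-and-commit of the same row, ADF's comparison of its cached original against the database sees the changed LAST_UPDATED value and raises JBO-25014, unless Refresh After Insert/Update or a Change Indicator attribute is in place as described above.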

    Read the article

  • Oracle Solaris: Zones on Shared Storage

    - by Jeff Victor
    Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list. One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS.

ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to Fibre Channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past.

With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! What could I use, instead? [There he goes, foreshadowing again... -Ed.] Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes), etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system.

Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.

Why Migrate?

Why is the migration of virtual servers important? Some of the most common reasons are:
- Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance.
- Moving a workload to a larger system because the workload has outgrown its original system.
- If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot.
- You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system.

Concepts

For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool:
- contains the zone's Solaris content, i.e. the root file system
- does not contain any content not related to the zone
- can only be mounted by one Solaris instance at a time

Method Overview

Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!")
- Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.)
- Install the zone on one system, onto shared storage.
- Boot the zone.
- Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.)
- Shutdown the zone.
- Detach the zone from the original system.
- Attach the zone to its new "home" system.
- Boot the zone.
The zone can be used normally, and even migrated back, or to a different system.

Details

The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB". Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone.

The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.
- Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions.
- Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk.
- Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions.
- To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1." Assumptions In the example shown below, I make these assumptions. The guest that will own the zone at the beginning is named sysA. The guest that will own the zone after the first migration is named sysB. On sysA, the shared disk is named /dev/dsk/c7t2d0 On sysB, the shared disk is named /dev/dsk/c7t3d0 (Finally!) The Steps Step 1) Determine the name of the disk that will move back and forth between the systems. root@sysA:~# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c7t0d0 /pci@0,0/pci8086,2829@d/disk@0,0 1. c7t2d0 /pci@0,0/pci8086,2829@d/disk@2,0 Specify disk (enter its number): ^D Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated. root@sysA:~# format -e c7t2d0 selecting c7t2d0 [disk formatted] FORMAT MENU: ... format fdisk No fdisk table exists. The default partition for the disk is: a 100% "SOLARIS System" partition Type "y" to accept the default partition, otherwise type "n" to edit the partition table. n SELECT ONE OF THE FOLLOWING: ... Enter Selection: 1 ... G=EFI_SYS 0=Exit? f SELECT ONE... ... 6 format label ... Specify Label type[1]: 1 Ready to label disk, continue? y format quit root@sysA:~# ls /dev/dsk/c7t2d0 /dev/dsk/c7t2d0 Step 3) Configure zone1 on sysA. root@sysA:~# zonecfg -z zone1 Use 'create' to begin configuring a new zone. zonecfg:zone1 create create: Using system default template 'SYSdefault' zonecfg:zone1 set zonename=zone1 zonecfg:zone1 set zonepath=/zones/zone1 zonecfg:zone1 add rootzpool zonecfg:zone1:rootzpool add storage dev:dsk/c7t2d0 zonecfg:zone1:rootzpool end zonecfg:zone1 exit root@sysA:~# oot@sysA:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: ... rootzpool: storage: dev:dsk/c7t2d0 Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...) root@sysA:~# zoneadm -z zone1 install Created zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install Image: Preparing at /zones/zone1/root. AI Manifest: /tmp/manifest.xml.RXaycg SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml Zonename: zone1 Installation: Starting ... Creating IPS image Startup linked: 1/1 done Installing packages from: solaris origin: http://pkg.us.oracle.com/support/ DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 183/183 33556/33556 222.2/222.2 2.8M/s PHASE ITEMS Installing new actions 46825/46825 Updating package state database Done Updating image state Done Creating fast lookup database Done Installation: Succeeded Note: Man pages can be obtained by installing pkg:/system/manual done. Done: Installation completed in 1696.847 seconds. Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process. 
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install Step 5) Boot the Zone. root@sysA:~# zoneadm -z zone1 boot Step 6) Login to zone's console to complete the specification of system information. root@sysA:~# zlogin -C zone1 Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation. Step 7) Shutdown the zone so it can be "moved." root@sysA:~# zoneadm -z zone1 shutdown Step 8) Detach the zone so that the original global zone can't use it. root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 installed /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 484M 1.51G 23% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool Step 9) Review the result and shutdown sysA so that sysB can use the shared disk. root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# init 0 Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 1. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command. When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.) root@sysB:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysB:~# zonecfg -z zone1 info zonename: zone1 zonepath: /zones/zone1 brand: solaris autoboot: false bootargs: file-mac-profile: pool: limitpriv: scheduling-class: ip-type: exclusive hostid: fs-allowed: anet: linkname: net0 ... rootzpool: storage: dev:dsk/c7t3d0 Step 11) Attaching the zone automatically imports the zpool. root@sysB:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach root@sysB:~# zoneadm -z zone1 boot root@sysB:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back. 
root@zone1:~# ls /opt root@zone1:~# touch /opt/fileA root@zone1:~# ls -l /opt/fileA -rw-r--r-- 1 root root 0 Oct 22 14:47 /opt/fileA root@zone1:~# exit logout [Connection to zone 'zone1' pts/2 closed] root@sysB:~# zoneadm -z zone1 shutdown root@sysB:~# zoneadm -z zone1 detach Exported zone zpool: zone1_rpool root@sysB:~# init 0 Step 13) Back on sysA, check the status. Oracle Corporation SunOS 5.11 11.1 September 2012 root@sysA:~# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / solaris shared - zone1 configured /zones/zone1 solaris excl root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - Step 14) Re-attach the zone back to sysA. root@sysA:~# zoneadm -z zone1 attach Imported zone zpool: zone1_rpool Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach Installing: Using existing zone boot environment Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris Cache: Using /var/pkg/publisher. Updating non-global zone: Linking to image /. Processing linked: 1/1 done Updating non-global zone: Auditing packages. No updates necessary for this image. Updating non-global zone: Zone updated. Result: Attach Succeeded. Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach root@sysA:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 17.6G 11.2G 6.47G 63% 1.00x ONLINE - zone1_rpool 1.98G 491M 1.51G 24% 1.00x ONLINE - root@sysA:~# zoneadm -z zone1 boot root@sysA:~# zlogin zone1 [Connected to zone 'zone1' pts/2] Oracle Corporation SunOS 5.11 11.1 September 2012 root@zone1:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 1.98G 538M 1.46G 26% 1.00x ONLINE - Step 15) Check for the file created on sysB, earlier. root@zone1:~# ls -l /opt total 1 -rw-r--r-- 1 root root 0 Oct 22 14:47 fileA Next Steps Here is a brief list of some of the fun things you can try next. Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the configurations of both zones! Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.) Conclusion Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.
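As an aside, the "zonecfg export" hand-off recommended in the Method Overview above can be captured in a couple of commands. This is a hedged sketch assuming the article's zone name; the temporary file path is an illustrative assumption, and you still need to edit the rootzpool storage device name for the target host before applying the file:

    # On sysA: capture the zone's configuration and copy it across
    root@sysA:~# zonecfg -z zone1 export -f /tmp/zone1.cfg
    root@sysA:~# scp /tmp/zone1.cfg sysB:/tmp/zone1.cfg

    # On sysB: edit the storage device in the file (c7t3d0 in this example), then apply it
    root@sysB:~# zonecfg -z zone1 -f /tmp/zone1.cfg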

    Read the article

  • Unity.ResolutionFailedException - Resolution of the dependency failed

    - by Anibas
    I have the following code: public static IEngine CreateEngine() { UnityContainer container = Unity.LoadUnityContainer(DefaultStrategiesContainerName); IEnumerable<IStrategy> strategies = container.ResolveAll<IStrategy>(); ITraderProvider provider = container.Resolve<ITraderProvider>(); return new Engine(provider, new List<IStrategy>(strategies)); } and the config: <unity> <typeAliases> <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="weakRef" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" /> <typeAlias alias="Strategy" type="ADTrader.Core.Contracts.IStrategy, ADTrader.Core" /> <typeAlias alias="Trader" type="ADTrader.Core.Contracts.ITraderProvider, ADTrader.Core" /> </typeAliases> <containers> <container name="strategies"> <types> <type type="Strategy" mapTo="ADTrader.Strategies.ThreeTurningStrategy, ADTrader.Strategies" name="1" /> <type type="Trader" mapTo="ADTrader.MbTradingProvider.MBTradingProvider, ADTrader.MbTradingProvider" /> </types> </container> </containers></unity> I am getting the following exception: Microsoft.Practices.Unity.ResolutionFailedException: Resolution of the dependency failed, type = "ADTrader.Core.Contracts.ITraderProvider", name = "". Exception message is: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3) --- Microsoft.Practices.ObjectBuilder2.BuildFailedException: The current build operation (build key Build Key[ADTrader.MbTradingProvider.MBTradingProvider, null]) failed: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. (Strategy type BuildPlanStrategy, index 3) --- System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. at MBTCOMLib.MbtComMgrClass.EnableSplash(Boolean bEnable) at ADTrader.MbTradingProvider.MBTradingProvider..ctor() at BuildUp_ADTrader.MbTradingProvider.MBTradingProvider(IBuilderContext ) at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) --- End of inner exception stack trace --- at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing) at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) --- End of inner exception stack trace --- at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) at Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name) at Microsoft.Practices.Unity.UnityContainerBase.ResolveT at ADTrader.Engine.EngineFactory.CreateEngine() Any idea?

    Read the article

  • NATUPnP IStaticPortMappingCollection::Add returns HRESULT 0x80040214

    - by dauphic
    I'm attempting to use Microsoft's NATUPnP library to create a port mapping. Unfortunately, I'm unable to. My router supports UPnP, it is enabled, and I can create mappings with other (pre-built) applications. I can also read existing mappings. When I call the Add function, it fails and returns HRESULT 0x80040214 (which is undocumented). I have absolutely no idea what might be going on. IStaticPortMapping* newMapping = NULL; hr = portMappings->Add(27015, L"TCP", 27015, L"MYCOMPUTER", VARIANT_TRUE, L"TestMapping", &newMapping); You can see the reference for this function at http://msdn.microsoft.com/en-us/library/aa366148%28v=VS.85%29.aspx. The portMappings object is, of course, valid; I use it earlier in the code to enumerate over the existing mappings. If anyone has experience with this and might know what my problem is, I'd appreciate any help.

    Read the article

  • JSF 2 equivalent of tomahawk subform

    - by digitaljoel
    I have an existing JSF 1.2 app that was a portlet running on GlassFish v2. I'm converting it to a webapp running on GlassFish v3. The app uses Tomahawk subforms in several areas. Tomahawk has not been updated for JSF 2, which is what ships with GlassFish v3. We would like to update our app to JSF 2. Is there a JSF 2 equivalent to Tomahawk's subform? I know one option would be to change all of the subforms to be Ajax-enabled and use the execute attribute to specify which controls take part in the form submission, but I would like to make this as straight a port as I can without modifying existing functionality if I can. So, lacking a Tomahawk subform substitute, is there any way to specify partial form submission for non-Ajax events?
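For reference, the Ajax route mentioned above looks roughly like this in JSF 2 Facelets; this is a hedged sketch with made-up component ids and bean names, not code from the original application:

    <h:form id="nameForm">
        <h:inputText id="firstName" value="#{userBean.firstName}" />
        <h:inputText id="lastName" value="#{userBean.lastName}" />
        <h:commandButton value="Save name">
            <!-- Only the components listed in 'execute' are processed on submit,
                 which is the closest built-in JSF 2 analogue to a Tomahawk subform -->
            <f:ajax execute="firstName lastName" render="@form"
                    listener="#{userBean.saveName}" />
        </h:commandButton>
    </h:form>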

    Read the article

  • How do I base a style on a Silverlight toolkit theme style

    - by Ian Oakes
    I've been trying to add a theme from the Silverlight Toolkit to a project. In the project there are a number of existing styles used in the layout. The problem is that when any control has an explicit style applied to it, it does not receive any attributes of the style from the theme. In WPF I would use something like BasedOn="{StaticResource {x:Type TextBox}}", but this is not supported in Silverlight. I've considered going through the theme and setting a key for every style, and then using BasedOn to create both an implicit style to use with the ImplicitStyleManager, as well as another explicit style for use with the existing styled controls. Have you got any better ideas?
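For what it's worth, the keyed-style approach described above would look roughly like this; a hedged XAML sketch in which ThemeTextBoxStyle is an invented key you would add to the toolkit theme's resource dictionary, not one that ships with it:

    <!-- In the theme's resource dictionary: give the toolkit style an explicit key -->
    <Style x:Key="ThemeTextBoxStyle" TargetType="TextBox">
        <Setter Property="Foreground" Value="White" />
    </Style>

    <!-- In the page: an existing explicit style inherits the themed setters -->
    <Style x:Key="MyTextBoxStyle" TargetType="TextBox"
           BasedOn="{StaticResource ThemeTextBoxStyle}">
        <Setter Property="FontSize" Value="14" />
    </Style>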

    Read the article

  • Overwriting the content from one MOSS content database to another

    - by 78lro
    We have a content database on our live MOSS server. It contains one site collection with several sub-sites. I'm using the stsadm export command to produce a .cmp file, then moving this to our test server in a different farm. I then want to import this content into the content database on our test farm, but using the stsadm import command results in me being left with all the existing test data as well as the live data. I tried detaching the existing content database from test in Central Admin and creating a new empty one, to then run the import against that, but the import failed as obviously there's no root site in the empty DB. The aim is to have the data on test look like live, clearing out all the test data. Can anyone suggest a good approach to this type of problem?
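For reference, the export/import pair described above is along these lines; a hedged sketch in which the URLs and file name are placeholders and the exact switches you need may vary by MOSS version:

    REM On the live farm: export the site collection to a content migration package
    stsadm -o export -url http://live/sites/portal -filename portal.cmp -includeusersecurity

    REM On the test farm: import the package into the target site collection
    stsadm -o import -url http://test/sites/portal -filename portal.cmp -includeusersecurity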

    Read the article

  • .net web service: Can't add service reference, only web reference

    - by ScottE
    I have an existing project that consumes web services. One was added as a service reference, and the other as a web reference. I don't recall why one was added as a web reference, but perhaps it's because I couldn't get it to work! The existing service reference for the one web service works fine, so it's not a .net version issue. I can successfully create a service reference for the second web service, but none of the methods are available. The .wsdl shows the schema, but the Reference.vb shows only the Namespace, and none of the methods. To clarify, these are two different 3rd party web service providers. We'd like to move to the service reference so we have more control over the configuration as we're having various issues with timeouts. Anyone come across this before? Edit Does it matter that there are two services at the address?

    Read the article

  • SyncFramework upgrade from 1.0 to 2.0 Sql Server CE database change tracking issue

    - by Andronicus
    I'm trying to upgrade an application that uses Sync Framework 1.0 to synchronise a SqlServerCe database with SqlServer 2005. On the client, the existing database already has change tracking enabled, but when the sync is initiated SyncFramework 2.0 fails to find the last Sync Received anchor and then tries to re-initialize the change tracking, which fails. I get the exception... {System.Exception} = {"The specified change tracking operation is not supported. To carry out this operation on the table, disable the change tracking on the table, and enable the change tracking."} It seems like all I can do is delete the local database and recreate it, which is not a great solution for us, since some of the data in the client's database is not synced with the server, and our users would prefer not to lose this data in the upgrade. Is there any reason why SyncFramework 2.0 cannot locate the existing Last Received sync anchor?

    Read the article

  • Moving from WCF RIA Beta to RC: best practices?

    - by Duncan Bayne
    I have an existing WCF RIA project built on the Release Candidate; I'm now moving to the Release version and have discovered many changes. David Scruggs made the following comment on his (MSDN) blog: "If you've written anything in Silverlight 4 RIA Services, you'll need to rewrite it. There has been a lot of refactoring and namespace moves." Having made a brief attempt to compile the old solution with the new RIA framework I'm inclined to agree. My current plan is to:
- remove the Silverlight Business Application projects from the solution
- rebuild the EF4 items from the database
- create a new Silverlight Business Application project
- re-add the files (XAML, CS) from the old Silverlight Business Application project
Does this sound like a reasonable approach? I think it's cleaner than trying to manually alter the existing project.

    Read the article

  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model:

    class SomeModel
      include MongoMapper::Document
      key :some_key, String
    end

Later in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this:

    class SomeModel
      include MongoMapper::Document
      key :some_key, String
      key :some_new_key, String, :required => true
    end

How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this:

    class SomeModel
      include MongoMapper::Document
      key :some_new_key, String, :required => true
    end

But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space? With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version 1 - version 2 migration) and to delete the values for some_key (in the version 2 - version 3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist?

EDITED: I think people are missing the point of my question. There are times where you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine what scripts have already been run and what scripts have not been run. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
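For the data changes themselves, a hedged Ruby sketch of what a hand-rolled migration script could do; the default value and the idea of recording applied scripts in a separate "migrations" collection are illustrative assumptions, not an existing framework:

    # Hypothetical one-off script, run once against the database
    class BackfillSomeNewKey
      NAME = "001_backfill_some_new_key"

      def self.run
        coll = SomeModel.collection

        # Backfill the new required key on existing documents
        coll.update({ "some_new_key" => nil },
                    { "$set" => { "some_new_key" => "default value" } },
                    :multi => true)

        # Reclaim the space used by the key that is no longer needed
        coll.update({}, { "$unset" => { "some_key" => 1 } }, :multi => true)

        # Record that this script has run, mimicking schema_migrations
        SomeModel.database.collection("migrations").insert("name" => NAME)
      end
    end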

    Read the article

  • Pushing a ABUnknownPersonViewController onto a navigation controller results in having no Navigation

    - by dermdaly
    Hi There, I'm working with the Address Book UI API on iPhone SDK 3.0. I want to present to the user the ability to create a new user, or add to an existing one, so I am using the ABUnknownPersonViewController. I have an existing navigation stack (with only 2 other views on it). Trouble is when I push the ABUnknownPersonViewController onto it, it shows up animated, etc. But there is no navigation bar, so no way to cancel. My code snippet is as follows newPersonViewController = [[ABUnknownPersonViewController alloc] init]; newPersonViewController.unknownPersonViewDelegate = self; newPersonViewController.displayedPerson = person; newPersonViewController.allowsAddingToAddressBook = YES; newPersonViewController.allowsActions = NO; [[self navigationController] pushViewController:newPersonViewController animated:YES]; Note: the current view controller does have a title, so that's not the issue. Any ideas what I am missing?

    Read the article

  • Rails migration to add boolean column to Postgres on Heroku

    - by pmc255
    I'm trying to execute a simple Rails migration to add a boolean column to an existing table. Here's the add_column call: add_column :users, :soliciting, :boolean, :null => false, :default => false However, after the migration runs (successfully, with no errors), I don't see the new column. If I go into the console and list the columns on the User table, for example, with this command: >> User.columns.each { |c| puts "#{c.name} : #{c.type}" } All the other columns show up, but not the one I just added with the migration. What's even more strange is that looking up a random user object yields the Postgres version of booleans (Ruby strings) >> User.find(1).soliciting => "t" However, the existing boolean columns all show up with standard Ruby boolean values of true and false. What's going on here? Is the migration actually complete? Why doesn't the column show up, yet is accessible in the model objects?
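One low-cost thing to check, offered as a guess rather than a confirmed diagnosis: ActiveRecord caches column metadata per model, so the console session may simply be showing the pre-migration schema. A hedged sketch of ruling that out:

    # In the Rails console, after the migration has run
    User.reset_column_information             # discard the cached schema for the model
    User.columns.map { |c| [c.name, c.type] } # 'soliciting' should now be listed as :boolean
    User.find(1).soliciting                   # and should come back as true/false, not "t"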

    Read the article

  • jquery - establishing truths when loading inline javascript via AJAX

    - by yaya3
    I have thrown together a quick prototype to try and establish a few very basic truths regarding what inline JavaScript can do when it is loaded with AJAX: index.html <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script> </head> <body> <script type="text/javascript"> $('p').css('color','white'); alert($('p').css('color')); // DISPLAYS FIRST but is "undefined" $(document).ready(function(){ $('#ajax-loaded-content-wrapper').load('loaded-by-ajax.html', function(){ $('p').css('color','grey'); alert($('p').css('color')); // DISPLAYS LAST (as expected) }); $('p').css('color','purple'); alert($('p').css('color')); // DISPLAYS SECOND }); </script> <p>Content not loaded by ajax</p> <div id="ajax-loaded-content-wrapper"> </div> </body> </html> loaded-by-ajax.html <p>Some content loaded by ajax</p> <script type="text/javascript"> $('p').css('color','yellow'); alert($('p').css('color')); // DISPLAYS THIRD $(document).ready(function(){ $('p').css('color','pink'); alert($('p').css('color')); // DISPLAYS FOURTH }); </script> <p>Some content loaded by ajax</p> <script type="text/javascript"> $(document).ready(function(){ $('p').css('color','blue'); alert($('p').css('color')); // DISPLAYS FIFTH }); $('p').css('color','green'); alert($('p').css('color')); // DISPLAYS SIX </script> <p>Some content loaded by ajax</p> Notes: a) All of the above (except the first) successfully change the colour of all the paragraphs (in firefox 3.6.3). b) I've used alert instead of console.log as console is undefined when called in the 'loaded' HTML. Truths(?): $(document).ready() does not treat the 'loaded' HTML as a new document, or reread the entire DOM tree including the loaded HTML JavaScript that is contained inside 'loaded' HTML can effect the style of existing DOM nodes One can successfully use the jQuery library inside 'loaded' HTML to effect the style of existing DOM nodes One can not use the firebug inside 'loaded' HTML can effect the existing DOM (proven by Note a) Am I correct in deriving these 'truths' from my tests (test validity)? If not, how would you test for these?

    Read the article

  • Web Service to connect to an API and get the response back from the API

    - by Scarlette_June
    This is a general programming question. I'm new to Java web services programming using Apache Axis and JAX-RPC. We need to build two components, an app engine (shopping cart, payment gateway integration, etc.) and a UI control panel, over an existing API. The API understands only XML. How must we communicate with the API? We have been asked to write a web service to establish the communication. Please provide the steps and a code example/snippet on how to connect to an existing API through a web service and get the response back from the API to the calling web service. John, I hope I have been able to explain my query. If you have ideas on how to communicate with the API to get the desired result to the user, please let us know. We started our careers in technology a year back, after our graduation, and this project is our very first Java EE project.

    Read the article

  • PycURL RESUME_FROM

    - by excid3
    I can't seem to get the RESUME_FROM option to work. Here's some example code that I have been testing with:

    import os
    import pycurl
    import sys

    def progress(total, existing, upload_t, upload_d):
        try:
            frac = float(existing)/float(total)
        except:
            frac = 0
        sys.stdout.write("\r%s %3i%%" % ("file", frac*100))

    url = "http://launchpad.net/keryx/stable/0.92/+download/keryx_0.92.4.tar.gz"
    filename = url.split("/")[-1].strip()

    def test(debug_type, debug_msg):
        print "debug(%d): %s" % (debug_type, debug_msg)

    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.FOLLOWLOCATION, 1)
    c.setopt(pycurl.MAXREDIRS, 5)

    # Setup writing
    if os.path.exists(filename):
        f = open(filename, "ab")
        c.setopt(pycurl.RESUME_FROM, os.path.getsize(filename))
    else:
        f = open(filename, "wb")

    c.setopt(pycurl.WRITEDATA, f)
    #c.setopt(pycurl.VERBOSE, 1)
    c.setopt(pycurl.DEBUGFUNCTION, test)
    c.setopt(pycurl.NOPROGRESS, 0)
    c.setopt(pycurl.PROGRESSFUNCTION, progress)
    c.perform()

    Read the article

  • 301 versus inline rewrites

    - by Kristoffer S Hansen
    I'm in the process of adding 'pretty' URLs to an existing CMS; the menu is auto-generated and the new 'pretty' URLs are to be handled independently as a separate module. The auto-generated menu always has URLs that look like index.php?menu_id=n, which of course we would like to see as e.g. /news or /products. I'm currently at the point where I have to decide if I'm going to rewrite all output of the current system or simply put in a hook where I redirect to the 'pretty' URL. To put it differently, should I connect to the database, fetch all 'pretty' URLs, run through the existing output from WYSIWYGs, news modules, forums etc. and do some str_replace or other string manipulation (which I think would be a rather tedious and boring process), or should I simply hook in and throw a 301 redirecting index.php?menu_id=3 to /news? Will Google (or other search engines) penalize me for having 301s in the menus?
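For reference, the 301 hook described above usually ends up as a rule pair like the following; a hedged sketch using the question's own menu_id=3 to /news example (in practice the mapping would more likely be generated by the CMS than hard-coded per id):

    RewriteEngine On

    # 301 the old query-string URL to its 'pretty' equivalent, matching THE_REQUEST
    # (the original browser request) so the internal rewrite below can't cause a loop
    RewriteCond %{THE_REQUEST} \s/index\.php\?menu_id=3[\s&]
    RewriteRule ^index\.php$ /news? [R=301,L]

    # Internally map the pretty URL back onto the CMS entry point
    RewriteRule ^news/?$ index.php?menu_id=3 [L]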

    Read the article

  • Is there any killer application for Ontology/semantics/OWL/RDF yet?

    - by narnirajesh
    Hi guys, I got interested in semantic technologies after reading a lot of books, blogs and articles on the net saying that they would make data machine-understandable, allow intelligent agents to do great reasoning, enable automated and dynamic service composition, etc. I have been reading the same stuff for two years. The number of articles/blogs/semantic conferences has increased considerably. But I am still unable to see any killer application. Why is it so? Or is there some application/product (commercial/open-source) already existing which actually does all that is being boasted of? To put it more precisely, is there any product that leverages semantic technologies (esp. RDF/OWL/SPARQL) and is delivering functionality/performance/maintainability which would not have been possible with the existing (non-semantic) technologies? Some product that is completely dependent on semantic technologies, really adds value to customers, and generates revenue?

    Read the article

  • Multiple client projects to one server project w/ Silverlight & RIA Services Beta

    - by Dale Halliwell
    The type or namespace name 'Resources' does not exist in the namespace 'MyWebProject.Web' (are you missing an assembly reference?) C:\Users\...\MySecondProject\Generated_Code\MyWebProject.Web.g.cs I am having some problems trying to add a second SL client project to my (Ria services) SL Business Application. It has to do with the way the shared Resources files on the Web project are linked to from my new SL client project (the SL client project that was generated by the Business App template works fine). The same problem was brought up in the SL forums but copying the Web folder from my existing SL client doesn't seem to work. How can I add a second SL client project using RIA services to the solution of an existing SL Business Application without these problems over shared resources? Should I avoid the Business Application solution template for solutions with multiple SL clients since it seems to presume only a single client app will be sharing the resource files?

    Read the article

  • how to generate dbml file from Sybase database?

    - by 5YrsLaterDBA
    I think we may have trouble with our existing project. For various reasons we have to switch from SQL Server to Sybase SQL Anywhere 11, and now we are trying to find a way to continue using our existing LINQ code. We hope we can still use L2S; if we cannot, we hope we can use L2E, otherwise we will have to change to ADO. How do we generate a .dbml file from Sybase SQL Anywhere 11? After that, can we use SqlMetal to generate the .cs files?

    Read the article

  • How do I use .htaccess RewriteRule to change underscores to dashes

    - by soopadoubled
    I'm working on a site, and its CMS used to save new page URLs using the underscore character as a word separator. Despite the fact that Google now treats underscore as a word separator, the SEO powers that be are demanding the site use dashes instead. This is very easy to do within the CMS, and I can of course change all existing URLs saved in the MySQL database that serves the CMS. My problem lies in writing a .htaccess rule that will 301 old-style underscore-separated links to the new-style hyphenated version. I had success using the answers to this Stack Overflow question on other sites, using:

    RewriteRule ^([^_]*)_([^_]*_.*) $1-$2 [N]
    RewriteRule ^([^_]*)_([^_]*)$ /$1-$2 [L,R=301]

However this CMS site uses a lot of existing rules to produce clean URLs, and I can't get this working in conjunction with the existing rule set. .htaccess currently looks like this:

    Options FollowSymLinks
    # RewriteOptions MaxRedirects=50
    RewriteEngine On
    RewriteBase /
    RewriteCond %{HTTP_HOST} !^www\.mydomain\.co\.uk$ [NC]
    RewriteRule (.*) http://www.mydomain.co.uk/$1 [R=301,L]
    #trailing slash enforcement
    RewriteBase /
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} !#
    RewriteCond %{REQUEST_URI} !(.*)/$
    RewriteRule ^(.*)$ http://www.mydomain.co.uk/$1/ [L,R=301]
    RewriteRule ^test/([0-9]+)(/)?$ test_htaccess.php?year=$1 [nc]
    RewriteRule ^index(/)?$ index.php
    RewriteRule ^department/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=ecom.details&mode=$1&$2=$3 [nc]
    RewriteRule ^department/([^/]*)(/)?$ ecom/index.php?action=ecom.details&mode=$1 [nc]
    RewriteRule ^product/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=ecom.pdetails&mode=$1&$2=$3 [nc]
    RewriteRule ^product/([^/]*)(/)?$ ecom/index.php?action=ecom.pdetails&mode=$1 [nc]
    RewriteRule ^content/([^/]*)(/)?$ ecom/index.php?action=ecom.cdetails&mode=$1 [nc]
    RewriteRule ([^/]*)/action/([^/]*)/([^/]*)/([^/]*)/([^/]*)(/)?$ $1/index.php?action=$2&mode=$3&$4=$5 [nc]
    RewriteRule ([^/]*)/action/([^/]*)/([^/]*)(/)?$ $1/index.php?action=$2&mode=$3 [nc]
    RewriteRule ([^/]*)/action/([^/]*)(/)?$ $1/index.php?action=$2 [nc]
    RewriteRule ^eaction/([^/]*)/([^/]*)/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=$1&mode=$2&$3=$4 [nc]
    RewriteRule ^eaction/([^/]*)/([^/]*)(/)?$ ecom/index.php?action=$1&mode=$2 [nc]
    RewriteRule ^action/([^/]*)/([^/]*)(/)?$ index.php?action=$1&mode=$2 [nc]
    RewriteRule ^sid/([^/]*)(/)?$ index.php?sid=$1 [nc]
    ## Error Handling ##
    #RewriteRule ^error/([^/]*)(/)?$ index.php?action=error&mode=$1 [nc]
    # ----------------------------------- Content Section ------------------------------ #
    #RewriteRule ^([^/]*)(/)?$ index.php?action=cms&mode=$1 [nc]
    RewriteRule ^accessibility(/)?$ index.php?action=cms&mode=accessibility
    RewriteRule ^terms(/)?$ index.php?action=cms&mode=conditions
    RewriteRule ^privacy(/)?$ index.php?action=cms&mode=privacy
    RewriteRule ^memberpoints(/)?$ index.php?action=cms&mode=member_points
    RewriteRule ^contactus(/)?$ index.php?action=contactus
    RewriteRule ^sitemap(/)?$ index.php?action=sitemap
    ErrorDocument 404 /index.php?action=error&mode=content
    ExpiresDefault "access plus 3 days"

All page URLs are in one of the 3 following formats:
    http://www.mydomain.com/department/some_page_address/
    http://www.mydomain.com/product/some_page_address/
    http://www.mydomain.com/content/some_page_address/
I'm sure I am missing something obvious, but at this level my regex and mod_rewrite skills clearly aren't up to par. Any ideas would be greatly appreciated!

    Read the article
