Search Results

Search found 2602 results on 105 pages for '2phase commit'.

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions.]

    Here's the deal: I'm implementing a system that underlies user applications and that protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this:

        public class MyAtomicObject
        {
            // These are just examples of fields you may want to have in your class.
            public virtual int x { get; set; }
            public virtual List<int> list { get; set; }
            public virtual MyClassA objA { get; set; }
            public virtual MyClassB objB { get; set; }
        }

    As you can see, they declare the fields of their class as auto-implemented properties (meaning they don't need to write the get and set bodies themselves). This is so that I can go in and extend their class, implementing each get and set myself in order to handle possible concurrent accesses. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this:

        1. The thread signals it's starting a transaction. From this point on we need to monitor its accesses to the fields of the atomic objects.
        2. The thread runs its code, possibly accessing fields for reading or writing. Write accesses are hidden from the other transactions (other threads) and only made visible in step 3, because the transaction may fail and have to roll back (undo) its updates; in that case we don't want other threads to see its "dirty" data.
        3. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made now become visible to everyone else. Otherwise the transaction aborts, the updates remain invisible, and no one will ever know the transaction was there.

    So basically the concept of a transaction is a series of accesses that appear to have happened atomically: all at the same time, in the same instant, which would be the moment of successful commit (as opposed to its updates becoming visible as it makes them).

    In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put the clone in the transaction's write log. After that, any time the transaction accesses list, it actually accesses the clone in its write log, not the global copy everyone else sees. That way, any changes it makes are done to the (invisible) clone, not to the global copy. If the commit in step 3 is successful, the transaction replaces the global copy with the updated list in its write log, and the changes become visible to everyone else at once. It would be something like this:

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    I'm just replacing the reference in the field list, not overwriting the object's data, so the dictionary still holds a reference to the old version of the list. A possible solution would be to overwrite the data instead (in the case of a list: empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list onto the other. I can do this with reflection, but that's not very pretty. Is there any other way to do it? (See the sketch at the end of this post for one idea.)

    Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit time there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this:

        public class Wrapper<T> : T
        {
            // This would be the pointer to the global copy. The local data is
            // contained in whatever fields the wrapper inherits from T.
            private T thisPtr;
        }

    I do need this wrapper for comparisons: if I have a dictionary with an entry whose key is the global copy, and I look it up with the clone, like this:

        dictionary[updatedCloneOfListInTheWriteLog]

    I need it to return the entry, that is, to consider updatedCloneOfListInTheWriteLog and the global copy the same thing. For this I can just override Equals, GetHashCode, operator== and operator!=, no problem. However, I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary.

    Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class would be accepted. However, that class may be sealed, so it cannot be subclassed. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper; anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone.

    I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!
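
    One way to attack Problem #1 is to make the commit overwrite the global object's contents rather than swap the reference, so that outside references stay valid. Below is a minimal sketch of that idea; the ICopyTarget interface and CommitHelper names are invented for illustration (they are not part of any framework), and the reflection branch is the same "not very pretty" fallback mentioned above:

        using System.Collections;
        using System.Reflection;

        // Hypothetical opt-in contract: a type that knows how to absorb
        // another instance's state during commit.
        public interface ICopyTarget<T>
        {
            void CopyFrom(T source);
        }

        public static class CommitHelper
        {
            // Commit a write-log clone into the global copy by overwriting its
            // contents in place; references to the global object stay valid.
            public static void CommitInto<T>(T globalCopy, T clone) where T : class
            {
                if (globalCopy is ICopyTarget<T> target)
                {
                    target.CopyFrom(clone);
                }
                else if (globalCopy is IList list && clone is IList cloneList)
                {
                    // The list case described above: clear and re-fill.
                    list.Clear();
                    foreach (object item in cloneList) list.Add(item);
                }
                else
                {
                    // Last resort: shallow-copy every instance field via reflection.
                    var flags = BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic;
                    foreach (FieldInfo f in typeof(T).GetFields(flags))
                        f.SetValue(globalCopy, f.GetValue(clone));
                }
            }
        }

    This does nothing for Problems #2 and #3 (clones that escape into user data structures), but it keeps every existing reference to the global object valid across commits.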

  • Oracle: does a SELECT read UNDO?

    - by Liu Maclean
    A common claim is that Oracle can be made to do a dirty read. It cannot: as Tom Kyte has explained many times on AskTom, Oracle never returns uncommitted data. It uses the before images stored in undo to build consistent results, which is exactly why an ordinary query may need to read undo at all. Oracle is, in this respect, a very strict RDBMS.

    That said, Oracle does provide two hidden parameters that change this behaviour:

        _offline_rollback_segments
        _corrupted_rollback_segments

    These two parameters exist so that a database suffering from undo corruption or ORA-600 [4XXX]-style internal errors can still be forced open; they trade consistency for availability. They influence three things: forcing the database open (FORCE OPEN DATABASE), consistent read & delayed block cleanout, and which rollback segments the instance will use. Be warned: never set them on your own initiative in a real system; use them only under the guidance of Oracle Support.

    What _offline_rollback_segments and _corrupted_rollback_segments have in common:

        - undo segments named (by segment name) in either list are not brought online at startup
        - those segments are marked OFFLINE in UNDO$
        - the instance will not assign new transactions to them
        - active transactions found in them are treated as dead and left to SMON's dead-transaction recovery

    _OFFLINE_ROLLBACK_SEGMENTS (the offline undo segment list) additionally behaves like this:

        - at startup/open the listed undo segments are still read; if one is missing or corrupted, a message is written to the alert.log and trace files, but startup continues
        - the transaction tables of the listed segments are still consulted, so a transaction found to be uncommitted still has consistent-read (CR) versions built from its undo
        - DML that touches blocks of such transactions can spend noticeable CPU on rollback and cleanout work

    _CORRUPTED_ROLLBACK_SEGMENTS (the corrupted undo segment list) is far more drastic:

        - at startup/open the listed undo segments are not scanned at all
        - any transaction whose undo lives in a listed segment is simply treated as committed, as if the segment had been dropped
        - this can leave the database logically inconsistent, since half-done transactions are accepted as final
        - if bootstrap objects are affected it can end in ORA-00704: bootstrap process failure (see also ORA-00600 [4000])
        - internally Oracle uses the TXChecker job to check for problem transactions

    To see what these parameters really do, and, along the way, whether a plain SELECT reads UNDO, here is a demo using _CORRUPTED_ROLLBACK_SEGMENTS:

        SQL> alter system set event= '10513 trace name context forever, level 2' scope=spfile;
        System altered.

        SQL> alter system set "_in_memory_undo"=false scope=spfile;
        System altered.

    The 10513 level 2 event stops SMON from rolling back dead transactions; _in_memory_undo=false disables in-memory undo.

        SQL> startup force;
        ORACLE instance started.
        Total System Global Area 3140026368 bytes
        Fixed Size                  2232472 bytes
        Variable Size            1795166056 bytes
        Database Buffers         1325400064 bytes
        Redo Buffers               17227776 bytes
        Database mounted.
        Database opened.

    Session A:

        SQL> conn maclean/maclean
        Connected.

        SQL> create table maclean tablespace users as select 1 t1 from dual connect by level<=501;
        Table created.

        SQL> exec dbms_stats.gather_table_stats('','MACLEAN');
        PL/SQL procedure successfully completed.

        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
               501

        Execution Plan
        ----------------------------------------------------------
        Plan hash value: 1679547536

        ------------------------------------------------------------------------------
        | Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
        ------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT   |         |     1 |     3 |     3   (0)| 00:00:01 |
        |   1 |  SORT AGGREGATE    |         |     1 |     3 |            |          |
        |   2 |   TABLE ACCESS FULL| MACLEAN |   501 |  1503 |     3   (0)| 00:00:01 |
        ------------------------------------------------------------------------------

        Statistics
        ----------------------------------------------------------
                  1  recursive calls
                  0  db block gets
                  3  consistent gets
                  0  physical reads
                  0  redo size
                515  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    With no other transaction in flight the query reads current blocks, and consistent gets is only 3.

        SQL> update maclean set t1=0;
        501 rows updated.

        SQL> alter system checkpoint;
        System altered.

    Session A does not commit. Now, from another session:

        SQL> conn maclean/maclean
        Connected.

        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
               501

        (execution plan identical to the one above, hash value 1679547536)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
                505  consistent gets
                  0  physical reads
                108  redo size
                515  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    Because session A's update is uncommitted, the second session must read undo to build consistent-read copies of the blocks: consistent gets jumps to 505, and the SELECT even generates a little redo (108 bytes) from block cleanout.

    Next, kill session A's dedicated server process to leave a dead transaction behind (the 10513 event keeps SMON from recovering it):

        [oracle@vrh8 ~]$ ps -ef|grep LOCAL=YES |grep -v grep
        oracle    5841  5839  0 09:17 ?        00:00:00 oracleG10R25 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
        [oracle@vrh8 ~]$ kill -9 5841

        SQL> select ktuxeusn, to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') "Time",
                    ktuxesiz, ktuxesta
               from x$ktuxe
              where ktuxecfl = 'DEAD';

          KTUXEUSN Time                   KTUXESIZ KTUXESTA
        ---------- -------------------- ---------- ----------------
                 2 06-AUG-2012 09:20:45          7 ACTIVE

    One rollback segment (usn 2) holds an ACTIVE dead transaction. Reconnect and query again:

        SQL> conn maclean/maclean
        Connected.

        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
               501

        (execution plan identical to the one above)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
                411  consistent gets
                  0  physical reads
                108  redo size
                515  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    Even after the kill, SMON is not rolling the dead transaction back, so the query still has to read its undo to stay consistent: 411 consistent gets. Now find the name of the still-active rollback segment and put it on the corrupted list:

        SQL> select segment_name from dba_rollback_segs where segment_id=2;

        SEGMENT_NAME
        ------------------------------
        _SYSSMU2$

        SQL> alter system set "_corrupted_rollback_segments"='_SYSSMU2$' scope=spfile;
        System altered.

    With _SYSSMU2$ listed in _corrupted_rollback_segments, its undo will no longer be read:

        SQL> startup force;
        ORACLE instance started.
        Total System Global Area 3140026368 bytes
        Fixed Size                  2232472 bytes
        Variable Size            1795166056 bytes
        Database Buffers         1325400064 bytes
        Redo Buffers               17227776 bytes
        Database mounted.
        Database opened.

        SQL> conn maclean/maclean
        Connected.

        SQL> set autotrace on;
        SQL> select sum(t1) from maclean;

           SUM(T1)
        ----------
                94

        (execution plan identical to the one above)

        Statistics
        ----------------------------------------------------------
                228  recursive calls
                  0  db block gets
                 29  consistent gets
                  5  physical reads
                116  redo size
                514  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  4  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

        SQL> /

           SUM(T1)
        ----------
                94

        (execution plan identical to the one above)

        Statistics
        ----------------------------------------------------------
                  0  recursive calls
                  0  db block gets
                  3  consistent gets
                  0  physical reads
                  0  redo size
                514  bytes sent via SQL*Net to client
                492  bytes received via SQL*Net from client
                  2  SQL*Net roundtrips to/from client
                  0  sorts (memory)
                  0  sorts (disk)
                  1  rows processed

    Consistent gets is back down to 3, but the answer is now wrong: sum(t1) returns 94 instead of 501. Transactions whose ITL entries point into an undo segment listed in _corrupted_rollback_segments are treated as if they had committed, so their undo is never applied; the SELECT no longer reads UNDO, and it no longer sees a consistent image of the data. That is the whole point of the demonstration: an ordinary SELECT does read UNDO to build consistent results, and these hidden parameters must never be set casually on a production system.

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder used to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous (there is a lot there), but there is a general pattern to the SDK that I will describe here. I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

    Entry to the Platform

        Object        Finder                                       SDK
        odiInstance   odiInstance (groovy variable for console)    OdiInstance

    Topology Objects

        Object                                Finder                               SDK
        Technology                            IOdiTechnologyFinder                 OdiTechnology
        Context                               IOdiContextFinder                    OdiContext
        Logical Schema                        IOdiLogicalSchemaFinder              OdiLogicalSchema
        Data Server                           IOdiDataServerFinder                 OdiDataServer
        Physical Schema                       IOdiPhysicalSchemaFinder             OdiPhysicalSchema
        Logical Schema to Physical Mapping    IOdiContextualSchemaMappingFinder    OdiContextualSchemaMapping
        Logical Agent                         IOdiLogicalAgentFinder               OdiLogicalAgent
        Physical Agent                        IOdiPhysicalAgentFinder              OdiPhysicalAgent
        Logical Agent to Physical Mapping     IOdiContextualAgentMappingFinder     OdiContextualAgentMapping
        Master Repository                     IOdiMasterRepositoryInfoFinder       OdiMasterRepositoryInfo
        Work Repository                       IOdiWorkRepositoryInfoFinder         OdiWorkRepositoryInfo

    Project Objects

        Object           Finder                       SDK
        Project          IOdiProjectFinder            OdiProject
        Folder           IOdiFolderFinder             OdiFolder
        Interface        IOdiInterfaceFinder          OdiInterface
        Package          IOdiPackageFinder            OdiPackage
        Procedure        IOdiUserProcedureFinder      OdiUserProcedure
        User Function    IOdiUserFunctionFinder       OdiUserFunction
        Variable         IOdiVariableFinder           OdiVariable
        Sequence         IOdiSequenceFinder           OdiSequence
        KM               IOdiKMFinder                 OdiKM

    Load Plans and Scenarios

        Object                          Finder                      SDK
        Load Plan                       IOdiLoadPlanFinder          OdiLoadPlan
        Load Plan and Scenario Folder   IOdiScenarioFolderFinder    OdiScenarioFolder

    Model Objects

        Object       Finder                 SDK
        Model        IOdiModelFinder        OdiModel
        Sub Model    IOdiSubModel           OdiSubModel
        DataStore    IOdiDataStoreFinder    OdiDataStore
        Column       IOdiColumnFinder       OdiColumn
        Key          IOdiKeyFinder          OdiKey
        Condition    IOdiConditionFinder    OdiCondition

    Operator Objects

        Object           Finder                     SDK
        Session Folder   IOdiSessionFolderFinder    OdiSessionFolder
        Session          IOdiSessionFinder          OdiSession
        Schedule                                    OdiSchedule

    How to Create an Object?

    Here is a simple example to create a project; it uses IOdiEntityManager.persist to persist the object.

        import oracle.odi.domain.project.OdiProject;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        project = new OdiProject("Project For Demo", "PROJECT_DEMO")
        odiInstance.getTransactionalEntityManager().persist(project)
        tm.commit(txnStatus)

    How to Update an Object?

    This update example uses the methods on the OdiProject object to change the name of the project created above; the project is then persisted.

        import oracle.odi.domain.project.OdiProject;
        import oracle.odi.domain.project.finder.IOdiProjectFinder;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
        project = prjFinder.findByCode("PROJECT_DEMO");
        project.setName("A Demo Project");
        odiInstance.getTransactionalEntityManager().persist(project)
        tm.commit(txnStatus)

    How to Delete an Object?

    Here is a simple example to delete all of the sessions; it uses IOdiEntityManager.remove to delete each object.

        import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
        import oracle.odi.domain.runtime.session.OdiSession;
        import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

        txnDef = new DefaultTransactionDefinition();
        tm = odiInstance.getTransactionManager()
        txnStatus = tm.getTransaction(txnDef)
        sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
        sessc = sessFinder.findAll();
        sessItr = sessc.iterator()
        while (sessItr.hasNext()) {
          sess = (OdiSession) sessItr.next()
          odiInstance.getTransactionalEntityManager().remove(sess)
        }
        tm.commit(txnStatus)

    This isn't an all-encompassing summary of the SDK, but it covers a lot of the content and should give you a good handle on the objects and how they work. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!

  • Delegation of Solaris Zone Administration

    - by darrenm
    In Solaris 11, 'Zone Delegation' is a built-in feature. The Zones system now uses fine-grained RBAC authorisations to allow delegation of management of distinct zones, rather than all zones, which is what the 'Zone Management' RBAC profile did in Solaris 10.

    The data for this can be stored with the zone, or you can create RBAC profiles (which can even be stored in NIS or LDAP) that grant access to specific lists of zones to administrators.

    For example, let's say we have zones named zoneA through zoneF and three admins: alice, bob and carl. We want to grant a subset of zone management to each of them. We can do that either by adding the admin resource to the appropriate zones via zonecfg(1M), or by working with the RBAC data directly.

    First, an example of storing the data with the zone:

        # zonecfg -z zoneA
        zonecfg:zoneA> add admin
        zonecfg:zoneA> set user=alice
        zonecfg:zoneA> set auths=manage
        zonecfg:zoneA> end
        zonecfg:zoneA> commit
        zonecfg:zoneA> exit

    Now the alternate method, storing this directly in the RBAC database; this time we show all our admins and zones:

        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice
        # usermod -A +solaris.zone.login/zoneB alice
        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
        # usermod -A +solaris.zone.manage/zoneC bob
        # usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
        # usermod -A +solaris.zone.manage/zoneD carl
        # usermod -A +solaris.zone.manage/zoneE carl
        # usermod -A +solaris.zone.manage/zoneF carl

    In the above, alice can only manage zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF. The user alice can also log in on the console of zoneB, but she can't perform the operations on it that require the solaris.zone.manage authorisation.

    If you have a large number of zones and/or admins, or you just want to provide a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins. For example, let's create RBAC profiles for the things that alice and carl can do:

        # profiles -p 'Zone Group 1'
        profiles:Zone Group 1> set desc="Zone Group 1"
        profiles:Zone Group 1> add profile="Zone Management"
        profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
        profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
        profiles:Zone Group 1> commit
        profiles:Zone Group 1> exit
        # profiles -p 'Zone Group 3'
        profiles:Zone Group 3> set desc="Zone Group 3"
        profiles:Zone Group 3> add profile="Zone Management"
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
        profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
        profiles:Zone Group 3> commit
        profiles:Zone Group 3> exit

    Now, instead of granting carl and alice the 'Zone Management' profile and the authorisations directly, we can just give them the appropriate profile:

        # usermod -P +'Zone Group 3' carl
        # usermod -P +'Zone Group 1' alice

    If you want to store the profile data, and the profiles granted to the users, in LDAP, just add '-S ldap' to the profiles and usermod commands. For a documentation overview, see the description of the "admin" resource in zonecfg(1M), profiles(1) and usermod(1M).

  • Ubuntu 12.04 LXC nat prerouting not working

    - by petermolnar
    I have a running Debian Wheezy setup that I copied exactly to an Ubuntu 12.04 machine (elementary OS, used as a desktop as well). While the Debian setup runs flawlessly, the Ubuntu version dies on the prerouting to containers (or so it seems). In short:

        - lxc works
        - containers work and run
        - connecting to a container from the host is OK (including mixed ports & services)
        - connecting to the outside world from a container is fine

    What does not work is connecting from another box to the host on a port that should be NATed to a container. The setup:

    /etc/rc.local:

        CMD_BRCTL=/sbin/brctl
        CMD_IFCONFIG=/sbin/ifconfig
        CMD_IPTABLES=/sbin/iptables
        CMD_ROUTE=/sbin/route
        NETWORK_BRIDGE_DEVICE_NAT=lxc-bridge
        HOST_NETDEVICE=eth0
        PRIVATE_GW_NAT=192.168.42.1
        PRIVATE_NETMASK=255.255.255.0
        PUBLIC_IP=192.168.13.100

        ${CMD_BRCTL} addbr ${NETWORK_BRIDGE_DEVICE_NAT}
        ${CMD_BRCTL} setfd ${NETWORK_BRIDGE_DEVICE_NAT} 0
        ${CMD_IFCONFIG} ${NETWORK_BRIDGE_DEVICE_NAT} ${PRIVATE_GW_NAT} netmask ${PRIVATE_NETMASK} promisc up

    Therefore the lxc network is 192.168.42.0/24, and the host's eth0 IP is 192.168.13.100, set up via Network Manager as a static address.

    iptables:

        *mangle
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT DROP [0:0]
        :OUTPUT ACCEPT [0:0]
        # Accept traffic from internal interfaces
        -A INPUT -i lo -j ACCEPT
        # Accept traffic from the lxc network
        -A INPUT -d 192.168.42.1 -s 192.168.42.0/24 -j ACCEPT
        # Make sure NEW incoming tcp connections are SYN packets; otherwise drop them
        -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
        # Drop packets with incoming fragments; this attack can crash a Linux server and cause data loss
        -A INPUT -f -j DROP
        # Drop incoming malformed XMAS packets
        -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
        # Drop incoming malformed NULL packets
        -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
        # Accept traffic with the ACK flag set
        -A INPUT -p tcp -m tcp --tcp-flags ACK ACK -j ACCEPT
        # Allow incoming data that is part of a connection we established
        -A INPUT -m state --state ESTABLISHED -j ACCEPT
        # Allow data that is related to existing connections
        -A INPUT -m state --state RELATED -j ACCEPT
        # Accept responses to DNS queries
        -A INPUT -p udp -m udp --dport 1024:65535 --sport 53 -j ACCEPT
        # Accept responses to our pings
        -A INPUT -p icmp -m icmp --icmp-type echo-reply -j ACCEPT
        # Accept notifications of unreachable hosts
        -A INPUT -p icmp -m icmp --icmp-type destination-unreachable -j ACCEPT
        # Accept notifications to reduce sending speed
        -A INPUT -p icmp -m icmp --icmp-type source-quench -j ACCEPT
        # Accept notifications of lost packets
        -A INPUT -p icmp -m icmp --icmp-type time-exceeded -j ACCEPT
        # Accept notifications of protocol problems
        -A INPUT -p icmp -m icmp --icmp-type parameter-problem -j ACCEPT
        # Respond to pings, but rate-limit them
        -A INPUT -m icmp -p icmp --icmp-type echo-request -m state --state NEW -m limit --limit 6/s -j ACCEPT
        # Allow connections to the SSH server
        -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m limit --limit 12/s -j ACCEPT
        COMMIT
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 2221 -m state --state NEW -m limit --limit 12/s -j DNAT --to-destination 192.168.42.11:22
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 80 -m state --state NEW -m limit --limit 512/s -j DNAT --to-destination 192.168.42.11:80
        -A PREROUTING -d 192.168.13.100 -p tcp -m tcp --dport 443 -m state --state NEW -m limit --limit 512/s -j DNAT --to-destination 192.168.42.11:443
        -A POSTROUTING -d 192.168.42.0/24 -o eth0 -j SNAT --to-source 192.168.13.100
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT

    sysctl:

        net.ipv4.conf.all.forwarding = 1
        net.ipv4.conf.all.mc_forwarding = 0
        net.ipv4.conf.default.forwarding = 1
        net.ipv4.conf.default.mc_forwarding = 0
        net.ipv4.ip_forward = 1

    I've set up full iptables logging on the container; none of the packets addressed to 192.168.13.100, port 80, ever reach the container. I've even tried different kernels (the server kernel, the raring LTS kernel, etc.) and modprobed everything iptables- and NAT-related; nothing. Any ideas?

  • Understanding the GoldenGate LAG metric

    - by Liu Maclean
    GGSCI provides a LAG command. Replication lag can accumulate at several points in an OGG topology, because several processes hand the data along:

        - the Extract reads the Oracle redo (online redo logfiles) and writes to a trail, or ships it to the remote host
        - a datapump Extract reads the local trail and sends it to the remote host
        - the collector on the remote host writes the incoming data to the local trail there
        - the Replicat reads that trail and applies the changes to the target

    If a Replicat cannot keep up with the apply workload, lag builds; the RANGE function can be used to spread apply work over several REPLICATs in parallel, and MAXTRANSOPS splits very large transactions.

    In GGSCI, lag can be inspected with the INFO, STATUS, SEND and LAG commands, with an important difference:

        - INFO reports lag derived from the process's checkpoints (via the manager's checkpoint information)
        - SEND <OBJECT>, lag (and the LAG <OBJECT> command) asks the process itself for the lag of the last record it processed

    The reported lag is a time difference between when a record was written at the source and when it was processed, not an amount of data in kilobytes. To demonstrate, the EXTRACT is stopped for a while so lag builds up, then restarted and watched while it catches up:

        GGSCI (XIANGBLI-CN) 27> stop load2
        Sending STOP request to EXTRACT LOAD2 ...
        Request processed.

        GGSCI (XIANGBLI-CN) 28> start load2
        Sending START request to MANAGER ...
        EXTRACT LOAD2 starting

        GGSCI (XIANGBLI-CN) 31> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:04:34 (updated 00:00:08 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:21:32  Seqno 44, RBA 13750272
                             SCN 0.1845479 (1845479)

        GGSCI (XIANGBLI-CN) 35> lag load2
        Sending GETLAG request to EXTRACT LOAD2 ...
        Last record lag: 130 seconds.
        At EOF, no more records to process.

        GGSCI (XIANGBLI-CN) 36> info load2
        EXTRACT    LOAD2     Last Started 2012-09-18 20:26   Status RUNNING
        Checkpoint Lag       00:00:00 (updated 00:00:02 ago)
        Log Read Checkpoint  Oracle Redo Logs
                             2012-09-18 20:27:33  Seqno 44, RBA 13817856
                             SCN 0.1845671 (1845671)

    Note the difference between "Last record lag" and "Checkpoint Lag" while an EXTRACT/PUMP/REPLICAT is catching up: after a restart the process may have gigabytes of redo or trail data to work through before it is current again. Also keep in mind that the checkpoint-based lag shown by INFO can be misleading when there are Long Running Transactions (LRTs): checkpoints only advance on COMMIT, so a long open transaction holds the checkpoint, and therefore the reported lag, back even though the process is otherwise current. For REPLICAT, MAXTRANSOPS can help by splitting very large transactions during apply.

  • NServiceBus and NHibernate - Message Handler and Transactions

    - by mattcodes
    From my understanding, NServiceBus executes the Handle method of an IMessageHandler within a transaction; if an exception propagates out of this method, NServiceBus will ensure the message is put back on the message queue (up to X times before it goes to the error queue), so we have an atomic operation, so to speak. Now suppose that inside my NServiceBus Handle method I do something like this:

        using (var trans1 = session.BeginTransaction())
        {
            person.Age = 10;
            session.Update<Person>(person);
            trans1.Commit();
        }

        using (var trans2 = session.BeginTransaction())
        {
            person.Age = 20;
            session.Update<Person>(person);
            // throw new ApplicationException("Oh no");
            trans2.Commit();
        }

    What is the effect of this on the transaction scope? Is trans1 now counted as a nested transaction in terms of its relationship with the NServiceBus transaction, even though we have done nothing to marry them up? (If not, how would one link onto the transaction of NServiceBus?) Looking at the second block (trans2): if I uncomment the throw statement, will the NServiceBus transaction then roll back trans1 as well? In basic scenarios, say I dump the above into a console app, trans1 is independent: committed, flushed, and it won't roll back. I'm trying to clarify what happens now that we sit inside someone else's transaction, like NServiceBus's. The above is just example code; I wouldn't be working directly with the session, but through a UoW pattern.
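
    For what it's worth, whether trans1 and trans2 are tied to NServiceBus's transaction comes down to System.Transactions: a connection opened inside the ambient (typically DTC-backed) transaction that NServiceBus creates enlists in it, and the ambient outcome then wins even over an inner "commit". A minimal sketch of just that ambient-scope mechanic, with no NServiceBus or NHibernate involved (nothing real is enlisted here, so it only illustrates how the votes combine):

        using System;
        using System.Transactions;

        class AmbientTransactionSketch
        {
            static void Main()
            {
                // Stand-in for the transaction NServiceBus wraps around Handle().
                using (var ambient = new TransactionScope())
                {
                    // Required (the default) joins the existing ambient transaction,
                    // so this inner scope is NOT an independent transaction.
                    using (var inner = new TransactionScope(TransactionScopeOption.Required))
                    {
                        // ... work whose connection would enlist in Transaction.Current ...
                        inner.Complete(); // only votes "complete"; nothing commits yet
                    }

                    // No ambient.Complete() here: when the outer scope is disposed,
                    // the whole transaction rolls back, inner scope included.
                }
                Console.WriteLine("outer scope disposed without Complete -> rolled back");
            }
        }

    In the console-app scenario there is no ambient transaction, which is exactly why trans1 commits independently there.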

  • HSQLdb permissions regarding OpenJPA

    - by wishi_
    Hi! I'm (still) having loads of issues with HSQLDB & OpenJPA:

        Exception in thread "main" <openjpa-1.2.0-r422266:683325 fatal store error>
        org.apache.openjpa.persistence.RollbackException: user lacks privilege or object not found:
        OPENJPA_SEQUENCE_TABLE {SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ?}
        [code=-5501, state=42501]
            at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:523)
            at model_layer.EntityManagerHelper.commit(EntityManagerHelper.java:46)
            at HSQLdb_mvn_openJPA_autoTables.App.main(App.java:23)

    The HSQLDB is running as a server process, bound to port 9001 on my local machine. The user is SA. It's configured as follows:

        <persistence-unit name="HSQLdb_mvn_openJPA_autoTablesPU" transaction-type="RESOURCE_LOCAL">
          <provider>
            org.apache.openjpa.persistence.PersistenceProviderImpl
          </provider>
          <class>model_layer.Testobjekt</class>
          <class>model_layer.AbstractTestobjekt</class>
          <properties>
            <property name="openjpa.ConnectionUserName" value="SA" />
            <property name="openjpa.ConnectionPassword" value=""/>
            <property name="openjpa.ConnectionDriverName" value="org.hsqldb.jdbc.JDBCDriver" />
            <property name="openjpa.ConnectionURL" value="jdbc:hsqldb:hsql://localhost:9001/mydb" />
            <!--
            <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)" />
            -->
          </properties>
        </persistence-unit>

    I have made a successful connection with my ORM layer. I can create and connect to my EntityManager. However, each time I use EntityManagerHelper.commit() it fails with that error, which makes no sense to me. SA is the standard admin user I used to create the table; it should be able to persist into HSQLDB as this user.

  • Custom Installer class, Rollback method never called

    - by yossi1981
    Hi guys. I have an installer class; here is a snippet:

        [RunInstaller(true)]
        public partial class ServerWrapInstaller : Installer
        {
            public override void Install(IDictionary stateSaver)
            {
                EventLog.WriteEntry("Installer", "Install", EventLogEntryType.Information);
                base.Install(stateSaver);
            }
            public override void Commit(IDictionary savedState)
            {
                EventLog.WriteEntry("Installer", "Commit", EventLogEntryType.Information);
                base.Commit(savedState);
            }
            public override void Rollback(IDictionary savedState)
            {
                EventLog.WriteEntry("Installer", "Rollback", EventLogEntryType.Information);
                base.Rollback(savedState);
            }
            public override void Uninstall(IDictionary savedState)
            {
                EventLog.WriteEntry("Installer", "UnInstall", EventLogEntryType.Information);
                base.Uninstall(savedState);
            }
        }

    Now I start the installation in full GUI mode and then click the "Cancel" button in the middle of the process, causing the installation to roll back. The problem is that the Rollback method is not called: I don't see the expected entry in the event log. I want to mention that if I let the installation complete, I do see the "Install" message in the event log, and if I then uninstall, I see the "UnInstall" message. But if I stop the installation process in the middle by pressing the "Cancel" button, I see the progress bar going backward, yet the Rollback method is not called. What am I doing wrong? Thanks in advance for any help.

    Edit: providing more details... The installer is an MSI package. The package is built in vs2009 using a setup project. The installer class is used as a custom action by the setup project. Since this is an MSI package, I have the option to run it in silent mode or in user-interactive mode. When I wrote "full GUI mode", I meant user-interactive mode.
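
    One thing worth checking (I can't see your setup project, so this is a guess at the common cause): in a Visual Studio setup project, the Rollback method of an Installer class only runs if the custom action is also added under the Rollback node of the Custom Actions editor, not just under Install. For reference, a sketch of the usual pattern, where Install records what it did in stateSaver so Rollback can undo it; the file path here is invented for illustration:

        using System.Collections;
        using System.ComponentModel;
        using System.Configuration.Install;

        [RunInstaller(true)]
        public class SketchInstaller : Installer
        {
            public override void Install(IDictionary stateSaver)
            {
                base.Install(stateSaver);
                // Record what we did so Rollback/Uninstall know what to undo.
                stateSaver["createdFile"] = @"C:\temp\provisioned.txt";
                System.IO.File.WriteAllText((string)stateSaver["createdFile"], "installed");
            }

            public override void Rollback(IDictionary savedState)
            {
                base.Rollback(savedState);
                // Runs only when this action is scheduled for the Rollback phase.
                var path = savedState["createdFile"] as string;
                if (path != null && System.IO.File.Exists(path))
                    System.IO.File.Delete(path);
            }
        }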

  • In Mercurial, what are the exact steps that Peter or I have to do so that he gets back the rolled-back version?

    - by Jian Lin
    The short question is: if I hg rollback, how does Peter get my rolled-back version if he cloned from me? What are the exact steps he or I have to do or type? This is related to http://stackoverflow.com/questions/3034793/in-mercurial-when-peter-hg-clone-me-and-i-commit-and-he-pull-and-update-he-g

    The details: after the following steps, Mary has 7 and Peter has 11. My repository is at 7. What are the exact steps Peter or I have to do or type so that Peter gets 7 back?

        F:\>mkdir hgme
        F:\>cd hgme
        F:\hgme>hg init
        F:\hgme>echo the code is 7 > code.txt
        F:\hgme>hg add code.txt
        F:\hgme>hg commit -m "this is version 1"
        F:\hgme>cd ..
        F:\>hg clone hgme hgpeter
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgpeter
        F:\hgpeter>type code.txt
        the code is 7
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>notepad code.txt
        [now I change 7 to 11]
        F:\hgme>hg commit -m "this is version 2"
        F:\hgme>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        (run 'hg update' to get a working copy)
        F:\hgpeter>hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>hg rollback
        rolling back last transaction
        F:\hgme>cd ..
        F:\>hg clone hgme hgmary
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgmary
        F:\hgmary>type code.txt
        the code is 7
        F:\hgmary>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        no changes found
        F:\hgpeter>hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>

  • Boto - How to delete a record set from Route53 - "Tried to delete resource record set but it was not found"

    - by Tampa
    I am using the following to delete Route53 records. I get no error messages.

        from boto.route53.connection import Route53Connection
        from boto.route53.record import ResourceRecordSets

        conn = Route53Connection(aws_access_key_id, aws_secret_access_key)
        changes = ResourceRecordSets(conn, zone_id)
        change = changes.add_change("DELETE", sub_domain, "A", 60, weight=weight, identifier=identifier)
        change.add_value(ip_old)
        changes.commit()

    All required fields are present and they match: weight, identifier, ttl=60, etc. E.g.:

        test.com. A 111.111.111.111 60 1 id1
        test.com. A 111.111.111.222 60 1 id2

    I want to delete 111.111.111.222 and its record set. So, what is the proper way to delete a record set? For a record set, I have multiple values that are distinguished by a unique identifier. When an IP address becomes inactive I want to remove it from Route53; I am using it for a poor man's load balancing. Here is the metadata of the record I want to delete:

        {'alias_dns_name': None,
         'alias_hosted_zone_id': None,
         'identifier': u'15754-1',
         'name': u'hui.com.',
         'resource_records': [u'103.4.xxx.xxx'],
         'ttl': u'60',
         'type': u'A',
         'weight': u'1'}

        Traceback (most recent call last):
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 353, in <module>
            deleteRedisSubDomains(aws_access_key_id, aws_secret_access_key, platform=platform, sub_domain=sub_domain, redis_domain=redis_domain, zone_id=zone_id, ip_address=ip_address, weight=1, identifier=identifier)
          File "/home/ubuntu/workspace/rtbopsConfig/classes/redis_ha.py", line 341, in deleteRedisSubDomains
            changes.commit()
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/record.py", line 131, in commit
            return self.connection.change_rrsets(self.hosted_zone_id, self.to_xml())
          File "/usr/local/lib/python2.7/dist-packages/boto-2.3.0-py2.7.egg/boto/route53/connection.py", line 291, in change_rrsets
            body)
        boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
        <?xml version="1.0"?>
        <ErrorResponse xmlns="https://route53.amazonaws.com/doc/2011-05-05/"><Error><Type>Sender</Type><Code>InvalidChangeBatch</Code><Message>Tried to delete resource record set hui.com., type A, SetIdentifier 15754-1 but it was not found</Message></Error><RequestId>9972af89-cb69-11e1-803b-7bde5b9c457d</RequestId></ErrorResponse>

    Thanks

  • git push heroku master gives error ssh: connect to host heroku.com port 22: Connection refused

    - by user1476508
    I'm trying to run the Heroku Django tutorial (using Ubuntu 12.04) and it seems that for some reason I can't push to Heroku. Here is what happens:

        yeinhorn@ubuntu:~/hellodjango$ git init
        Reinitialized existing Git repository in /home/yeinhorn/hellodjango/.git/
        yeinhorn@ubuntu:~/hellodjango$ git add .
        yeinhorn@ubuntu:~/hellodjango$ git commit -m "my first commit"
        On branch master
        nothing to commit (working directory clean)
        yeinhorn@ubuntu:~/hellodjango$ heroku create
        Creating high-dusk-6308... done, stack is cedar
        http://high-dusk-6308.herokuapp.com/ | git@heroku.com:high-dusk-6308.git
        ! New default stack: Cedar. To use Bamboo, run heroku create -s bamboo.
        yeinhorn@ubuntu:~/hellodjango$ git remote -v
        heroku  git@heroku.com:blazing-dusk-8587.git (fetch)
        heroku  git@heroku.com:blazing-dusk-8587.git (push)
        yeinhorn@ubuntu:~/hellodjango$ git push heroku master
        ssh: connect to host heroku.com port 22: Connection refused
        fatal: The remote end hung up unexpectedly
        yeinhorn@ubuntu:~/hellodjango$ git push -f heroku
        ssh: connect to host heroku.com port 22: Connection refused
        fatal: The remote end hung up unexpectedly

    Also, when I run $ telnet heroku.com 22 I get:

        Trying 50.19.85.132...
        Trying 50.19.85.154...
        Trying 50.19.85.156...
        telnet: Unable to connect to remote host: Connection refused

    Any ideas?

  • Question about 'git branching'

    - by michael
    Hi, I read this about git branching: http://book.git-scm.com/3_basic_branching_and_merging.html and followed it, creating one branch, experimental. Then I:

        1. switch to the experimental branch (git checkout experimental)
        2. make a bunch of changes
        3. commit them (git commit -a)
        4. switch to the master branch (git checkout master)
        5. make some changes and commit there
        6. switch back to experimental (git checkout experimental)
        7. merge the master changes into experimental (git merge master)
        8. there are some conflicts, but after I resolve them, I do 'git add myfile'

    And now I am stuck; I can't move back to master. When I do

        $ git checkout master
        error: Entry 'res/layout/my_item.xml' would be overwritten by merge. Cannot merge.

    I tried:

        $ git rebase --abort
        No rebase in progress?

    and:

        $ git add res/layout/socialhub_list_item.xml
        $ git checkout master
        error: Entry 'res/layout/my_item.xml' would be overwritten by merge. Cannot merge.

    What can I do so that I can go back to my master branch? Thank you.

  • Pushing to bare Git repository (remote) causes it to stop being bare

    - by NSD
    I have a local repository called TestRepo. I clone it with the --bare option, zip this clone up, and throw it on my server. Unzip it, and it's still bare. I then clone the bare remote repository locally over ssh with something like:

        git clone ssh://[email protected]/~/TestRepo.git TestRepoCloned

    The local TestRepoCloned is not bare and has a remote called "origin". It appears to be tracking correctly, from the looks of its config file:

        [core]
            repositoryformatversion = 0
            filemode = true
            bare = false
            logallrefupdates = true
            ignorecase = true
        [remote "origin"]
            fetch = +refs/heads/*:refs/remotes/origin/*
            url = ssh://[email protected]/~/TestRepo.git
        [branch "master"]
            remote = origin
            merge = refs/heads/master

    I edit an existing file. I commit the change to the current branch (master) via:

        git commit -a -m "Edited a file."

    The commit succeeds and all is well. I decide to push this change to the remote repository via SSH with a git push. The remote repository is now no longer bare; it has a complete working directory, and I get continuous error messages on all further attempts to push to it. Everything I've read seems to suggest that what I'm doing is correct, but it simply is not working. How am I supposed to push changes to a bare remote repo and actually keep it bare?

  • Distributed development systems

    - by Nathan Adams
    I am interested in a system that allows for distributed development with an authentication piece. What do I mean by that? OK, let's take SVN: SVN keeps track of revisions and doesn't care who submits; as long as you have the right to submit, you can submit to, really, any part of the repository. Where does my system come into play? Being able to granulate access control and give a Stack Overflow-like feel to the environment.

    In the system I am describing we have four users: Bob, Alice, Dan and Joe. Bob is a project manager, Alice and Dan are programmers under Bob, and Joe is a random programmer on the internet who wants to help. Ideally in this system, Bob can commit any changes and won't require approval. Alice and Dan can commit to their branches, or a branch, but a commit to the trunk would need approval by Bob. This is where Joe comes in: he wants to help, but you just don't want to give him the keys to the kingdom yet, so to speak, so in my system you would set up a "low user" account. Any commits that Joe makes would need to be approved by Dan, Alice or both. However, Joe can build up "karma": after so many approved commits, his commits would only need approval by one of the programmers, and eventually no approval would be necessary (see the toy sketch below).

    Does that make sense, and do you know if a system like that exists? Or am I just crazy to even think such a system/environment would be possible?
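
    A toy sketch of the karma rule just described; the thresholds and names are invented for illustration and aren't taken from any existing tool:

        // Hypothetical approval policy: the more approved commits a
        // contributor has, the fewer reviewers the next commit needs.
        public static class CommitPolicy
        {
            public static int RequiredApprovals(bool isProjectManager, int approvedCommits)
            {
                if (isProjectManager) return 0;      // Bob commits directly
                if (approvedCommits >= 20) return 0; // karma earned: no review needed
                if (approvedCommits >= 5) return 1;  // one approval suffices
                return 2;                            // newcomers like Joe need two
            }
        }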

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know whether a document mentions the words "rating" or "upgraded" within three words of "buy". The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3 this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        # (note that the parameter has to be a 1-tuple)
        c.execute('insert into fts(content) values (?)', (content,))
        conn.commit()
        # execute my FTS query here, look at the results, etc.
        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. That means I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.

  • Adding changes from one Mercurial repository to another

    - by Patrik Hägne
    When changing the VCS for my project FakeItEasy from SVN to Mercurial on Google Code, I was a bit too eager (I'm funny like that). What I did was just check the latest version out of SVN and then commit that checkout as the first revision of the new Mercurial repo. This obviously has the effect that all history is lost.

    Later, when getting a bit better accustomed to Mercurial, I realized that there is such a thing as a "convert extension" that allows you to convert an SVN repo into a Mercurial repo. Now what I want to do is convert the old SVN repo and then have all changesets from the currently existing Mercurial repo imported into this converted repo, except the very first commit to Mercurial.

    I've converted the SVN repo to a local Mercurial repo, but now is when I'm stuck. I thought I'd be able to use the convert extension to bring the current Mercurial repository into the converted one, using a splice map to remove the first commit, but I cannot seem to get this to work. I've also tried to just use convert without a splice map to get all changesets from the current Mercurial repo into the converted one, and then rebase from the second revision in the current repo onto the last commit from the old SVN repository, but I can't get that to work either.

    To make this clearer, let's say I have these two repositories:

        A: revA1-revA2
        B: revB1-revB2-revB3  (where revB1 is actually a copy of revA2)

    Now I want to combine these two into a new repository containing this:

        C: revA1-revA2-revB2-revB3

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:

        git checkout master
        mvn -B release:prepare release:perform

    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository, as well as the next commit that bumps the version to the next SNAPSHOT.

    However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin), it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated.

    After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 id. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does

        git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861

    instead of

        git checkout -f master

    As a result, the changes the Maven release plugin makes are not actually on any branch (certainly not on "master"), and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise the changes seem to get lost because no branch label points to them.

    Has anybody gotten the Hudson + Git + Maven Release Plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin? Thanks in advance.

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but that has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing without being noticed, I've been digging through the logs trying to find in what commit this handful of lines was removed. I've tried a number of things, including git log -S'add-visit-resource-pcf', git blame, and even git bisect with a script that simply checks for the existence of the line, but I have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well.

        commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
        Author: Andrew
        Date:   Thu May 13 10:55:32 2010 -0400

            Re-introduced add-visit-resource-pcf, see PR-65034.

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index f8e692d..a6f8d38 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -115,6 +115,7 @@
             #:add-to-current-resource-pcf
             #:add-user-package-nickname
             #:add-value-criteria
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

        commit 9fb10e25572c537076284a248be1fbf757c1a6e1
        Author: Bob
        Date:   Sun Jan 17 18:35:16 2010 -0500

            update-defpackage for Spike 33.1 Delivery

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index 983666d..47f1a9a 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -118,6 +118,7 @@
             #:add-user-package-nickname
             #:add-value-criteria
             #:add-vars-from-proposal
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

    This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.

  • How can I set up Hudson to use the same repository for different projects and maintain separate changelogs?

    - by Allen
    I typically set up SVN to host one big project per repository, but a lot of our infrastructure has changed and we now have one main SVN server with a hierarchy like so:

        Branches
        Tags
        Trunk
            Project1 files & folders
            Project2 files & folders
            Project3 files & folders

    Projects 1, 2 and 3 do not share anything among themselves; they are independent projects, each with its own solution file to be built. I can set up projects in Hudson like so:

        Repository Url: http://server/svn/MainRepository
        Local module directory (optional): /Trunk/Project1

    That will maintain a separate workspace for each project, but every time you commit to Project2 or Project3, a build gets kicked off in Hudson for every project based in that repository. Also, any commit made anywhere in the repository is pulled down and inserted into the Hudson changelog for all of them. I know the easiest solution would be to simply separate every project into its own repository. However, if I can't do that for various reasons, is there a feasible way to achieve the functionality that separate repositories would give me? I want commits to the subfolder of Project1 to only affect Project1: no other project's commits should cause Project1 to build, and Project1's changelog in Hudson should only have commit notes from Project1.

  • git-svn question about creating local branches

    - by leeed25d
    Is there a way to create a local branch, or modify an existing local branch, in such a way that it cannot be dcommitted to the SVN repo? Here's a description of the scenario:

        git checkout -b local.farBranch remotes/farBranch
        git checkout -b patched.local.farBranch
        git merge local.patches
        <work on patched branch && test>
        <do not commit onto patched.local.farBranch>
        git checkout local.farBranch
        git commit -am "some changes"
        git rebase local.farBranch patched.local.farBranch
        <another work & test cycle>
        git checkout local.farBranch
        git commit -am "last changes"
        git svn dcommit

    Now, I never want to dcommit patched.local.farBranch (which is tracking remotes/farBranch), because that would put my local patches into the SVN repository. This is not a fatal problem, but it is a pain in the keester, because the patch has to be removed when the SVN farBranch is eventually (SVN) merged onto the trunk. So what I am looking for is a way to prevent this:

        git checkout patched.local.farBranch
        git svn dcommit    <<== ERROR

        git checkout local.farBranch
        git svn dcommit    <<== OK

  • Git graph with ref logs

    - by Francisco Garcia
    I am trying to improve my custom git log format string. I have almost everything I want except the ref names. I can already get a log similar to what I want:

        > git log --all --source --pretty=oneline --graph
        * b7c7ad3855b54e94ad7ac03f2d2e5b96d6e5ac1d refs/heads/b1 na
        | * 695e1482622a79230fa1d83afb8d70e86847334a refs/heads/master Merge branch 'b1'
        | |\
        | |/
        |/|
        * | ec21f370f82096c0208f43b390da234d92e8c74a refs/heads/b1 beta
        * | c6bc1f55ab3b1bd568493a5de4298dfcb4f66d8d refs/heads/b1 alfa
        * | 762dd868ae87753afc1cbf9803744c76f9a9e121 refs/heads/b1 tango
        | * 57fb27bff06ee9bb569f93ba815e9dcd69521c13 refs/heads/master little last post commit
        |/
        | * 8d613d09b43152a7263b6e02d47ec8a4304f54be refs/heads/b3 the other commit
        | * e1f32b7cb86633351df06e37c2c58ef3f9fafc40 refs/heads/b3 something
        |/
        | * 01b5c6728cf25dd576733211ce75dd3ecc29c7ba refs/heads/b2 this time a

    I am fighting to get a customized output with my own format string like this:

        > git log --pretty=format:'%h - %gD %s' --source -g
        b7c7ad3 - HEAD@{0} na
        ec21f37 - HEAD@{1} beta
        01b5c67 - HEAD@{2} this time a
        01b5c67 - HEAD@{3} this time a
        695e148 - HEAD@{4} Merge branch 'b1'
        57fb27b - HEAD@{5} little last post commit

    My main problem is that I cannot get the ref names I want. I assume it is one of the %g? format strings, but none of them seem to give me the full ref name. Another problem is that the %g? format strings are empty unless I walk the reflogs (-g); however, git refuses to combine --graph with -g. How can I reproduce the first sample with a format string that I can further customize?

  • committing to a branch that's not checked out

    - by intuited
    I'm using git to version my home directories on a couple different machines. I'd like for them to each use separate branches and both pull from a common branch. So most commits should be made to that common branch, unless something specific to that machine is being committed, in which case the commit should go to the checked out, machine-specific branch. Switching branches is clearly not a very good option in this case. It's mentioned in this post that what I want to do is impossible, but I found that answer to be rather blunt and to perhaps not take into account the possibility of using the plumbing commands. Unfortunately I don't have enough reputation to comment on that thread. I rather suspect that there is some way to do this and am hoping to save myself an hour or few of questing for the answer by just asking you good folk. So is it possible to commit to a different branch without checking that branch out first? Ideally I'd like to use the index in the same way that git commit normally does.

  • DBTransactions between stateless calls using GUIDs

    - by Marty Trenouth
    I'm looking to add transactional support to my DB engine, providing abstract transaction handling by passing a GUID in with each DB action command. The DB engine would run similar to:

        private static Database DB;
        public static Dictionary<Guid, DBTransaction> Transactions = new ...();

        public static void DoDBAction(string cmdstring, List<Parameter> parameters, Guid TransactionGuid)
        {
            DBCommand cmd = BuildCommand(cmdstring, parameters);
            if (Transactions.ContainsKey(TransactionGuid))
                cmd.Transaction = Transactions[TransactionGuid];
            DB.ExecuteScalar(cmd);
        }

        public static BuildCommand(string cmd, List<Parameter> parameters)
        {
            // Create DB command from EntLib Database and assign parameters
        }

        public static Guid BeginTransaction()
        {
            // creates a new Transaction, adds it to "Transactions", and opens a new connection
        }

        public static Guid Commit(Guid g)
        {
            // Commits the Transaction, removes it from "Transactions" and closes the connection
        }

        public static Guid Rollback(Guid g)
        {
            // Rolls back the Transaction, removes it from "Transactions" and closes the connection
        }

    The calling system would run similar to:

        Guid g;
        try
        {
            g = DBEngine.BeginTransaction();
            DBEngine.DoDBAction(cmdstring1, parameters, g);
            // do some other stuff
            DBEngine.DoDBAction(cmdstring2, parameters2, g);
            // sit here and wait for a response from another item
            DBEngine.DoDBAction(cmdstring3, parameters3, g);
            DBEngine.Commit(g);
        }
        catch (Exception) { DBEngine.Rollback(g); }

    Does this interfere with .NET connection pooling (other than a connection accidentally being left open)? Will EntLib keep the connection open until the commit or rollback?
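
    As a point of comparison, here is a minimal self-contained version of the registry being described, using plain SqlClient instead of EntLib (the names are invented, and locking and error handling are left out). The pooling-relevant point is visible in the code: each pending GUID pins one open connection, and that connection only returns to the pool when Commit or Rollback closes it, so long-lived transactions shrink the effective pool:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public static class TxRegistry
        {
            private static readonly Dictionary<Guid, SqlTransaction> Transactions =
                new Dictionary<Guid, SqlTransaction>();

            public static Guid Begin(string connectionString)
            {
                var conn = new SqlConnection(connectionString);
                conn.Open(); // stays checked out of the pool until Commit/Rollback
                var guid = Guid.NewGuid();
                Transactions[guid] = conn.BeginTransaction();
                return guid;
            }

            public static void Execute(string sql, Guid guid)
            {
                SqlTransaction tx = Transactions[guid];
                using (var cmd = new SqlCommand(sql, tx.Connection, tx))
                    cmd.ExecuteNonQuery();
            }

            public static void Commit(Guid guid)
            {
                SqlTransaction tx = Transactions[guid];
                Transactions.Remove(guid);
                var conn = tx.Connection; // grab it first; Commit() nulls the property
                tx.Commit();
                conn.Close();             // now the connection goes back to the pool
            }

            public static void Rollback(Guid guid)
            {
                SqlTransaction tx = Transactions[guid];
                Transactions.Remove(guid);
                var conn = tx.Connection;
                tx.Rollback();
                conn.Close();
            }
        }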

  • Access DB Transaction Insert limit

    - by user986363
    Is there a limit to the number of inserts you can do within an Access transaction before you need to commit, or before Access/Jet throws an error? I'm currently running the following code in hopes of determining what this maximum is.

        OleDbConnection cn = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\temp\myAccessFile.accdb;Persist Security Info=False;");
        OleDbCommand oleCommand = null;
        int i;
        try
        {
            cn.Open();
            oleCommand = new OleDbCommand("BEGIN TRANSACTION", cn);
            oleCommand.ExecuteNonQuery();
            oleCommand.CommandText =
                "insert into [table1] (name) values ('1000000000001000000000000010000000000000')";
            for (i = 0; i < 25000000; i++)
            {
                oleCommand.ExecuteNonQuery();
            }
            oleCommand.CommandText = "COMMIT";
            oleCommand.ExecuteNonQuery();
        }
        catch (Exception ex) { }
        finally
        {
            try
            {
                oleCommand.CommandText = "COMMIT";
                oleCommand.ExecuteNonQuery();
            }
            catch { }
            if (cn.State != ConnectionState.Closed)
            {
                cn.Close();
            }
        }

    The error I received on a production application when I reached 2,333,920 inserts in a single uncommitted transaction was: "File sharing lock count exceeded. Increase MaxLocksPerFile registry entry". Disabling transactions fixed this problem.
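
    Since the error is a lock-count limit, one workaround (short of raising the MaxLocksPerFile registry value, whose Jet default is 9,500) is to commit in batches so the number of outstanding locks stays bounded. A sketch using an explicit OleDbTransaction; the batch size is a made-up starting point, not a measured optimum:

        using System.Data.OleDb;

        static class BatchedInsert
        {
            // Commit every batchSize inserts so the count of uncommitted
            // row locks never approaches MaxLocksPerFile.
            public static void Run(OleDbConnection cn, int total, int batchSize = 5000)
            {
                OleDbTransaction tx = cn.BeginTransaction();
                var cmd = new OleDbCommand(
                    "insert into [table1] (name) values ('1000000000001000000000000010000000000000')",
                    cn, tx);
                for (int i = 0; i < total; i++)
                {
                    cmd.ExecuteNonQuery();
                    if ((i + 1) % batchSize == 0)
                    {
                        tx.Commit();               // releases this batch's locks
                        tx = cn.BeginTransaction();
                        cmd.Transaction = tx;      // re-attach to the new transaction
                    }
                }
                tx.Commit();
                cmd.Dispose();
            }
        }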
