Search Results

Search found 3479 results on 140 pages for 'sequence diagram'.

Page 105/140

  • Where’s my MD.050?

    - by Dave Burke
    A question that I’m sometimes asked is “where’s my MD.050 in OUM?” For those not familiar with an MD.050, it serves the purpose of being a Functional Design Document (FDD) in one of Oracle’s legacy Methods. Functional Design Documents have existed for many years, with their primary purpose being to describe the functional aspects of one or more components of an IT system, typically a Custom Extension of some sort.

    So why don’t we have a direct replacement for the MD.050/FDD in OUM? In simple terms, the disadvantage of the MD.050/FDD approach is that it tends to lead practitioners into “Design mode” too early in the process, whereas OUM encourages more emphasis on gathering and describing the functional requirements of a system ahead of the formal Analysis and Design process. So that just means more work up front for the Business Analyst or Functional Consultants, right? Well, no... the design of a solution, particularly when it involves a complex custom extension, does not necessarily take longer just because you put more thought into the functional requirements. In fact, one could argue the complete opposite: by putting more emphasis on clearly understanding the nuances of the functional requirements early in the process, the overall time and cost incurred during the Analysis to Design process should be less. In short, as your understanding of requirements matures over time, it is far easier (and more cost effective) to update a document or a diagram than to change lines of code.

    So how does that translate into Tasks and Work Products in OUM? Let us assume you have reached a point on a project where a Custom Extension is needed. One of the first things you should consider doing is creating a Use Case, and remember, a Use Case could be as simple as a few lines of text reflecting a “User Story”, or it could be what Cockburn [1] describes as a “fully dressed Use Case”. It is worth mentioning at this point the highly scalable nature of OUM, in the sense that “documents” should not be produced just because that is the way we have always done things. Some projects may well be predicated upon a base of electronic documents, whilst other projects may take a much more Agile approach to describing functional requirements; through “User Stories” perhaps.

    In any event, it is quite common for a Custom Extension to involve the creation of several “components”, i.e. some new screens, an interface, a report etc. Therefore several Use Cases might be required, which in turn can then be assembled into a Use Case Package. Once you have the Use Cases attributed to an appropriate (fit-for-purpose) level of detail, and assembled into a Package, you can now create an Analysis Model for the Package. An Analysis Model is conceptual in nature and, depending on the solution being developed, would involve the creation of one or more diagrams (i.e. Sequence Diagrams, Collaboration Diagrams etc.) which collectively describe the Data, Behavior and User Interface requirements of the solution. If required, the various elements of the Analysis Model may be indexed via an Analysis Specification. For Custom Extension projects that follow a pure Object Oriented approach, the Analysis Model will naturally support the development of the Design Model without any further artifacts. However, for projects that are transitioning to this approach, the various elements of the Analysis Model may be represented within the Analysis Specification.

    That brings us back to the original question: “Where’s my MD.050?”
    The full answer would be:
    - Capture the functional requirements within a Use Case
    - Group related Use Cases into a Package
    - Create an Analysis Model for each Package
    - Consider creating an Analysis Specification (AN.100) as an index to each Analysis Model artifact

    An alternative answer for a relatively simple Custom Extension would be:
    - Capture the functional requirements within a Use Case
    - Optionally, group related Use Cases into a Package
    - Create an Analysis Specification (AN.100) for each Package

    [1] Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley Professional, 1st Edition.

    Read the article

  • fsck on LVM snapshots

    - by Alpha01
    I'm trying to do some file system checks using LVM snapshots of our Logical Volumes to see if any of them have dirty file systems. The problem that I have is that our LVM only has one Volume Group with no available space. I was able to run fsck on some of the logical volumes using a loopback file system. However, my question is: is it possible to create a 200GB loopback file system and save it on the same partition/logical volume that I'll be taking a snapshot of? Is LVM smart enough to not take a snapshot copy of the actual snapshot?

        [root@server z]# vgdisplay
          --- Volume group ---
          VG Name               Web2-Vol
          System ID
          Format                lvm2
          Metadata Areas        1
          Metadata Sequence No  29
          VG Access             read/write
          VG Status             resizable
          MAX LV                0
          Cur LV                6
          Open LV               6
          Max PV                0
          Cur PV                1
          Act PV                1
          VG Size               544.73 GB
          PE Size               4.00 MB
          Total PE              139450
          Alloc PE / Size       139450 / 544.73 GB
          Free  PE / Size       0 / 0
          VG UUID               BrVwNz-h1IO-ZETA-MeIf-1yq7-fHpn-fwMTcV

        [root@server z]# df -h
        Filesystem                             Size  Used Avail Use% Mounted on
        /dev/sda2                              9.7G  3.6G  5.6G  40% /
        /dev/sda1                              251M   29M  210M  12% /boot
        /dev/mapper/Web2--Vol-var               12G  1.1G   11G  10% /var
        /dev/mapper/Web2--Vol-var--spool        12G  184M   12G   2% /var/spool
        /dev/mapper/Web2--Vol-var--lib--mysql   30G   15G   14G  52% /var/lib/mysql
        /dev/mapper/Web2--Vol-usr               13G  3.3G  8.9G  27% /usr
        /dev/mapper/Web2--Vol-z                468G  197G  267G  43% /z
        /dev/mapper/Web2--Vol-tmp              3.0G   76M  2.8G   3% /tmp
        tmpfs                                  7.9G   92K  7.9G   1% /dev/shm

    The logical volume in question is /dev/mapper/Web2--Vol-z. I'm afraid that if I create the loopback file system in /dev/mapper/Web2--Vol-z and take a snapshot of it, the disk usage will be tripled, thus running out of available disk space.

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with Open Office: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover" it will do so and bring the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK" it would apparently do this all day. I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)

    How to reproduce:
    1. Start Open Office Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager, kill Open Office (soffice.exe/bin -- one of each).
    4. Double-click the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur -- allow it.
    6. When the Calc window comes up, don't touch it -- just wait about 10 seconds.
    7. The Calc window closes and OO puts up a message that recovery will occur -- from now on the sequence will repeat.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as Open Office Bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.

    Read the article

  • Error when trying to deploy Windows XP SP3 with WDS

    - by Nic Young
    I have created a WDS server running Windows Server 2008 R2. I have built my custom images of Windows 7 using WAIK and MDT 2010 that are installed on the server. I used this guide to help me through the process. The Windows 7 images that I have created capture and deploy properly. I am attempting to follow the same steps from the guide I linked to capture and deploy a Windows XP SP3 image. I am able to sysprep and capture the reference machine with no errors. I am then able to import the custom .wim that I just captured in to MDT 2010 with no issues either. However when I try to deploy this image to a test virtual machine I get the following error: Deployment Error: I have made sure that the .iso that I am importing the source files from originally to create the sysprep and capture sequence is indeed a Windows XP SP3 iso. When I first select a PE boot environment before I deploy I select the x86 PE boot image that I created originally when making this for my Windows 7 deployments. Could this be the issue? If so how do I make a boot image specific for Windows XP SP3 deployments? I have Googled around for this error and some places point to the deployment image not being able to find setup.exe and other important system files for installing the operating system. If so, how do I add these to the image? Any ideas?

    Read the article

  • Recursive move utility on Unix?

    - by Thomas Vander Stichele
    Sometimes I have two trees that used to have the same content, but have grown out of sync (because I moved disks around or whatever). A good example is a tree where I mirror upstream packages from Fedora. I want to merge those two trees again by moving all of the files from tree1 into tree2. Usually I do this with:

        rsync -arv tree1/* tree2

    Then delete tree1. However, this takes an awful lot of time and disk space, and it would be much easier to be able to do:

        mv -r tree1/* tree2

    In other words, a recursive move. It would be faster because, first of all, it would not even copy, just move the inodes, and second, I wouldn't need a delete at the end. Does this exist? As a test case, consider the following sequence of commands:

        $ mkdir -p a/b
        $ touch a/b/c1
        $ rsync -arv a/ a2
        sending incremental file list
        created directory ./
        b/
        b/c1
        b/c2
        sent 173 bytes  received 57 bytes  460.00 bytes/sec
        total size is 0  speedup is 0.00
        $ touch a/b/c2

    What command would now have the effect of moving a/b/c2 to a2/b/c2 and then deleting the a subtree (since everything in it is already in the destination tree)?
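
    There is no standard recursive-move command, but because both trees sit on the same filesystem the merge can be done with renames instead of copies. The following is a minimal Python sketch of that idea (my own illustration, not an existing utility); it assumes source and destination are on the same filesystem and that, like mv, an existing destination file may be overwritten.

        #!/usr/bin/env python3
        # Merge src into dst by renaming files (no data copy on the same
        # filesystem), then prune the emptied source directories.
        import os
        import sys

        def merge_move(src, dst):
            for root, dirs, files in os.walk(src, topdown=False):
                rel = os.path.relpath(root, src)
                target = dst if rel == "." else os.path.join(dst, rel)
                os.makedirs(target, exist_ok=True)
                for name in files:
                    # os.rename moves the inode; it fails across filesystems
                    os.rename(os.path.join(root, name), os.path.join(target, name))
                os.rmdir(root)  # bottom-up walk, so root is empty by now

        if __name__ == "__main__":
            merge_move(sys.argv[1], sys.argv[2])

    On the shell side, rsync with --remove-source-files followed by deleting the emptied directories achieves the same end result, though it still copies file data rather than renaming inodes.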

    Read the article

  • Sparc V440 unable to boot after recommended patch install

    - by user100660
    After installing the October 2011 recommended patch bundle on a Solaris 10 host, it fails to boot. The output is:

        {0} ok boot
        SC Alert: Host System has Reset
        screen not found.
        keyboard not found.
        Keyboard not present. Using ttya for input and output.
        Sun Fire V440, No Keyboard
        Copyright 1998-2003 Sun Microsystems, Inc. All rights reserved.
        OpenBoot 4.10.10, 8192 MB memory installed, Serial #54744555.
        Ethernet address 0:3:ba:43:55:eb, Host ID: 834355eb.
        Rebooting with command: boot
        Boot device: /pci@1f,700000/scsi@2/disk@0,0:a  File and args:
        \
        Evaluating:
        Out of memory
        Warning: Fcode sequence resulted in a net stack depth change of 1
        Evaluating:
        Evaluating: The file just loaded does not appear to be executable.
        {3} ok

    If I do a boot -F failsafe the host comes up and I'm able to mount the root device (ufs on /dev/dsk/c1t0d0s0), and nothing appears broken, i.e. I can see the logfiles from the patch install etc. The root device still has 1GB+ free. Only 2 kernel patches were installed from the patch bundle: 144500-19 & 147440-02. Any hints on how to debug this further?

    Read the article

  • How to bypass resume from hibernate

    - by Daniel Trebbien
    I am attempting to resume a Windows Vista laptop from hibernate, but the resume process seems to be stuck in an endless loop in which Windows is repeatedly trying to read from the optical drive. When I press the Power On button on the laptop, the screen is black (not even the backlight turns on) and the following occurs in a loop: Five seconds pass and I hear the optical drive being accessed. (There's no disk in the drive, so it sounds like a short buzzing noise.) Two seconds pass and I hear the optical drive being accessed. Two seconds pass and I hear the optical drive being accessed. So it's three short buzzing noises in a row, over and over again. Eventually I have to abruptly power off the machine. I have tried inserting a data CD into the drive as well as a bootable CD (a live Linux distro boot disk). For both, the optical drive spins up for a bit, but stops after Windows decides that the disk is not what it is looking for. I have since lost the Windows Vista recovery DVD, but I don't know if inserting the recovery disk into the optical drive would have a different effect than the bootable CD. I have tried pressing F8 immediately after pressing the Power On button (hoping to enter System Restore), but that did not have an effect. Is there a special key sequence that will cause Windows to bypass resuming from hibernate, effectively ignoring hiberfil.sys?

    Read the article

  • Find Search Replace from landmark to landmark - including everything in between

    - by Erick Tronboll
    Appreciate some Jedi help... I have the following string: gi|374638939|gb|AEZ55452.1| myosin light chain 2, partial [Batrachoseps major] AAMGR repeating sporadically throughout my document and want to remove everything from: gi|37463 to the AAMGR sequence but, I want to keep the blocks where JQ250 appears: gi|374638936|gb|*JQ250*332.1| Batrachoseps major isolate b voucher DBW5974 myosin light chain 2 gene, partial cds GCNGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCATGCAATGGGGGCGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACCTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT TCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAGAGACCCCAGTATGACGTCGTCATTGCTCC CAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC and remove only the lines that have AEZ554 gi|374638939|gb|*AEZ554*52.1| myosin light chain 2, partial [Batrachoseps major] AAMGR ..................................... So, ideally the following block: gi|374638934|gb|JQ250331.1| Batrachoseps major isolate a voucher DBW5974 myosin light chain 2 gene, partial cds GCNGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCATGCAATGGGGGCGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACCTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT TCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAGAGACCCCAGTATGACGTCGTCATTGCTCC CAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC gi|374638935|gb|AEZ55450.1| myosin light chain 2, partial [Batrachoseps major] AAMGR gi|374638936|gb|JQ250332.1| Batrachoseps major isolate b voucher DBW5974 myosin light chain 2 gene, partial cds GCNGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCATGCAATGGGGGCGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACCTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT TCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAGAGACCCCAGTATGACGTCGTCATTGCTCC CAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC gi|374638937|gb|AEZ55451.1| myosin light chain 2, partial [Batrachoseps major] AAMGR gi|374638938|gb|JQ250333.1| Batrachoseps major isolate a voucher MVZ:Herp:249023 myosin light chain 2 gene, partial cds GCCGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCCTGCAATGGGGGTGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACTTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT CCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAAGAGACCCCAGTATGACGTCGTCATTGCTC CCAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC gi|374638939|gb|AEZ55452.1| myosin light chain 2, partial [Batrachoseps major] AAMGR Would be left as just: gi|374638934|gb|JQ250331.1| Batrachoseps major isolate a voucher DBW5974 myosin light chain 2 gene, partial cds 
GCNGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCATGCAATGGGGGCGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACCTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT TCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAGAGACCCCAGTATGACGTCGTCATTGCTCC CAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC gi|374638936|gb|JQ250332.1| Batrachoseps major isolate b voucher DBW5974 myosin light chain 2 gene, partial cds GCNGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCATGCAATGGGGGCGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACCTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT TCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAGAGACCCCAGTATGACGTCGTCATTGCTCC CAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC gi|374638938|gb|JQ250333.1| Batrachoseps major isolate a voucher MVZ:Herp:249023 myosin light chain 2 gene, partial cds GCCGCCATGGGTAAGTGAACGCGCCGGACCAGACCATTCACTGCCTGCAATGGGGGTGTTTGTGGGTTGG AAGGTGTGCCAAAGATCTAGGGAACCCCAACTCCTCAGGATACGGGTGGGAGCCCTAAAATATGTCCAGC TATAAGGAGATGACCAATGGAAAAGGGGGTATCAGCAGTACTTTACTTGCTACTATAAGAGAATTGCATC CTGGGAATAGCCTCTGAAAGGTCCCATTTTAGCGACACTGGTAGATGGACACTGGCCTTTGGACAGCACC AGTAAGTAGAGCATTGCATCTTGGGATTCCTTTGCTGTTCACATGCCACTGAAAGCTCTCACCATAGCAG ATTCAAAATGCCTACCCGGCAGGTTGCCAGAAAAGCACTGCATCATGGGAGAACCACTTTTAGTGACAAT CCTAAGAGATGGGTGTCTCTCTGCCAGGCGCTATTATCCAAGAGACCCCAGTATGACGTCGTCATTGCTC CCAGGTAACCATGTTCTCACCCCCTCTCCCACAGGCCGC ................................many thanks as I help a struggling Grad Student
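
    Since each record starts with a "gi|...|gb|..." header line, one way to express the rule "keep the JQ250 blocks, drop the AEZ blocks" is to stream the file and toggle a keep/skip flag at every header. Below is a small Python sketch of that idea (an illustration only; the file name and the JQ250 marker are taken from the example above, adjust as needed).

        #!/usr/bin/env python3
        # Keep only the records whose header mentions the wanted accession
        # prefix (JQ250); every other record, e.g. the AEZ554 protein stubs
        # ending in AAMGR, is dropped along with its following lines.
        import re
        import sys

        HEADER = re.compile(r"^gi\|\d+\|gb\|")

        def keep_records(lines, marker="JQ250"):
            keeping = False
            for line in lines:
                if HEADER.match(line):
                    keeping = marker in line
                if keeping:
                    yield line

        if __name__ == "__main__":
            with open(sys.argv[1]) as fh:      # e.g. sequences.txt
                sys.stdout.writelines(keep_records(fh))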

    Read the article

  • BES Express - configure MDS to push messages from 3rd party web application

    - by Max Gontar
    Hi! I have developed IIS web service to send PAP messages using Blackberry Push API over MDS. And there is an application installed on device, configured to receive push messages on appropriate port. Everything works well on MDS simulator. But it's not working well in real environment: I have installed BES Express and register several devices. I can browse MDS url with appropriate port, so url is correct. Also port enabled for reliable pushes is used in push message and in device application. Here is MDS simulator log: <2011-01-12 14:00:03.456 EET>:[272]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = PapServlet: request from 0:0:0:0:0:0:0:1 564 bytes...> <2011-01-12 14:00:03.476 EET>:[273]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Mapping PAP request to push request for pushID:pushID:asdas> <2011-01-12 14:00:03.479 EET>:[274]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = PushServlet: POST request from [UNKNOWN @ 0:0:0:0:0:0:0:1] to [PAPDEST=WAPPUSH%3D2100000A%253A100%2FTYPE%3DUSER%40rim.net&PORT=100&REQUESTURI=/] : -1 bytes...> <2011-01-12 14:00:03.480 EET>:[275]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = submitting push message with id:pushID:asdas> <2011-01-12 14:00:03.482 EET>:[276]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Executing push submit command for pushID:pushID:asdas> <2011-01-12 14:00:03.483 EET>:[278]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Pushing message to: 2100000a> <2011-01-12 14:00:03.484 EET>:[279]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Number of active push connections:1> <2011-01-12 14:00:03.489 EET>:[280]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = added server-initiated connection = -872546301, push id = pushID:asdas> <2011-01-12 14:00:03.491 EET>:[281]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Available threads in DefaultJobPool = 9 running JobRunner: DefaultJobRunner-7> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION => <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Transmission Line Section]:> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = POST / HTTP/1.1> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Headers Section]: 8 headers> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Parameters Section]: 3 parameters> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION => <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Transmission Line Section]:> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = POST / HTTP/1.1> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = 
SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Headers Section]: 9 headers> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Parameters Section]: 3 parameters> <2011-01-12 14:00:03.501 EET>:[284]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Finished JobRunner: DefaultJobRunner-7, available threads in DefaultJobPool = 10, time spent = 8ms> <2011-01-12 14:00:03.521 EET>:[287]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = CreatedSendingQueue, DEVICEPIN = 2100000a> <2011-01-12 14:00:03.526 EET>:[290]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = Sending, TAG = 1288699908, DEVICEPIN = 2100000a, VERSION = 16, CONNECTIONID = -872546301, SEQUENCE = 0, TYPE = NOTIFY-REQUEST, CONNECTIONHANDLER = http, PROTOCOL = TCP, PARAMETERS = [MGONTAR/10.10.0.35:100], SIZE = 339> <2011-01-12 14:00:03.531 EET>:[291]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Number of active push connections:0> <2011-01-12 14:00:03.591 EET>:[292]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = Notification, TAG = 1288699908, STATE = DELIVERED> <2011-01-12 14:00:03.600 EET>:[296]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Device connections: AVG latency (msecs)79> <2011-01-12 14:00:03.600 EET>:[297]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, Removed push connection:-872546301> <2011-01-12 14:00:07.015 EET>:[298]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = RemovedSendingQueue, DEVICEPIN = 2100000a> And here is real MDS log: <2011-01-12 11:35:02.763 GMT>:[3932]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, PapServlet: request from 192.168.1.241 583 bytes...> <2011-01-12 11:35:02.897 GMT>:[3933]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Mapping PAP request to push request for pushID:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.909 GMT>:[3934]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, PushServlet: POST request from [UNKNOWN @ 192.168.1.241] to [PAPDEST=WAPPUSH%3D22D7F6BD%253A7874%2FTYPE%3DUSER%40rim.net&PORT=7874&REQUESTURI=/]> <2011-01-12 11:35:02.909 GMT>:[3934]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<push id: pushID:sdfsdfwerwer> <2011-01-12 11:35:02.910 GMT>:[3935]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, submitting push message with id:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.910 GMT>:[3936]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Executing push submit command for pushID:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.911 GMT>:[3937]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Pushing message to: 22d7f6bd> <2011-01-12 11:35:02.912 GMT>:[3938]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Number of active push connections:1> <2011-01-12 11:35:02.931 GMT>:[3939]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, added server-initiated connection = -1848311806, push id = pushID:sdfsdfwerwer> <2011-01-12 11:35:03.240 GMT>:[3940]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = CreatedSendingQueue, DEVICEPIN = 22d7f6bd, USERID = u3> <2011-01-12 11:35:03.241 GMT>:[3941]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = Sending, TAG = 536543251, DEVICEPIN = 22d7f6bd, USERID = u3, VERSION = 16, CONNECTIONID = -1848311806, SEQUENCE = 0, TYPE = NOTIFY-REQUEST, CONNECTIONHANDLER = http, PROTOCOL = TCP, PARAMETERS = [LDN-Server1/192.168.1.240:7874], SIZE = 383> <2011-01-12 11:35:03.241 GMT>:[3942]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Number of active push connections:0> <2011-01-12 11:35:03.253 
GMT>:[3943]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Sending, VERSION = 1, COMMAND = SEND, TAG = 536543251, SIZE = 570> <2011-01-12 11:35:03.838 GMT>:[3944]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Receiving, VERSION = 1, COMMAND = STATUS, TAG = 536543251, SIZE = 10, STATE = DELIVERED> <2011-01-12 11:35:04.104 GMT>:[3945]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = Notification, TAG = 536543251, STATE = DELIVERED> <2011-01-12 11:35:04.121 GMT>:[3946]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Device connections: AVG latency (msecs)893> <2011-01-12 11:35:04.135 GMT>:[3947]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<INFO >:<LAYER = IPPP, DEVICEPIN = 22d7f6bd, DOMAINNAME = LDN-Server1/192.168.1.240, CONNECTION_TYPE = PUSH_CONN, ConnectionId = -1848311806, DURATION(ms) = 1151, MFH_KBytes = 0, MTH_KBytes = 0.374, MFH_PACKET_COUNT = 0, MTH_PACKET_COUNT = 1> <2011-01-12 11:35:04.144 GMT>:[3948]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, Removed push connection:-1848311806> <2011-01-12 11:35:09.264 GMT>:[3949]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = RemovedSendingQueue, DEVICEPIN = 22d7f6bd, USERID = u3> <2011-01-12 11:35:58.187 GMT>:[3950]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Sending, VERSION = 1, COMMAND = INFO, SIZE = 46> <2011-01-12 11:35:58.187 GMT>:[3951]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Sent health to S27700165[LDN-SERVER1:3200] Health=[0x 0000 0007 0000 0000],Mask=[0x 0000 0007 0000 0000],Load=[60]> As you can see, logs not really differs, message is marked as delivered. But my app on device not really gets this message (as it works in mds simulator) Please advice me, what may be wrong? Is there some certificate to install or security settings I should configure to make this push message came to device application? Thank you! same question on bbforums

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5 and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it: Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint. We've got part 1 working fine. The main problem with part 2 is that in the response rewrite because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string, "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is: http:\u002f\u002fservername.company.com\u002f And should be changed to: https:\u002f\u002fservername.company.com\u002f Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.
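
    The crux is to keep the backslash from being interpreted before the match runs, so that the pattern sees the six literal characters \u002f rather than the "/" they decode to. In Tcl that generally means protecting the pattern with braces (or doubling the backslashes); the snippet below is only a Python illustration of the same string-level idea, not iRule/Tcl code.

        # Treat \u002f as six literal characters and rewrite http: to https:
        # only where it is followed by the escaped slashes.
        import re

        html = r"http:\u002f\u002fservername.company.com\u002f"   # literal backslashes
        rewritten = re.sub(r"http:(?=\\u002f\\u002f)", "https:", html)
        print(rewritten)   # https:\u002f\u002fservername.company.com\u002f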

    Read the article

  • General logging won't work in MySQL

    - by leonstr
    I saw on SF that there's an option in MySQL to log all queries. So, in my version (mysql-server-5.0.45-7.el5 on CentOS 5.2) this appears to be a case of enabling the 'log' option, so I edited /etc/my.cnf to add this:

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        old_passwords=
        log=/var/log/mysql-general.log

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I then created the file and set permissions:

        # touch /var/log/mysql-general.log
        # chown mysql. /var/log/mysql-general.log
        # ls -l /var/log/mysql-general.log
        -rw-r--r-- 1 mysql mysql 0 Jan 18 15:22 /var/log/mysql-general.log

    But when I start mysqld I get:

        120118 15:24:18  mysqld started
        ^G/usr/libexec/mysqld: File '/var/log/mysql-general.log' not found (Errcode: 13)
        120118 15:24:18 [ERROR] Could not use /var/log/mysql-general.log for logging (error 13). Turning logging off for the whole duration of the MySQL server process. To turn it on again: fix the cause, shutdown the MySQL server and restart it.
        120118 15:24:18  InnoDB: Started; log sequence number 0 182917764
        120118 15:24:18 [Note] /usr/libexec/mysqld: ready for connections.

    Can anyone suggest why this isn't working?

    Read the article

  • Boot loop that I cannot bypass

    - by lonewaft
    Recently, on a laptop that I've used for a while, I had a strange issue where OS files were corrupted (device manager) and Windows 8 was hung after the login screen, so I reinstalled Windows 7 over the existing Windows 8 installation, and it worked for a couple days. Today, when I tried to use my laptop, it was stuck on a boot loop. Right after the BIOS screen, it would show a flashing underscore, then restart the computer, again and again until I removed the battery. I tried booting to a windows 7 install CD, but the same flashing underscore - reboot sequence happened when I tried. I tried moving the boot priority around (HDD first, CD/DVD first, even USB first) but nothing changed. After about an hour of tinkering with it, I listened to the HDD sounds, and it sounded like the HDD was trying to spin up, but failing (whining noise increasing in frequency that stopped and started in sync with the system restarting). I am planning to replace the HDD, but I'm still confused as to why a faulty HDD would stop the laptop from booting to my install DVD (tried it on a different computer, it booted from that CD fine). Anybody here have any idea why this might be happening?

    Read the article

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
    So, you've followed the previous steps to install Java Embedded on your Raspberry Pi, you went to Fry's and picked up some jumper wires, LEDs, and resistors, you hooked up the wires, LED, and resistor to the correct pins, and now you want to start programming in Java on your RPi? Yes? OK, then... Here we go. You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated gpio libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And, I'm not using python (hell no!). This is Java programming, so we keep it simple (and more readable) than those other programming languages. See: Write Java code to do this

    In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference). Next, I write a "1" then "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh?

        /*
         * Java Embedded Raspberry Pi GPIO app
         */
        package jerpigpio;

        import java.io.FileWriter;

        /**
         * @author hinkmond
         */
        public class JerpiGPIO {

            static final String GPIO_OUT  = "out";
            static final String GPIO_ON   = "1";
            static final String GPIO_OFF  = "0";
            static final String GPIO_CH00 = "0";

            /**
             * @param args the command line arguments
             */
            public static void main(String[] args) {
                FileWriter commandFile;
                try {
                    /*** Init GPIO port for output ***/
                    // Open file handles to GPIO port unexport and export controls
                    FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                    FileWriter exportFile = new FileWriter("/sys/class/gpio/export");

                    // Reset the port
                    unexportFile.write(GPIO_CH00);
                    unexportFile.flush();

                    // Set the port for use
                    exportFile.write(GPIO_CH00);
                    exportFile.flush();

                    // Open file handle to port input/output control
                    FileWriter directionFile =
                        new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/direction");

                    // Set port for output
                    directionFile.write(GPIO_OUT);
                    directionFile.flush();

                    /*--- Send commands to GPIO port ---*/
                    // Open file handle to issue commands to GPIO port
                    commandFile = new FileWriter("/sys/class/gpio/gpio"+GPIO_CH00+"/value");

                    // Loop forever
                    while (true) {
                        // Set GPIO port ON
                        commandFile.write(GPIO_ON);
                        commandFile.flush();

                        // Wait for a while
                        java.lang.Thread.sleep(200);

                        // Set GPIO port OFF
                        commandFile.write(GPIO_OFF);
                        commandFile.flush();

                        // Wait for a while
                        java.lang.Thread.sleep(200);
                    }
                } catch (Exception exception) {
                    exception.printStackTrace();
                }
            }
        }

    Hinkmond

    Read the article

  • Are web service handler chains possible under IIS / ASP.NET

    - by Mike
    I'm working with a client who wants me to implement a particular design in an IIS/ASP.NET environment. This design was already implemented in Java, but I am not sure it is possible using Microsoft technologies. In a Tomcat/Java environment one can create so-called handler chains. In essence, a handler runs on the server on which the web service is running and it intercepts the SOAP message coming to the web service. The handler can perform a number of tasks before passing control to the web service. Some of these tasks may refer to authentication and authorization. Moreover, one can create handler chains, such that the handlers run in a particular sequence before passing control to the web service. This is a very elegant solution, as certain aspects of authentication and authorization can be performed automatically, without the developer of the client application or of the web service having to invest anything in it. The code for the client application and the web service is not affected. You may find a number of articles on the internet on this subject by searching on Google for "web service handler chain". I performed searches for web service handlers in IIS or ASP.NET. I get some hits, but apparently handlers in IIS have a different meaning from that described above.

    My question therefore is: Can handler chains (as available in Java and Tomcat) be created in IIS? If so, how (any article, book, forum...)? Either a negative or a positive answer will be greatly appreciated. Mike
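
    A handler chain is essentially a pipeline of interceptors that each see the message, in a fixed order, before control reaches the service. The sketch below is a language-neutral illustration of that pattern in Python (not ASP.NET or JAX-WS code); in the Microsoft stack the closest analogues are usually ASP.NET HTTP modules or WCF message inspectors, though that mapping should be verified for the specific framework in use.

        # Two interceptors chained ahead of a service, mirroring the
        # authentication/authorization handlers described above.
        class AuthenticationHandler:
            def handle(self, message):
                if "token" not in message:
                    raise PermissionError("no credentials attached")
                return message

        class AuditHandler:
            def handle(self, message):
                print("audit:", message.get("operation"))
                return message

        def invoke(service, handlers, message):
            for handler in handlers:        # handlers run in sequence...
                message = handler.handle(message)
            return service(message)         # ...before the service is reached

        def echo_service(message):
            return {"result": message["operation"].upper()}

        print(invoke(echo_service,
                     [AuthenticationHandler(), AuditHandler()],
                     {"token": "abc", "operation": "ping"}))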

    Read the article

  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:

        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed'
        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = {
            NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache";
            NSSQLiteErrorDomain = 11;
        }
        2 messages repeated several times
        Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]

    This entire sequence is repeated many times throughout the log. file said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those.

        blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
        blackl% sqlite3 /tmp/Calendar\ Cache
        SQLite version 3.7.12 2012-04-03 19:43:07
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> pragma integrity_check ;
        *** in database main ***
        Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
        On page 21 at right child: 2nd reference to page 863

    This is followed by a few dozen lines like these:

        rowid <number> missing from index <name>

    and then:

        wrong # of entries in index <name>

    I'm at a bit of a loss as to what to do now -- I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable Calendar Agent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it). How do I disable CalendarAgent and fix its cache?
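
    One commonly suggested salvage route for a malformed SQLite file is to dump whatever is still readable into a brand-new database (the command-line equivalent is "sqlite3 old.db .dump | sqlite3 new.db"). Below is a small Python sketch of that approach, using the copy already made in /tmp; the destination path is illustrative, and a badly damaged file may still abort the dump partway through.

        #!/usr/bin/env python3
        # Rebuild a fresh SQLite database from the readable parts of the
        # damaged Calendar Cache copy.
        import sqlite3

        SRC = "/tmp/Calendar Cache"            # the copy made earlier
        DST = "/tmp/Calendar Cache.rebuilt"    # illustrative output path

        src = sqlite3.connect(SRC)
        dst = sqlite3.connect(DST)
        dst.executescript("\n".join(src.iterdump()))  # replays schema + surviving rows
        src.close()
        dst.close()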

    Read the article

  • New monitor connected to HDMI adaptor doesn't show output after booting

    - by Paul
    Hello out there in the multiple monitors’ world. I am a very old newbie in your world and need help. I just purchased a new Asus VH236H monitor and hooked it up to the HDMI port of an ATI Radeon HD4300 / 4500 Series display adaptor. I left the old Princeton LCD19 (TMDS) hooked up to the DVI port of the same display adaptor. Both monitors displayed the boot sequence after I fired good old Sarastro2 (Asus P5Q Pro Turbo – Dual Core E5300 – 2.60 GHz) up. The Asus lagged half a second behind the Princeton until the Windows 7 Ultimate SP 1 boot up was complete. Then the Asus displayed “HDMI NO SIGNAL” and went into hibernation. The Princeton stayed lit up as before. Both monitors are displayed on the “Screen Resolution Setup Display” and I played around with them for a while. The only thing I accomplished was to shove the desktop icons from the Princeton to the still hibernating Asus. The “Multiple displays:” option is set to “Extend these displays”, the Orientation is “Landscape” and the Resolutions are set on both to the “recommended” one. Both monitors show that they work properly in the advanced Properties display. What am I doing wrong, what am I missing? Never mind the opinions about the different resolutions of the two monitors. I can always unhook the Princeton and give it to a Goodwill store if I do not like the setup. I just would like to make it work. Any constructive help is very much appreciated, Thank you.

    Read the article

  • Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware

    - by Manish Palaparthy
    Oracle Enterprise Applications have been available with modern web browser based interfaces for a while now. The web browsers available in smart phones no longer require special markup language such as WML since the processing power of these handsets is quite near to that of a typical personal computer. Modern Mobile devices such as the IPhone, Android Phones, BlackBerry, Windows 8 devices can now render XHTML & HTML quite well. This means you could potentially use your mobile browser to access your favorite enterprise application. While the Mobile browser would render the UI, you might find it difficult to use it due to the formatting & Presentation of the Native UI. Smart phones offer a lot more than just a powerful web browser, they offer capabilities such as Maps, GPS, Multi touch, pinch zoom, accelerometers, vivid colors, camera with video, support for 3G, 4G networks, cloud storage, NFC, streaming media, tethering, voice based features, multi tasking, messaging, social networking web browsers with support for HTML 5 and many more features.  While the full potential of Enterprise Mobile Apps is yet to be realized, Oracle has published a few of its applications that take advantage of the above capabilities and are available for the IPhone natively. Here are some of them Iphone Apps  Oracle Business Approvals for Managers: Offers a highly intuitive user interface built as a native mobile application to conveniently access pending actions related to expenses, purchase requisitions, HR vacancies and job offers. You can even view BI reports related to the worklist actions. Works with Oracle E-Business Suite Oracle Business Indicators : Real-time secure access to OBI reports. Oracle Business Approvals for Sales Managers: Enables sales executives to review key targeted tasks, access relevant business intelligence reports. Works with Siebel CRM, Siebel Quote & Order Capture. Oracle Mobile Sales Assistant: CRM application that provides real-time, secure access to the information your sales organization needs, complete frequent tasks, collaborate with colleagues and customers. Works with Oracle CRMOracle Mobile Sales Forecast: Designed specifically for the mobile business user to view key opportunities. Works with Oracle CRM on demand Oracle iReceipts : Part of Oracle PeopleSoft Expenses, which allows users to create and submit expense lines for cash transactions in real-time. Works with Oracle PeopleSoft expenses Now, we have seen some mobile Apps that Oracle has published, I am sure you are intrigued as to how develop your own clients for the use-cases that you deem most fit. For that Oracle has ADF Mobile ADF Mobile You could develop Mobile Applications with the SDK available with the smart phone platforms!, but you'd really have to be a mobile ninja developer to develop apps with the rich user experience like the ones above. The challenges really multiply when you have to support multiple mobile devices. ADF Mobile framework is really handy to meet this challenge ADF Mobile can in be used to Develop Apps for the Mobile browser : An application built with ADF Mobile framework installs on a smart device, renders user interface via HTML5, and has access to device services. This means the programming model is primarily web-based, which offers consistency with other enterprise applications as well as easier migration to new platforms. 
Develop Apps for the Mobile Client (Native Apps): These applications have access to device services, enabling a richer experience for users than a browser alone can offer. ADF mobile enables rapid and declarative development of rich, on-device mobile applications. Developers only need to write an application once and then they can deploy the same application across multiple leading smart phone platforms. Oracle SOA Suite Although the Mobile users are using the smart phone apps, and actual transactions are being executed in the underlying app, there is lot of technical wizardry that is going under the surface. All of this key technical components to make 1. WebService calls 2. Authentication 3. Intercepting Webservice calls and adding security credentials to the request 4. Invoking the services of the enterprise application 5. Integrating with the Enterprise Application via the Adapter is all being implemented at the SOA infrastructure layer.  As you can see from the above diagram. The key pre-requisites to mobile enable an Enterprise application are The core enterprise application Oracle SOA Suite ADF Mobile

    Read the article

  • Give back full control to a user on a disk from another computer

    - by Foghorn
    I have my friend's hard drive mounted externally. After messing with the permissions with TAKEOWN so I could fix some viruses, I have full control over their drive. The problem is, now it's stuck in an "autochk not found" reboot sequence. I think the problem is that the boot sector is invisible to the drive now. So my question is: how can I use icacls to give back full ownership, when the user I am giving it to is not on my machine? I ran the TAKEOWN command from my Windows 7 laptop; their machine is a Windows XP Professional with three partitions, and I only altered the one that has the boot sector. Here are the permissions that icacls shows (where my computer is %System%, my username is ME, and the drive is E:\):

        C:\Users\ME> icacls E:\*
        E:\$RECYCLE.BIN %System%\ME:(OI)(CI)(F)
                        Mandatory Label\Low Mandatory Level:(OI)(CI)(IO)(NW)
        E:\ALLDATAW %System%\ME:(I)(OI)(CI)(F)
        E:\alrt_200.data %System%\ME:(OI)(CI)(F)
        E:\AUTOEXEC.BAT %System%\ME:(OI)(CI)(F)
        E:\AZ Commercial %System%\ME:(I)(OI)(CI)(F)
        E:\boot.ini %System%\ME:(OI)(CI)(F)
        E:\Config.Msi %System%\ME:(I)(OI)(CI)(F)
        E:\CONFIG.SYS %System%\ME:(OI)(CI)(F)
        E:\Documents and Settings %System%\ME:(I)(OI)(CI)(F)
        E:\IO.SYS %System%\ME:(OI)(CI)(F)
        E:\Mitchell1 %System%\ME:(I)(OI)(CI)(F)
        E:\MSDOS.SYS %System%\ME:(OI)(CI)(F)
        E:\MSOCache %System%\ME:(I)(OI)(CI)(F)
        E:\NTDClient.log %System%\ME:(OI)(CI)(F)
        E:\NTDETECT.COM %System%\ME:(OI)(CI)(F)
        E:\ntldr %System%\ME:(OI)(CI)(F)
        E:\pagefile.sys %System%\ME:(OI)(CI)(F)
        E:\Program Files %System%\ME:(I)(OI)(CI)(F)
        E:\RECYCLER %System%\ME:(I)(OI)(CI)(F)
        E:\RHDSetup.log %System%\ME:(OI)(CI)(F)
        E:\System Volume Information %System%\ME:(I)(OI)(CI)(F)
        E:\WINDOWS %System%\ME:(I)(OI)(CI)(F)
        Successfully processed 22 files; Failed processing 0 files

        C:\Users\ME>

    Read the article

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tape via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). Other storage managers like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB) are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If you are using tapes for backup, ensure that the tape library and tape drives are compatible.

    Test Setup
    1. Install NB 7.5 master and media servers in Linux OS. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.)
    2. Install MEB 3.8, also in Linux OS.
    3. Install the NB admin console on your Windows desktop and configure the NB master server from there.

    Note: Ensure that you have root user permission to install NetBackup.

    Configuration Steps for MEB and NB
    Once MEB and NB are installed:
    - Ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 in the mysqlbackup command line using --sbt-lib-path.
    - Configure the NB master server from the Windows console. That is, configure the storage units by specifying the storage unit name, disk type, media server name etc.
    - Create NetBackup policies that are user selectable. But please make sure that the policy type is "Oracle".
    - Define the clients where MEB will be executed. Sometimes this will be a different host from where MEB is run, and sometimes the same media server where NB and the tapes are attached.

    Now once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface.

    Backup
    The following parameters should also be ready for the execution:
    - --sbt-lib-path: Path to the SBT library specific to the NetBackup MMS. The SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64.
    - --sbt-environment: Environment variables must be defined specific to NetBackup.
    In our example below, we use:

        NB_ORA_SERV=myserver.com
        NB_ORA_CLIENT=myserver.com
        NB_ORA_POLICY=NBU-MEB
        ORACLE_HOME=/export/home2/tmp/hema/mysql-server/

        ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image

    Once the backup is completed successfully, this should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.

        ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir

    Now apply logs as usual, shut down the server, perform the restore, then restart the server and check the data contents.

        ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
        ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back

    The NB console should show the 'Restore' job as done. If you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps of MEB and NB integration in the whitepaper here.

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by ken.pulverman
      ERP has been the backbone of enterprise software.  The data held in your ERP system is core of most companies.  Efficiencies gained through the accounting and resource allocation through ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years.  Why aren't you ready for what comes next? Well judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline) there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor.   Connecting your legacy system through proprietary middleware is expensive and brittle and if you are like most companies, you were only willing to pay an SI so much before you said "enough."  So your ERP is working.  It's humming along.  You might not be able to get Order to Promise information when you take orders in your call center, but there are work arounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps and you need ERP data to be accessible from everywhere.   Every time you change a sales territory or a comp plan or change a benefits provider your ERP system, literally the economic brain of your business, needs to know what's going on.  And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™.   SaaS, cloud, managed, hybrid, outsourced, composite....and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications.  Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise.  In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure the Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution.  It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do.  The diagram below reflects that change.    In this model, the hegemony of ERP is over.  It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information.  And through modern middleware it will connect to everything.  So yes ERP as we've know it is dead, but long live ERP as a connected application member of the modern enterprise. I want to Thank Andrew Zoldan, Group Vice President Oracle Manufacturing Industries Business Unit for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • SSD, AHCI and write performance

    - by Dan
    We've started to deploy SSD drives to our developers' workstations. At this moment we're having the unpleasant surprise that the systems using the new SSDs often freeze, with the HDD activity LED blinking or being continuously on. Benchmarks show read speeds around 180 MB/s, but write speeds around 5 MB/s. All developers are using Windows 7 Enterprise, 64 bit, SP1. One of our developers suggested (based on his experience) the following sequence:
    1. Back up the workstation.
    2. Use a tool to completely erase the SSD.
    3. Make sure AHCI is enabled in the BIOS.
    4. Install Windows.
    5. Restore from backup.

    So far, this procedure seems to work (we're still testing, but write speed seems to be 120 MB/s). There are some questions in this context:
    - Why do we have to completely reinstall Windows? Is it possible to clean the SSD without reinstalling Windows? Is there a reliable tool?
    - If AHCI was disabled when Windows was installed and we enable it, shouldn't this be enough to correct the write performance issue?
    - If we have to completely erase the SSDs, does this mean the SSDs we've received were used before (SH)? I'm wondering this because the package I've got was open (I didn't think about it at the time, as I assumed one of my coworkers had simply taken a peek inside the package).

    Has anyone seen a similar problem before?

    Read the article

  • How can I disable the CTRL-ALT-DEL key combination completely on XP/Vista/7?

    - by Travesty3
    I have been googling extensively to figure this out, and nobody seems to be able to give a direct answer. Let me start by saying that I'm NOT talking about requiring CTRL-ALT-DEL to enter logon information. I'm working on a golf simulator program which is used at golf centers. I need the ability to completely disable the CTRL-ALT-DEL key sequence so that the golf center customers can't get out of the program and access the computer at all. I realize there are other key combinations that need to be handled as well, we already have this entire feature working in XP, but we're going to be switching to Windows 7 soon, and CTRL-ALT-DEL is the only one that doesn't seem to work in Win7. I'd really like an all-around solution if at all possible. This same program may also be installed on a client's personal computer for an in-home golf simulator, but the computers that really need this feature (golf center computers) are provided to the golf center by us, so would the best option be to write a new shell? I don't know anything about that at all, other than others that suggest writing a new shell for kiosk mode. I'd really like a simpler option, like modifying the registry in some way. I have heard that you can remove some buttons from the menu screen that pops up, but unless I can remove pretty much all of them (including the shutdown/restart button in the bottom-right corner), this won't be enough of a solution for me. Thanks for taking the time to read this and thanks again for any help you could provide! -Travis
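
    Windows reserves Ctrl-Alt-Del as the Secure Attention Sequence, so an application cannot simply swallow it; what kiosk-style setups usually do instead is strip the options off the screen it brings up (and/or replace the shell). The sketch below shows the registry-policy route mentioned above, written in Python with the standard winreg module; the value names are the commonly documented per-user policies and should be verified against the exact Windows 7 build, and a kiosk user account is assumed.

        # Hide Task Manager, Lock and Change Password on the Ctrl-Alt-Del
        # screen for the current (kiosk) user. Run under that user account.
        import winreg

        POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"
        VALUES = {
            "DisableTaskMgr": 1,
            "DisableLockWorkstation": 1,
            "DisableChangePassword": 1,
        }

        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
            for name, data in VALUES.items():
                winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, data)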

    Read the article

  • Why can't my hard disk boot from the BIOS?

    - by Mario
    I installed a new SATA DVD burner. When I turned on the machine (Windows 7) it didn't boot. It can boot from a lubuntu CD. There is an option on lubuntu to boot from the first hard disk. If I select it, the machine boots normally to Windows 7. So from the CD I can boot, but not from the BIOS. I checked all the options more than once: boot from HD, don't boot from removable, boot from USB, boot from optical. The order of the boot sequence is HD then DVD. I tried booting only with the HD; I disconnected both DVDs. I even tried recovery of the MBR: bootsect, bootrec, fixmbr, buildbcd, nt60, etc. So, the question is: is there a reason for this, and what's the difference between booting from the BIOS (as I understand it) and booting from the DVD? The BIOS is Intel, it shows BIOS codes in the bottom-right corner, and it stays at 5A for a while. 5A is "Resetting PATA/SATA bus and all devices".

    Read the article

  • Best practices for settings for Oracle database creation

    - by Gary
    When installing an Oracle Database, what non-default settings would you normally apply (or consider applying)? I'm not after hardware-dependent settings (e.g. memory allocation) or file locations, but more general items. Similarly, anything that is a particular requirement for a specific application rather than generally applicable isn't really useful. Do you separate out code/API schemas (PL/SQL owners) from data schemas (table owners)? Do you use default or non-default roles, and if the latter, do you password-protect the role? I'm also interested in whether there are any places where you do a REVOKE of a GRANT that is installed by default. That may be version dependent, as 11g seems more locked down for its default install. These are the ones I used in a recent setup. I'd like to know whether I missed anything or where you disagree (and why).

    Database Parameters
    - Auditing (AUDIT_TRAIL to DB and AUDIT_SYS_OPERATIONS to YES)
    - DB_BLOCK_CHECKSUM and DB_BLOCK_CHECKING (both to FULL)
    - GLOBAL_NAMES to true
    - OPEN_LINKS to 0 (did not expect them to be used in this environment)
    - Character set - AL32UTF8

    Profiles
    I created an amended password verify function that used the apex dictionary table (FLOWS_030000.wwv_flow_dictionary$) as an extra check to prevent simple passwords.

    Developer logins:

        CREATE PROFILE profile_dev LIMIT
          FAILED_LOGIN_ATTEMPTS 8
          PASSWORD_LIFE_TIME 32
          PASSWORD_REUSE_TIME 366
          PASSWORD_REUSE_MAX 12
          PASSWORD_LOCK_TIME 6
          PASSWORD_GRACE_TIME 8
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME 1080
          IDLE_TIME 180
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Application login:

        CREATE PROFILE profile_app LIMIT
          FAILED_LOGIN_ATTEMPTS 3
          PASSWORD_LIFE_TIME 999
          PASSWORD_REUSE_TIME 999
          PASSWORD_REUSE_MAX 1
          PASSWORD_LOCK_TIME 999
          PASSWORD_GRACE_TIME 999
          PASSWORD_VERIFY_FUNCTION verify_function_11g
          SESSIONS_PER_USER unlimited
          CPU_PER_SESSION unlimited
          CPU_PER_CALL unlimited
          PRIVATE_SGA unlimited
          CONNECT_TIME unlimited
          IDLE_TIME unlimited
          LOGICAL_READS_PER_SESSION unlimited
          LOGICAL_READS_PER_CALL unlimited;

    Privileges for a standard schema owner account:
    - CREATE CLUSTER
    - CREATE TYPE
    - CREATE TABLE
    - CREATE VIEW
    - CREATE PROCEDURE
    - CREATE JOB
    - CREATE MATERIALIZED VIEW
    - CREATE SEQUENCE
    - CREATE SYNONYM
    - CREATE TRIGGER

    Read the article
