Search Results

Search found 17194 results on 688 pages for 'document databases'.


  • IIS 7’s Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx

    If you’re like me, you don’t really do a lot with COM components these days. For me, I’ve been ‘lucky’ to stay in the managed world for the past 6 or 7 years. Until last week. I’m running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP: lots of tables, in-line styles and a bunch of other late 90’s and early 2000’s goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. I built some COM-InterOp DLLs (easily, through VS2012’s Add Reference feature…nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec. There’s a document management system I can think of whose COM DLLs were not proper COM DLLs, and they crashed and burned every time .NET tried to call them through a COM-InterOp layer.

    Anyway, my test app worked like a champ, and I felt confident that I could build a nice façade around the COM DLLs, wrap some functionality internally, and only expose to my users/clients what they really needed. So I did this, built some tests, and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn’t much overhead involved in going through the COM-InterOp layer.

    You know where this is going, don’t you? I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message:

    Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and got nothing, nada, zilch! Then I came across a question on Experts Exchange, and at the bottom of the page someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah: my machine is 64-bit, and these COM DLLs I’m using are old and definitely 32-bit. I didn’t check for that and didn’t even think about it. But I went ahead and looked at the app pool my web site was running under, and what did I see? Yep. Select your app pool in IIS 7.x, click on Advanced Settings and look for “Enable 32-bit Applications”. I set it to True and my test application suddenly worked. Hope this saves somebody out there from pulling out their hair.
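    If you would rather script this setting than click through IIS Manager, here is a minimal sketch using appcmd; it assumes the default inetsrv location and an app pool named "MyAppPool", so substitute your own pool name:

        REM Allow 32-bit applications in a 64-bit IIS 7.x application pool
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true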

    Read the article

  • Best Practices for Building a Virtualized SPARC Computing Environment

    - by Scott Elvington
    Oracle just published Best Practices for Building a Virtualized SPARC Computing Environment, a white paper that provides guidance on the complete hardware and software stack for deploying and managing your physical and virtual SPARC infrastructure. The solution is based on Oracle SPARC T4 servers, Oracle Solaris 11 with Oracle VM Server for SPARC 2.2, Sun ZFS storage appliances, Sun 10GbE 72-port switches and Oracle Enterprise Manager Ops Center 12c. The paper emphasizes the value and importance of planning the resources (compute, network and storage) that will comprise the virtualized environment to achieve the desired capacity, performance and availability characteristics. The document also details numerous operational best practices that will help you deliver on those characteristics with unique capabilities provided by Enterprise Manager Ops Center, including policy-based guest placement, pool resource balancing and automated guest recovery in the event of server failure. Plenty of references to supplementary documentation are included to point you to additional resources. Whether you’re building the first stages of your private cloud or a general-purpose virtualized SPARC computing environment, these documented best practices will help ensure success. Please join Phil Bullinger and Steve Wilson from Oracle to learn more about breakthrough efficiency in private cloud infrastructure and how SPARC-based virtualization can help you get started on your cloud journey. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • SQL SERVER – Best Reference – Wait Type – Day 27 of 28

    - by pinaldave
    I have had a great learning experience writing my article series on wait types and queues. It was truly a learning experience, where I learned far more than I would have otherwise. Besides my blog series, there are excellent references available on the internet which one can use to study this subject further. Here is the list of resources (in no particular order):

    - sys.dm_os_wait_stats (Books Online) – an excellent starting point and the official documentation on the wait type descriptions.
    - SQL Server Best Practices Article by Tom Davidson – it goes without saying that this document is the BEST reference available on this subject.
    - Performance Tuning with Wait Statistics by Joe Sack – one of the best slide decks available on this subject. It covers many real-world scenarios.
    - Wait statistics, or please tell me where it hurts by Paul Randal – notes from the real world from SQL Server skilled master Paul Randal.
    - The SQL Server Wait Type Repository… by Bob Ward – a thorough article on wait types and their resolution. A MUST read.
    - Tracking Session and Statement Level Waits by Jonathan Kehayias – a unique article on the subject, where wait stats and extended events come together.
    - Wait Stats Introductory References by Jimmy May – an excellent collection of reference links.
    - Great Resource On SQL Server Wait Types by Glenn Berry – a perfect DMV query to find top wait stats.
    - Performance Blog by Idera – an in-depth article on top wait statistics in the community.

    I have listed all the references I have found, in no particular order. If I have missed any good reference, please leave a comment and I will add it to the list. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
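    As a companion to those references, here is a minimal sketch of the kind of top-waits query most of them build on. The column names come from the documented sys.dm_os_wait_stats DMV; the excluded wait types below are a short illustrative list, not the full set of benign waits the articles recommend filtering out.

        -- Top resource waits since the last SQL Server restart (or stats clear)
        SELECT TOP 10
            wait_type,
            waiting_tasks_count,
            wait_time_ms,
            signal_wait_time_ms,                                   -- time spent waiting for CPU after being signaled
            wait_time_ms - signal_wait_time_ms AS resource_wait_ms -- time actually spent waiting on the resource
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP') -- ignore common benign waits
        ORDER BY wait_time_ms DESC;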

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded to modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism?

    A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode".

    Potential advantages include:

    - More source code and more documentation on the screen(s) at once
    - Ability to edit documentation independently of source code (regardless of language?)
    - Write documentation and source code in parallel without merge conflicts
    - Real-time hyperlinked documentation with superior text formatting
    - Quasi-real-time machine translation into different natural languages
    - Every line of code can be clearly linked to a task, business requirement, etc.
    - Documentation could automatically timestamp when each line of code was written (metrics)
    - Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    - Single-source documentation (e.g., tag code snippets for user manual inclusion)

    Note:

    - The documentation window can be collapsed
    - Workflow for viewing or comparing source files would not be affected

    How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia), to internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation), and to other source files (similar to JavaDocs).

    Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
    I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual-boot setup. I have a large shared NTFS partition. I keep my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symlinks. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / (root) and swap partitions. The NTFS partition is mounted at boot time via fstab with the following options:

    ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

    The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual missing-files-due-to-hibernation problem. I'm not seeing any mount-related issues in dmesg.

    Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Documents folder tree.

    I've had success using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end, as it requires the special NTFS drivers.

    Edit for more info: This is a Lenovo ThinkPad L430, purchased new in the last month, so it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS-formatted shared partition on another HDD.

    As requested, here's a sample chkdsk log. Some of the files it mentions were deleted off the partition while in Ubuntu; others were created/edited but not deleted.

    Checking file system on D:
    Volume dismounted. All opened handles to this volume are now invalid.
    Volume label is Files.
    CHKDSK is verifying files (stage 1 of 3)...
    Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters.
    Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use.
    Deleting corrupt attribute record (128, "") from file record segment 66.
    86496 file records processed. File verification completed.
    385 large file records processed. 0 bad file records processed. 0 EA records processed. 0 reparse records processed.
    CHKDSK is verifying indexes (stage 2 of 3)...
    Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46.
    The NTFS file name attribute in file 0x48 is incorrect.
    53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h.
    6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. .
    32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-.
    30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1.
3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g... 00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^.... File 72 has been orphaned since all its filenames were invalid Windows will recover the file in the orphan recovery phase. Correcting minor file name errors in file 72. Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11. Deleting index entry found.000 in index $I30 of file 5. Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16. Deleting index entry found.001 in index $I30 of file 5. Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15. Deleting index entry found.002 in index $I30 of file 5. Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6. Deleting index entry DOWNLO~1 in index $I30 of file 40. Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48. Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46. An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT. Deleting index entry latexsheet.tex in index $I30 of file 50. An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT. Deleting index entry D8CZ82PK in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT. Deleting index entry EGA4QEAX in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT. Deleting index entry NGTB469M in index $I30 of file 22716. An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT. Deleting index entry WU5RKXAB in index $I30 of file 22716. Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098. Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276. Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913. The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43. Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913. The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45. Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT. Deleting index entry gram.aux in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT. Deleting index entry gram.out in index $I30 of file 55913. An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT. Deleting index entry gram.pdf in index $I30 of file 55913. Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183. Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919. An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT. Deleting index entry require-transform.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT. 
Deleting index entry set.rkt in index $I30 of file 62412. An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT. Deleting index entry logger.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT. Deleting index entry misc.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT. Deleting index entry more-scheme.rkt in index $I30 of file 62615. An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT. Deleting index entry core-layout.rkt in index $I30 of file 62911. An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT. Deleting index entry ref.scrbl in index $I30 of file 62944. An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT. Deleting index entry base-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT. Deleting index entry html-properties.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT. Deleting index entry html-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT. Deleting index entry latex-prefix.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT. Deleting index entry latex-render.rkt in index $I30 of file 63216. An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT. Deleting index entry scribble.tex in index $I30 of file 63216. An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63255. An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63265. An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT. Deleting index entry lang.rkt in index $I30 of file 63332. An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT. Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529. An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT. Deleting index entry FETCH_HEAD in index $I30 of file 83489. An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT. Deleting index entry 86 in index $I30 of file 83536. An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537. An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT. Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537. An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT. Deleting index entry master in index $I30 of file 83697. An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT. Deleting index entry remotes in index $I30 of file 83702. An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT. Deleting index entry pad.rkt in index $I30 of file 83837. 
An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT. Deleting index entry pad1.rkt in index $I30 of file 83863. An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT. Deleting index entry cm.rkt in index $I30 of file 83984. An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT. Deleting index entry multi-file-search.rkt in index $I30 of file 84262. An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT. Deleting index entry com.rkt in index $I30 of file 84463. An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT. Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807. An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT. Deleting index entry index in index $I30 of file 84807. An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT. Deleting index entry master in index $I30 of file 84812. An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT. Deleting index entry 02 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT. Deleting index entry 28 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT. Deleting index entry 29 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT. Deleting index entry 2c in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT. Deleting index entry 2e in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT. Deleting index entry 45 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT. Deleting index entry 47 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT. Deleting index entry 49 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT. Deleting index entry 58 in index $I30 of file 84833. Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182. Deleting index entry 6e in index $I30 of file 84833. Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c. Deleting index entry a0 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT. Deleting index entry cd in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT. Deleting index entry d6 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT. Deleting index entry df in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT. Deleting index entry ea in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT. Deleting index entry f3 in index $I30 of file 84833. An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT. Deleting index entry ff in index $I30 of file 84833. 
An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT. Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834. An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT. Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853. An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT. Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857. An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT. Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT. Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860. An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT. Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860. An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT. Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866. An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT. Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872. An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT. Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874. An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT. Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878. An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT. Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886. An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT. Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT. Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889. An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT. Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892. An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT. Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892. An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT. Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896. Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097. Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903. An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT. Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906. An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT. Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906. 
An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT. Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT. Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT. Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913. An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT. Deleting index entry
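    For reference, the mount options quoted at the top of the question correspond to a full fstab entry along these lines; the device path and mount point here are placeholders for illustration, and the options are exactly the ones from the question:

        # <device>   <mount point>   <type>   <options>                                   <dump> <pass>
        /dev/sda5    /media/shared   ntfs-3g  users,nls=utf8,locale=en_US.UTF-8,exec,rw   0      0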

    Read the article

  • Umbraco Permissions Script - Secure Version

    - by Vizioz Limited
    Back in May I blogged about how to set permissions for Umbraco using SetACL to set the appropriate directory permissions based on the installation recommendations. Recently I have been working on a site for a client who wanted every security item to be locked down as tightly as possible, so I modified the script based on the Umbraco security best practices. I thought I'd share it with everyone; if I have missed anything, or if anyone has any suggestions on how to improve this, please let me know :)

    Please refer to my previous post regarding the SetACL command line application that you will need.

    I suggest you save the following into a batch file called umbPermSecure.bat:

        @echo off
        REM Script to setup the Security Permissions for an Umbraco site
        REM This script will give your machine Network Service the minimum rights required
        REM for Umbraco to work
        REM I suggest you update this script to also remove any users who do not need
        REM access to the web folders
        REM **** Pre-requisites ****
        REM You will need to download - http://setacl.sourceforge.net/
        REM It is assumed that you have stored SetACL in a directory called C:\SetACL; if
        REM not, you will need to modify the script.
        REM **** Usage ****
        REM You need to pass in the path for the root of your Umbraco directory
        REM E.g. umbPermSecure.bat C:\inetpub\umbracoroot
        @echo umbPermSecure.bat - Script to set Umbraco File and Directory Permissions
        @echo based on the Umbraco Security Best Practices Document (13th March 2009)
        @echo Published by Chris Houston - 19th October 2009
        @echo http://blog.vizioz.com
        @echo Adding READ only access
        SetACL.exe -on "%1" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\web.config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\bin" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\umbraco" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        @echo Adding READ and EXECUTE access
        SetACL.exe -on "%1\app_code" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\usercontrols" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read_ex" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        @echo Adding READ, WRITE and MODIFY access
        SetACL.exe -on "%1\config" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\css" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\data" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\masterpages" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\media" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\python" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\scripts" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"
        SetACL.exe -on "%1\xslt" -ot file -actn ace -ace "n:%computername%\NETWORK SERVICE;p:read" -ace "n:%computername%\NETWORK SERVICE;p:change" -actn clear -clr "dacl,sacl" -log "c:\setacl\log.txt"

    Read the article

  • Deployment Options for AutoVue 20.0 Users

    - by celine.beck
    AutoVue release 20.0 boasts a brand new architecture. As part of this product re-architecture, AutoVue can now be deployed either as a desktop deployment, to serve the needs of individual users in their personal productivity, or in a client/server deployment for those that require connections to enterprise applications / back-end systems.

    The most common question that we hear from our customers about this new architecture is the following: "Is AutoVue Desktop Version still part of release 20.0, and if so, what is the difference between AutoVue Desktop Version and the desktop deployment of AutoVue release 20.0?" A detailed answer to these questions is provided in a very complete article entitled Understanding Deployment Options for AutoVue 19.3 Desktop Version users upgrading to AutoVue 20.0 (note 1058254.1), which was posted on My Oracle Support.

    Is AutoVue Desktop Version still part of AutoVue 20.0? Yes, AutoVue Desktop Version 20.0 is still available to customers and partners, as a maintenance release of AutoVue 19.3. As such, it will not contain any of the new capabilities featured in AutoVue release 20.0. All format enhancements and new format support have been added to release 20.0 Desktop Version, though.

    What is the difference between AutoVue Desktop Version 20.0 and the desktop deployment of AutoVue release 20.0? The AutoVue 20.0 desktop deployment works like the AutoVue Desktop Version: it is installed as a standalone product on each user's machine and runs a local instance of AutoVue. The AutoVue 20.0 desktop deployment includes all new features, formats and performance enhancements included in release 20.0 (walkthrough capability, improved compare, ...).

    What deployment options are available to AutoVue 19.3 Desktop Version customers? AutoVue Desktop Version users can evolve at their own pace to the new AutoVue platform. With release 20.0, customers can opt to:

    Option 1: Stay on AutoVue Desktop Version 20.0
    Option 2: Migrate to AutoVue and select the desktop deployment method
    Option 3: Migrate to AutoVue and select the client/server deployment method

    What is the client/server deployment of AutoVue 20.0? The client/server deployment has AutoVue installed on a server, to which local client machines connect to access and view documents. The AutoVue 20.0 client/server deployment allows users to leverage the new online/offline capabilities in release 20.0 and easily switch between online and offline modes of operation. With the client/server deployment, customers also get a complete, open and standards-based set of integration tools that allows them to tie AutoVue to any enterprise application, providing users with a consistent view of data and business objects and expanding workflow automation to document-based processes.

    Related articles: AutoVue Release 20.0 Now Available, New Walkthrough Capability in AutoVue 20.0, Watch the AutoVue 20.0 Release Webcast, April 27 at 12pm EST

    Read the article

  • Add Enhanced Balloon Tooltips to Firefox

    - by Asian Angel
    The default balloon tooltip in Firefox does well at times, but there are instances when more information would be much better. The Tooltip Plus extension for Firefox will give your browser that nice extra information boost.

    Before & After: For our example we have placed the "before & after" shots together for better comparison. First off we started with the How-To Geek logo. Note: the extension does not display the original URL behind shortened URLs. Next we moved on to a permanently linked article title, then the "Reviews Tab" in the How-To Geek website toolbar, the article tags listing just beneath the HTG website toolbar, and the link for subscribing to our RSS feed. In each instance you could actually see the address behind the links. The Tooltip Plus extension will also help out with images in webpages (including "Alt Text" if present). Notice that the link for the image is now available for you to view.

    Options: The options are extremely simple to work with. Decide if you want a document icon to display, the size of the icon, and whether "Alt Text" for images should be displayed or not.

    Conclusion: The Tooltip Plus extension does one thing and does it very well…it gives you that extra bit of information when you need it.

    Links: Download the Tooltip Plus extension (Mozilla Add-ons)

    Read the article

  • links for 2010-04-28

    - by Bob Rhubart
    Guido Schmutz: Oracle BPM11g available!
    Oracle ACE Director Guido Schmutz shares his impressions after attending a hands-on workshop conducted by Masons of SOA member Clemens Utschig-Utschig. (tags: oracle otn oracleace bpm soa soasuite)

    Elena Zannoni: 2010 Collaboration Summit Impressions
    Elena Zannoni has collected her thoughts on #C10 and shares them in this great blog post. (tags: oracle otn linux architecture collaborate2010)

    Hajo Normann: BPMN 2.0 in Oracle BPM Suite: The future of BPM starts now
    "The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends." -- Oracle ACE Director and Masons of SOA member Hajo Normann (tags: oracle otn oracleace bpm soa sca)

    Brian Dirking: AIIM Best Practice Awards to Two Oracle Customers
    Brian Dirking's great write-up of the AIIM Awards Banquet, at which the Bureau of Indian Affairs and the Charles Town Police Department were among the winners of the 2010 Carl E. Nelson Best Practices Awards. (tags: oracle otn aiim bpm ecm enterprise2.0)

    Mark Wilcox: Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services
    Live Webcast: Improve Time-to-Market and Reduce Cost with Oracle Directory Services. Event Date: Thursday, May 27, 2010. Event Time: 10:00 AM Pacific / 1:00 PM Eastern (tags: oracle otn webcast security identitymanagement)

    Celine Beck: Introducing AutoVue Document Print Service
    Celine Beck offers a detailed overview of Oracle AutoVue. (tags: oracle otn entarch visualization printing)

    Vikas Jain: What's new in OWSM 11gR1 PS2 (11.1.1.3.0)?
    Vikas Jain shares links to resources relevant to the recently released patch set for Oracle Web Services Manager 11gR1. (tags: oracle otn soa webservices owsm)

    @theovanarem: Oracle SOA Suite 11g Release 1 Patch Set 2
    Theo Van Arem shares links to several resources relevant to the release of the latest patch set for Oracle SOA Suite 11g. (tags: oracle otn soa soasuite middleware)

    @vambenepe: Analyzing the VMforce announcement
    "The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That's it." -- William Vambenepe (tags: oracle otn cloud paas)

    Read the article

  • DevConnections jQuery Session Slides and Samples posted

    - by Rick Strahl
    I’ve posted all of my slides and samples from the DevConnections VS 2010 Launch event last week in Vegas. All three sessions are contained in a single zip file with all slide decks and samples in one place: www.west-wind.com/files/conferences/jquery.zip

    There were 3 separate sessions:

    Using jQuery with ASP.NET: Starting with an overview of jQuery client features via many short and fun examples, you'll find out about core features like the power of selectors to select document elements, how to manipulate these elements with jQuery's wrapped set methods in a browser-independent way, how to hook up and handle events easily, and generally how to apply the concepts of unobtrusive JavaScript to client scripting. The session also covers AJAX interaction between jQuery and .NET server-side code using several different approaches, including sending HTML and JSON data, and how to avoid user interface duplication by using client-side templating. This session relies heavily on live examples and walk-throughs.

    jQuery Extensibility and Integration with ASP.NET Server Controls: One of the great strengths of the jQuery JavaScript framework is its simple, yet powerful extensibility model, which has resulted in an explosion of plug-ins available for jQuery. You need it - chances are there's a plug-in for it! In this session we'll look at a few plug-ins to demonstrate the power of the jQuery plug-in model before diving in and creating our own custom jQuery plug-ins. We'll look at how to create a plug-in from scratch as well as discussing when it makes sense to do so. Once you have a plug-in it can also be useful to integrate it more seamlessly with ASP.NET by creating server controls that coordinate both server-side and jQuery client-side behavior. I'll demonstrate a host of custom components that utilize a combination of client-side jQuery functionality and server-side ASP.NET server controls to provide smooth integration in the user interface development process. This topic focuses on component development, both for pure client-side plug-ins and mixed-mode controls.

    jQuery Tips and Tricks: This session was kind of a last-minute substitution for an ASP.NET AJAX talk. Nothing too radical here :-), but I focused on things that have been most productive for me. Look at the slide deck for individual points and some of the specific samples.

    It was interesting to see that, unlike at previous conferences, this time around all the sessions were fairly packed - interest in jQuery is definitely getting more pronounced, especially with Microsoft's recent announcement of focusing on jQuery integration rather than continuing on the path of ASP.NET AJAX - which is a welcome change. Most of the samples also use the West Wind Web & Ajax Toolkit and the support tools contained within it - a snapshot version of the toolkit is included in the samples download. Specifically, a number of the samples use functionality in the ww.jquery.js support file, which contains a fairly large set of plug-ins and helper functionality - most of these pieces, while contained in the single file, are self-contained and can be lifted out of it (several people asked). Hopefully you'll find something useful in these slides and samples.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, jQuery
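    To give a flavor of the plug-in model covered in the second session, here is a minimal from-scratch jQuery plug-in sketch; the plug-in name and its option are invented for illustration and are not taken from the session samples:

        // Wrap in an IIFE so $ safely refers to jQuery inside the plug-in.
        (function ($) {
            // A hypothetical plug-in that highlights the matched elements.
            $.fn.highlight = function (options) {
                // Merge caller-supplied options with defaults.
                var settings = $.extend({ color: "yellow" }, options);
                // Return 'this' so the wrapped set stays chainable.
                return this.each(function () {
                    $(this).css("background-color", settings.color);
                });
            };
        })(jQuery);

        // Usage: $("h1 a").highlight({ color: "#ffd" });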

    Read the article

  • Innovation, Adaptability and Agility Emerge As Common Themes at ACORD LOMA Insurance Forum

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, is blogging from the show floor of the ACORD LOMA Insurance Forum this week.

    Sessions at the ACORD LOMA Insurance Forum this week highlighted the need for insurance companies to think creatively and be innovative with their technology in order to adapt to continuously shifting market dynamics and drive business efficiency and agility. LOMA President & CEO Robert Kerzner kicked off the day on Tuesday, citing how the recent downturn and recovery have impacted the insurance industry and the ways that companies are doing business. He encouraged carriers to look for new ways to deliver solutions and offer a better service experience for consumers. ACORD President & CEO Gregory Maciag reinforced Kerzner's remarks, noting how the industry's approach to technology and development of industry standards has evolved over the association's 40-year history, and cited how the continued rise of mobile computing will change the way many carriers do business today and in the future.

    Drawing from his own experiences, popular keynote speaker and Apple co-founder Steve Wozniak continued this theme, delving into ways that insurers can unite business with technology. "iWoz" encouraged insurers to foster an entrepreneurial mindset in a corporate environment to create a culture of creativity and innovation. He noted that true innovation in business comes from those who have a passion for what they do. Innovation was also a common theme in several sessions throughout the day, with topics ranging from modernization of core systems to automated underwriting, distribution management, CRM and customer communications management. It was evident that insurers have begun to move past the "old school" processes and systems that constrain agility, implementing new process models and modern technology to become nimble and more adaptive to the market.

    Oracle Insurance executives shared a few examples of how insurers are achieving innovation during our Platinum Sponsor session, "Adaptive System Transformation: Making Agility More Than a Buzzword." Oracle Insurance Senior Vice President and General Manager Don Russo was joined by Chuck Johnston, vice president, global strategy and alliances, and Srini Venkatasantham, vice president of product strategy. The three shared how Oracle's adaptive solutions for insurance, built on the key pillars of an adaptive system (configurable applications, accessible information, extensible content and flexible process), have helped insurers respond rapidly, perform effectively and win more business.

    Insurers looking to innovate their business with adaptive insurance solutions, including policy administration, business intelligence, enterprise document automation, rating and underwriting, claims, CRM and more, stopped by the Oracle Insurance booth on the exhibit floor. It was a premier destination for many participating in the exhibit hall tours conducted throughout the day.

    Finally, red was definitely the color of the evening at the Oracle Insurance "Red Hot" customer celebration at the House of Blues. The event provided a great opportunity for our customers to come together and network with the Oracle Insurance team and their peers in the industry. We look forward to visiting more with our customers and making new connections today.

    Helen Pitts is senior product marketing manager for Oracle Insurance.

    Read the article

  • Access Windows Home Server from an Ubuntu Computer on your Network

    - by Mysticgeek
    If you’re a Windows Home Server user, there may be times when you need to access it from an Ubuntu machine on your network. Today we take a look at the process of accessing files on your home server from Ubuntu. Note: In this example we’re using Windows Home Server with PowerPack 3, and Ubuntu 10.04 running on a home network.

    Access WHS from Ubuntu

    To access files on your home server from Ubuntu, click on Places then select Network. You should now see your home server listed in the Network folder as well as other Windows machines…double-click the server to access it. If you don’t see your server listed, you might need to go into Windows Network \ Workgroup and find it there. You’ll be prompted to enter in the correct credentials for WHS just as you would when accessing it from a Windows machine. It’s your choice if you want to have the password remembered or not…make your selection and click Connect.

    Now you will see the available folders on your home server. In this example we signed in with Administrator credentials, so we have access to everything. Double-click on the folder share you want to access content from…here we see MS Office documents on the server. Or, here we take a look at a music folder with various MP3 files which you can make Ubuntu play. You can access the files directly from the server, provided there is a Linux app that can handle the file type. In this example we opened a Word document in OpenOffice. Here we’re playing an MKV movie file from the server in Totem Movie Player. You can easily search for files on the server as well… If you want to store your Ubuntu files on WHS it’s just a matter of dragging them to the correct WHS folder you want them in.

    If you’re using an Ubuntu computer on your home network and need to access files from Windows Home Server, luckily it’s a straight-forward process. You’ll often have to find the correct software to use Windows files, but even that’s getting much easier with version 10.04.
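    If you prefer the terminal to the Places menu, the same connection can be made with a CIFS mount. This is a sketch: the server name and mount point are examples (Software is one of the default WHS shares), and it assumes the CIFS utilities (the smbfs/cifs-utils package) are installed:

        # Create a mount point and mount the WHS share; you'll be prompted for the password
        sudo mkdir -p /mnt/whs
        sudo mount -t cifs //HOMESERVER/Software /mnt/whs -o username=Administrator

        # Unmount when finished
        sudo umount /mnt/whs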

    Read the article

  • Improving CSS With .LESS

    Cascading Style Sheets, or CSS, is a syntax used to describe the look and feel of the elements in a web page. CSS allows a web developer to separate the document content - the HTML, text, and images - from the presentation of that content. Such separation makes the markup in a page easier to read, understand, and update; it can result in reduced bandwidth, as the style information can be specified in a separate file and cached by the browser; and it makes site-wide changes easier to apply. For a great example of the flexibility and power of CSS, check out CSS Zen Garden. This website has a single page with fixed markup, but allows web developers from around the world to submit CSS rules to define alternate presentation information. Unfortunately, certain aspects of CSS's syntax leave a bit to be desired. Many style sheets include repeated styling information because CSS does not allow the use of variables. Such repetition makes the resulting style sheet lengthier and harder to read; it results in more rules that need to be changed when the website is redesigned to use a new primary color. Specifying inherited CSS rules, such as indicating that a elements (i.e., hyperlinks) in h1 elements should not be underlined, requires creating a single selector name, like h1 a. Ideally, CSS would allow for nested rules, enabling you to define the a rules directly within the h1 rules. .LESS is a free, open-source port of Ruby's LESS library. LESS (and .LESS, by extension) is a parser that allows web developers to create style sheets using new and improved language features, including variables, operations, mixins, and nested rules. Behind the scenes, .LESS converts the enhanced CSS rules into standard CSS rules. This conversion can happen automatically and on-demand through the use of an HTTP Handler, or be done manually as part of the build process. Moreover, .LESS can be configured to automatically minify the resulting CSS, saving bandwidth and making the end user's experience a snappier one. This article shows how to get started using .LESS in your ASP.NET websites. Read on to learn more!
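    To make those features concrete, here is a small sketch of LESS source; the selector, variable name and colors are invented for illustration, and .LESS compiles rules like these down to plain CSS:

        // A variable holding the site's primary color.
        @primary: #4d926f;

        // A mixin: a reusable group of declarations, with an optional parameter.
        .rounded(@radius: 5px) {
          border-radius: @radius;
        }

        h1 {
          color: @primary;
          .rounded(8px);            // apply the mixin

          // A nested rule: compiles to "h1 a { ... }"
          a {
            text-decoration: none;
            color: @primary - #111; // an operation on the color variable
          }
        }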

    Read the article

  • How to create an ad hoc workflow in UCM

    - by vijaykumar.yenne
    UCM has an inbuilt workflow engine that can handle a document-centric approval/rejection process to ensure the right set of assets go into the repository. Anybody who has gone through the documentation is aware that there are two types of workflows that can be defined using the Workflow Admin applet in UCM, namely Criteria and Basic. While a Criteria workflow is an automatic process based on certain metadata attributes (Security Group and one of the metadata fields), a Basic workflow is a manual workflow that needs to be initiated by the admin. Any workflow that can be put on the whiteboard can be translated into a UCM workflow process, and there are concepts like sub-workflows, tokens, events and Idoc scripting that can be introduced to handle any kind of complex workflow. There is a specific Workflow Implementation Guide that explains the concepts in detail.

    One of the standard queries I come across is how to handle ad hoc workflows, where at the time of contributing the content the contributors would like to decide on the workflow to be initiated and the users to be picked for approval in each step; hence this post. This is what I want to achieve: I would like to display on my Checkin screen the kind of workflows that a contributor could choose from. Based on the workflow the contributor chooses, the other metadata fields (Step One, Step Two and Step Three) need to be filled in, and these fields decide who the approvers are going to be.

    1. Create a criteria workflow called One_Step_Review.
    2. Create two tokens: StepOne <$wfAddUser(xWorkflowStepOne, "user")$> and OriginalAuthor <$wfAddUser(wfGet("OriginalAuthor"), "user")$>.
    3. Create two steps in the workflow created (One_Step_Review).
    4. Edit Step1 of the workflow, add the StepOne token and select the review permission.
    5. In the exit conditions tab, require at least one reviewer.
    6. In the events tab, add an entry event <$wfSet("OriginalAuthor", dDocAuthor)$> to capture the contributor, who shall be notified in the second step of the workflow.
    7. Add the second step, Notify_Author, to the workflow.
    8. Add the OriginalAuthor token to the above step.
    9. Enable the workflow.
    10. Open the Configuration Manager applet and create a metadata field Workflow with the option list enabled, and add the workflow names as the list of values.
    11. Create another metadata field WorkflowStepOne with the option list configured to the Users view. This shall display all the users registered with UCM, which, when selected, shall be associated with the tokens in the workflow (refer to the tokens above).

    As indicated in the above steps, you could create multiple workflows and associate the custom metadata field values to the tokens so that contributors can decide who approves their content.
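    Pulling the Idoc Script fragments from the steps above into one place, the two token definitions and the entry event are:

        StepOne token:        <$wfAddUser(xWorkflowStepOne, "user")$>
        OriginalAuthor token: <$wfAddUser(wfGet("OriginalAuthor"), "user")$>
        Step1 entry event:    <$wfSet("OriginalAuthor", dDocAuthor)$>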

    Read the article

  • Creating a bare-bones web browser: after the HTML parser, JavaScript parser, etc. have done their work, how do I display the content of the webpage?

    - by aste123
    This is a personal project to learn computer programming. I took a look at this: https://www.udacity.com/course/viewer#!/c-cs262

    The following is the approach taken in it: an abstract syntax tree is created, but the JavaScript is not yet completely broken down, so as not to confuse it with the HTML tags. Then the JavaScript interpreter is called on it. The JavaScript interpreter stores the text from write() and document.write() to be used later. Then a graphics library in Python is called, which converts everything to a pdf file; that is then converted into png or jpeg and displayed.

    My question: I want to display the actual text in a window (which I will design later) like Firefox or Chrome does, instead of image files, so that the data can be selected, copied, etc. by the user of the browser. How do I accomplish this? In other words, what are the other elements of a bare-bones web browser that I am missing? I would prefer to implement most of the stuff in C++, although if things seem too complicated I might go with Python to save time, create a prototype, and later build another bare-bones browser in C++ with more features. This is a project to learn more; I do realize we already have lots of reliable browsers like Firefox, etc.

    The way I feel it is done: I think after all the broken-down contents have been created by the parsers and interpreters, I will need to access them individually from within the window's code (like Qt) and then decide upon a good way to display them. I am not sure if that is the way it should be done.

    Additions after a useful comment by Kilian Foth: I found this page: http://friendlybit.com/css/rendering-a-web-page-step-by-step/

    14. A DOM tree is built out of the broken HTML
    15. New requests are made to the server for each new resource that is found in the HTML source (typically images, style sheets, and JavaScript files). Go back to step 3 and repeat for each resource.
    16. Stylesheets are parsed, and the rendering information in each gets attached to the matching node in the DOM tree
    17. Javascript is parsed and executed, and DOM nodes are moved and style information is updated accordingly
    18. The browser renders the page on the screen according to the DOM tree and the style information for each node
    19. You see the page on the screen

    I need help with step 18. How do I do that? How much work do WebKit and Gecko do? I want to use a ready-made layout renderer for step 18, and not for anything that comes before it.

    Read the article

  • How to internally rewrite a page when requested from a specific HTTP_HOST

    - by Andy
    Hi all, I have a Drupal site, site.com, and our client has a campaign that they're promoting for which they've bought a new domain name, campaign.com. I'd like it so that a request for campaign.com internally rewrites to a particular page of the Drupal site. Note that Drupal uses an .htaccess file in the document root. The normal Drupal rewrite is:

        # Rewrite URLs of the form 'x' to the form 'index.php?q=x'.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

    I added the following before the normal rewrite:

        # Custom URLS (eg. microsites) go here
        RewriteCond %{HTTP_HOST} =campaign.com
        RewriteCond %{REQUEST_URI} =/
        RewriteRule ^ index.php?q=node/22 [L]

    Unfortunately it doesn't work; it just shows the homepage. Turning on the rewrite log, I get this:

        1.  [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/ ->
        2.  [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] applying pattern '^' to uri ''
        3.  [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] rewrite '' -> 'index.php?q=node/22'
        4.  [rid#2da8ea8/initial] (3) split uri=index.php?q=node/22 -> uri=index.php, args=q=node/22
        5.  [rid#2da8ea8/initial] (3) [perdir D:/wamp/www/] add per-dir prefix: index.php -> D:/wamp/www/index.php
        6.  [rid#2da8ea8/initial] (2) [perdir D:/wamp/www/] strip document_root prefix: D:/wamp/www/index.php -> /index.php
        7.  [rid#2da8ea8/initial] (1) [perdir D:/wamp/www/] internal redirect with /index.php [INTERNAL REDIRECT]
        8.  [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php -> index.php
        9.  [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^' to uri 'index.php'
        10. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] strip per-dir prefix: D:/wamp/www/index.php -> index.php
        11. [rid#2da7770/initial/redir#1] (3) [perdir D:/wamp/www/] applying pattern '^(.*)$' to uri 'index.php'
        12. [rid#2da7770/initial/redir#1] (1) [perdir D:/wamp/www/] pass through D:/wamp/www/index.php

    I'm not used to mod_rewrite, so I might be missing something, but comparing the logs from a call to http://site.com/node/3 and from http://campaign.com/ I can't see any meaningful difference. Specifically, uri and args on line 4 seem correct, the internal redirect on line 7 seems right, and the pass-through on line 12 seems right (because the file index.php exists). But for some reason it seems the query string was discarded/ignored around the time of the internal redirect. I'm completely stumped. Also, if anyone could provide a reference on understanding the rewrite log, that might help. It'd be great if there's a way to track the query string through the internal redirect. FWIW I'm using WampServer 2.1 with Apache 2.2.17.
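    One way to narrow this down (a hedged diagnostic sketch, not a confirmed fix) is to make the campaign rewrite temporarily external, so the browser's address bar shows exactly which URL and query string Drupal ends up receiving:

        # Temporary diagnostic: redirect externally (302) instead of internally,
        # making the rewritten URL and query string visible in the browser.
        RewriteCond %{HTTP_HOST} =campaign.com
        RewriteCond %{REQUEST_URI} =/
        RewriteRule ^ /index.php?q=node/22 [R=302,L]

    If the query string survives the external redirect, the rewrite itself is sound, and the problem more likely lies in how Drupal treats q for what it considers the front page.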

    Read the article

  • Tron: Legacy, 3D goggles, and embedded UA

    - by Roger Hart
    The 3D edition of Tron: Legacy opens with embedded user assistance. The film starts with an iconic white-on-black command-prompt message exhorting viewers to keep their 3D glasses on throughout. I can't quote it verbatim and, at the time of writing, neither could anyone findable with five minutes of googling. But it was something like: "Although parts of the movie are 2D, it was shot in 3D, and glasses should be worn at all times. This is how it was intended to be viewed."

    Yeah - "intended". That part is verbatim. Wow.

    Now, I appreciate that even out of the small subset of readers who care a rat's ass for critical theory, few will be quite so gung-ho for the whole "death of the author" shtick as I tend to be. And yes, this is ergonomic rather than interpretive, but really - telling an audience how you expect them to watch a movie? That's up there with Big Steve's "you're holding it wrong". Even if it solves the problem, it's pretty arrogant. If anything, it's worse than RTFM. And if enough people are doing it wrong that you have to include the announcement, then maybe - just maybe - you've got a UX and/or design problem.

    Plus, current 3D glasses are like sitting in a darkened room, cosplaying the lovechild of Spider Jerusalem and Jarvis Cocker. Ok, so that observation was weirder than it was helpful; but seriously, nobody wants to wear the glasses if they don't have to. They ruin the visual experience of the non-3D sections, and personally, I find them pretty disruptive to the suspension of disbelief.

    This is an old, old problem, and I'm carping on about it because Tron is enjoyable mass-market slush. It's easier for me to say "no, I can't just put some text on it. It's fundamentally broken; redesign it." in the middle of a small-ish, agile software project than it would be for some beleaguered production assistant at the end of editing a $200 million movie. But lots of folks in software don't even get to do that. Way more people are going to see Tron, and be annoyed by this, than will ever read a technical communication blog. So hopefully, after two hours of being mildly annoyed, wanting to turn the brightness up, and slowly getting a headache, they'll realise something very, very important: you just can't document your way out of a shoddy UI.

    Read the article

  • AutoVue at the Oracle Asset Lifecycle Management Summit

    - by celine.beck
    I recently had the opportunity to attend and present the integration between AutoVue and Primavera P6 during the Oracle ALM Summit, which was held in March at Redwood Shores, on Oracle headquarters grounds. The ALM Summit brought together over 300 Oracle maintenance practitioners, in roles spanning maintenance management and IT, who endured the foggy and rainy San Francisco weather to attend the 4th edition of this Oracle-driven conference.

    Following a general session, Ralph Rio from ARC Advisory Group delivered a very interesting keynote discussing Asset Management directions, both in the short and the long run. An interesting point Ralph raised is that most organizations have done a good job of improving performance in the design/build, operate/maintain and portfolio management phases by leveraging solutions like Asset Lifecycle Management and Project & Portfolio Management; however, there seems to be room for improvement between those phases, when information flows from one group to another during data handover, or when the time comes to update or modify drawings to reflect the reality of the physical assets.

    This is where AutoVue comes into play. By integrating with enterprise applications like content management systems, asset lifecycle management applications and project management solutions, AutoVue can be a real process enabler, streamlining information flows from concept and design through to decommissioning and ensuring that all project stakeholders have access to asset information and engineering data throughout the asset lifecycle. AutoVue's built-in digital annotation capabilities allow maintenance workers and technicians to report changes in configuration and visually capture the delta between the as-built and as-maintained versions of asset documents. This information can then be easily handed over to engineers, who can identify the changes and incorporate them into the drawings during the next round of document revisions. PPL Power Generation, an electric utility headquartered in Allentown, Pennsylvania, discussed this usage of AutoVue during an interesting webcast on AutoVue's role in the utilities space.

    After the keynote sessions, participants broke off into product-centric tracks around Oracle's Asset Lifecycle Management solutions (E-Business Suite, PeopleSoft, and JD Edwards). The second day of the conference was the occasion for us to present the integration between AutoVue and Primavera P6 to the Maintenance Summit audience. The presentation was a great success and generated much discussion with partners and customers during breaks. People seemed highly interested in learning more about our plans for integrating AutoVue and Primavera P6 with Oracle's ALM solutions... stay tuned for further information on the subject!

    Read the article

  • SPSiteDataQuery Returns Only One List Type At A Time

    - by Brian Jackett
    The SPSiteDataQuery class in SharePoint 2007 is very powerful, but it has a few limitations. One of these limitations, which I ran into this morning (and which caused hours of frustration), is that you can only return results from one list type at a time. For example, if you are trying to query items from an out-of-the-box custom list (list type = 100) and a document library (list type = 101), you will only get items from the custom list (SPSiteDataQuery defaults to list type = 100). In my situation I was attempting to query multiple lists (created from custom list templates 10001 and 10002), each with its own content types.

    Solution

    Since I am only able to return results from one list type at a time, I was forced to run my query twice, each time setting the ServerTemplate (which translates to ListTemplateId if you are defining custom list templates) before executing the query. Below is a snippet of the code to accomplish this.

        // requires Microsoft.SharePoint and System.Data.DataSetExtensions references
        // using Microsoft.SharePoint;
        // using System.Data;   // for DataTable.AsEnumerable()
        // using System.Linq;   // for Concat()

        SPSiteDataQuery spDataQuery = new SPSiteDataQuery();
        spDataQuery.Lists = "<Lists ServerTemplate='10001' />";
        // ... set rest of properties for spDataQuery (Query, ViewFields, Webs, etc.)

        var results = SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable();

        // only change to SPSiteDataQuery is the ServerTemplate attribute of the Lists property
        spDataQuery.Lists = "<Lists ServerTemplate='10002' />";

        // re-execute the query and concatenate the results to the existing rows
        results = results.Concat(SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable());

    Conclusion

    Overall this isn't an elegant solution, but it's a workaround for a limitation of SPSiteDataQuery. I am now able to return data from multiple lists spread across various list templates. I'd like to thank those who commented on this MSDN page, which finally pointed out the limitation to me. Also, thanks to Mark Rackley for "name dropping" me in his latest article (though I humbly insist I don't belong in such company), as well as encouraging me to write up a quick post on this issue despite my busy schedule. Hopefully this post saves some of you from the frustrations I experienced this morning using SPSiteDataQuery. Until next time, Happy SharePoint'ing all. -Frog Out

    Links: MSDN article for SPSiteDataQuery.Lists: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsitedataquery.lists.aspx

    Read the article

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth a thousand words. And then we have our diagrams; how many words are they worth?

    [Figure: our friends from the Human Relations sample schema]

    So our models describe how the data 'works', whether that be at a logical business level or at a technical, physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options: Comments, Comments in RDBMS, and Notes. So what's the difference?

    Comments: You're describing the entity/table or attribute/column. This information will NOT be published in the database; it will only be available in the model, and hence to folks with access to the model. (Table comments live in the design only!)

    Comments in RDBMS: You're doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions, so your awesome documentation will be viewable to anyone with access to the database. (RDBMS is an acronym for Relational Database Management System, of which Oracle is one of the first commercial examples.) If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary; the sketch below shows the statements involved.

    Notes: A place for you to add notes, maybe from a design meeting. Or maybe you're using this as a to-do or requirements list. Basically it's for anything that doesn't literally describe the object at hand; that's what the comments are for. (I totally made these up.) These are free-text fields, and you can put whatever you want here; just make sure you put stuff here that's worth reading. And it will live on... forever.
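    For the "Comments in RDBMS" flavor, the generated DDL boils down to standard Oracle COMMENT statements. A minimal sketch against an HR-style employees table (the wording is illustrative):

        -- Stored in the data dictionary; readable by anyone who can query
        -- ALL_TAB_COMMENTS and ALL_COL_COMMENTS.
        COMMENT ON TABLE employees IS
          'One row per current or former employee.';

        COMMENT ON COLUMN employees.hire_date IS
          'First day of employment (DATE, no time component).';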

    Read the article

  • Improving CSS With .LESS

    Cascading Style Sheets, or CSS, is a syntax used to describe the look and feel of the elements in a web page. CSS allows a web developer to separate the document content - the HTML, text, and images - from the presentation of that content. Such separation makes the markup in a page easier to read, understand, and update; it can result in reduced bandwidth, as the style information can be specified in a separate file and cached by the browser; and it makes site-wide changes easier to apply. For a great example of the flexibility and power of CSS, check out CSS Zen Garden. This website has a single page with fixed markup, but allows web developers from around the world to submit CSS rules to define alternate presentation information.

    Unfortunately, certain aspects of CSS's syntax leave a bit to be desired. Many style sheets include repeated styling information because CSS does not allow the use of variables. Such repetition makes the resulting style sheet lengthier and harder to read; it results in more rules that need to be changed when the website is redesigned to use a new primary color. Specifying inherited CSS rules, such as indicating that a elements (i.e., hyperlinks) in h1 elements should not be underlined, requires creating a single selector name, like h1 a. Ideally, CSS would allow for nested rules, enabling you to define the a rules directly within the h1 rules.

    .LESS is a free, open-source port of Ruby's LESS library. LESS (and .LESS, by extension) is a parser that allows web developers to create style sheets using new and improved language features, including variables, operations, mixins, and nested rules (see the sketch below). Behind the scenes, .LESS converts the enhanced CSS rules into standard CSS rules. This conversion can happen automatically and on demand through the use of an HTTP Handler, or be done manually as part of the build process. Moreover, .LESS can be configured to automatically minify the resulting CSS, saving bandwidth and making the end user's experience a snappier one. This article shows how to get started using .LESS in your ASP.NET websites. Read on to learn more! Read More >
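    To make those language features concrete, here is a small sketch of what .LESS accepts - a variable, a parameterized mixin, and a nested rule; the class and color names are illustrative:

        @brand: #4D926F;            // variable: change the primary color in one place

        .rounded(@radius: 4px) {    // mixin with a default parameter
          border-radius: @radius;
        }

        h1 {
          color: @brand;
          .rounded(6px);            // reuse the mixin
          a {                       // nested rule: compiles to the "h1 a" selector
            text-decoration: none;
          }
        }

    Behind the scenes, .LESS expands this into ordinary CSS (an h1 rule and an h1 a rule) before it ever reaches the browser.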

    Read the article

  • Oracle AutoVue Key Highlights from Oracle OpenWorld 2012

    - by Celine Beck
    We closed another successful Oracle OpenWorld for AutoVue. Thanks to everyone who joined us this year. As usual, from customer presentations to evening networking activities, there was enough to keep us busy during the entire event. Here is a summary of some of the key highlights of the conference.

    Sessions: We had two AutoVue-specific sessions during Oracle OpenWorld this year. The first session was part of the Product Lifecycle Management track and covered how AutoVue can be used to help drive effective decision making and streamline design-for-manufacturing processes. Attendees had the opportunity to learn from customer speaker GLOBALFOUNDRIES how they have been leveraging Oracle AutoVue within Agile PLM to enable a high degree of collaboration during the exceptionally creative phases of their product development processes, securely, without risking valuable intellectual property. If you are interested, you can download the presentation by visiting launch.oracle.com/?plmopenworld2012.

    AutoVue was also featured as part of the Utilities track. This session focused on how visualization solutions play a critical role in the plant optimization and configuration strategies defined by owners and operators of power generation facilities. Attendees learnt how, integrated with document management systems and enterprise applications like Oracle Primavera and Asset Lifecycle Management, AutoVue improves change management processes; minimizes risk by providing access to accurate engineering drawings that capture and reflect the as-maintained status of assets; and allows customers to drive complex maintenance projects to successful completion.

    Augmented Business Visualization for Agile PLM: During Oracle OpenWorld we also showcased an Augmented Business Visualization-based solution for Oracle Agile PLM. An Augmented Business Visualization (ABV) solution is one where your structured data (from Oracle Agile PLM, for instance) and your unstructured data (documents, designs, 3D models, etc.) come together to allow you to make better decisions (check out our blog posts on the topic: Augment the Value of Your Data (or Time to Replace the "Attach" Button) and Context is Everything). As part of Agile PLM, the idea is to support more effective decision making by turning 3D assemblies into color-coded reports, and by streamlining business processes like Engineering Change Management, enabling the automatic creation of engineering change requests in Agile PLM directly from documents being viewed in AutoVue.

    More on this coming soon... probably during the Oracle Value Chain Summit, to be held Feb. 4-6, 2013, in San Francisco! Mark your calendars and stay tuned for more information! And thanks again for joining us at Oracle OpenWorld!

    Read the article

  • Changing Focus on my Blog

    - by D'Arcy Lussier
    I try to limit these types of blog posts – the ones where I communicate some change as if I have a loyal subscriber base that will be somehow affected. Still, I think it's of worth, if for nothing else than to document for myself an acknowledgement that my career is evolving. For the last who-knows-how-long, I've had this as my banner:

    It's funny how technology focuses change over time. 3.5-4 years ago I wanted to immerse myself in BizTalk. Then I shifted, focussing on Silverlight. I even started a short-lived Silverlight user group here in Winnipeg that had, IMO, one of the *best* UG logos ever (do a Google search for the old-school Winnipeg Jets logo if you don't catch the reference). And even how I identified myself – as a Developer – isn't really accurate anymore, as I've shifted more into an architect/analyst role at Online Business Systems as well as getting much more involved in business development.

    So I'm switching the focus of this blog a bit. Nothing too drastic, but you'll find my posts aren't necessarily tied to a technology or platform. Instead I'll be focussing on current passions and interests:

    Solution Architecture – Before a line of code is written, a solution is envisioned. The process of performing solution analysis and architecture is an intriguing one that encompasses negotiation and interpersonal skills as much as technical knowledge.

    Business & Entrepreneurship – Creating things, building things, and working with others: business is fascinating and exciting! Entrepreneurship, and intrapreneurship, are growing trends that I've been exploring over the last few years through my conference (www.prairiedevcon.com) and within Online.

    Microsoft – At Online one of my roles is "Microsoft Practice Lead", and my entire career has been built around the Microsoft stack of technologies. That focus won't change here on my blog, and there's tonnes of exciting new products and technologies coming out of Redmond.

    Adoption – This is a very personal subject that's extremely close to my heart. I'm not talking about technology adoption; I'm talking about human adoption. Almost three years ago we adopted our first daughter, Sadie, and two years ago we adopted our second daughter, Skylar: an amazing new chapter in my life as I became a "parent". Adoption is very much misunderstood, and many people have questions about it. Hopefully I can shed some light on our experiences and provide some guidance for those who are looking into it.

    So come along with me as I start chronicling the next phase of my career and life.

    Read the article

  • Fair Comments

    - by Tony Davis
    To what extent is good code self-documenting? In one of the most entertaining sessions I saw at the recent PASS Summit, Jeremiah Peschka (blog | twitter) got a laugh out of a sleepy post-lunch audience with the following remark: "Some developers say good code is self-documenting; I say, get off my team."

    I silently applauded the sentiment. It's not that all comments are useful, but that I mistrust the basic premise that "my code is so clearly written, it doesn't need any comments". I've read many pieces describing the road to self-documenting code, and my problem with most of them is that they feed the myth that comments in code are a sign of weakness. They aren't; in fact, used correctly, I'd say they are essential.

    Regardless of how far intelligent naming can get you in describing what the code does, or how well any accompanying unit tests can explain to your fellow developers why it works that way, it's no excuse not to document fully the public interfaces to your code. Maybe I just mixed with the wrong crowd while learning my favorite language, but when I open a stored procedure I lose the will even to read it unless I see a big Phil Factor- or Jeff Moden-style header summarizing in plain English what the code does, how it fits into the broader application, and a usage example (a sketch of such a header follows below). This public interface describes the high-level process and should explain the role of the code, clearly, for fellow developers, language non-experts, and even any non-technical stakeholders in the project.

    When you step into the body of the code, the low-level details, then I agree that the rules are somewhat different, especially when code is subject to frequent refactoring that can quickly render comments redundant or misleading. At their worst, here, inline comments are sticking plaster to cover up the scars caused by poor naming conventions, failure in clarity when mapping a complex domain into code, or just not entirely understanding the problem (/* this is the clever part */). If you design and refactor your code carefully so that it is as simple as possible, your functions do one thing only, you avoid having two completely different algorithms in the same piece of code, and your functions, classes and variables are intelligently named, then, yes, the need for inline comments should be minimal.

    And yet, even given this, I'd still argue that many languages (T-SQL certainly being one) just don't lend themselves to readability when performing even moderately complex tasks. If the algorithm is complex, I still like to see the occasional helpful comment. Please, therefore, be as liberal as you see fit in the detail of the comments you apply to this editorial, for, like code, it is bound to increase its clarity and usefulness.

    Cheers, Tony.
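    As an illustration, the kind of Phil Factor- or Jeff Moden-style header the editorial has in mind might look like this T-SQL sketch (the procedure name, parameters, and module are invented for the example):

        /*
          Procedure : dbo.usp_GetOverdueInvoices
          Purpose   : Returns all invoices more than @DaysOverdue days past due.
                      Feeds the collections dashboard in the billing module.
          Usage     : EXEC dbo.usp_GetOverdueInvoices @DaysOverdue = 30;
          Notes     : Read-only; safe to run against reporting replicas.
        */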

    Read the article

  • BI Publisher - Hottest Show in Vegas

    - by mike.donohue
    Two days down, two to go. Monday was a very busy and rewarding day. I attended "XML Publisher and FSG for Beginners", given by Susan Behn and Alyssa Johnson from Solution Beacon. It was packed, standing room only... even though it was at 8:00 am. Later in the afternoon, despite running at the same time and in conflict with other Publisher-related sessions, Noelle's session, "The Reporting Platform for Applications: Oracle Business Intelligence Publisher", and my session, "Introduction to Oracle Business Intelligence Publisher", were both very well attended.

    Immediately following our presentations we ran the BI Publisher hands-on lab, which was great fun. The turnout was so large that unfortunately we could not accommodate everyone who came; there were as many as 5 people huddled around each of the 20 machines. All of the groups completed the 2 main exercises. Some groups even took the product for an off-road test drive. Look at all the fun we had...

    For those who could not attend, or who want the hands-on lab document: Hands On Lab Oracle BI Publisher Collaborate 2010.pdf. Note that these lab instructions assume a specific setup and files that you may not have in your environment. You can download and install a trial-license version of BI Publisher from the download page. I highly recommend taking a look at the additional tutorials available on OTN. Big thanks to Dan Vlamis and Jonathan Clark from Vlamis Software Solutions and to the Oracle BIWA SIG for setting up these machines and securing the time and space to run this lab. It was inspiring to see all of the attendees successfully creating reports.

    On Tuesday morning we were up early again for a rousing session on BI Publisher best practices that was also very well attended, especially considering the 8 am start. Later that morning, Ben Bruno from STR Software and two of his customers spoke on the additional functionality and ROI they have achieved by using Publisher within EBS and AventX to fax and email Publisher-generated documents. I spent the afternoon staffing the BI technology demo pod and had a steady flow of people dropping by with questions. I'm having a great conference so far and looking forward to the rest of it.

    Read the article

< Previous Page | 532 533 534 535 536 537 538 539 540 541 542 543  | Next Page >