Search Results

Search found 1550 results on 62 pages for 'recover'.


  • Customizing UPK outputs (Part 1)

    - by [email protected]
    If you are familiar with Oracle's User Productivity Kit, you are aware that UPK is a great product for rapidly developing application training. Did you know that you can also customize the UPK outputs to incorporate your company's logo, colors, and preferred styles? There are several areas that support customization:
    - Logo - Within the developer, you can change the logo for all outputs at one time.
    - Player - The player output uses a style sheet that can be updated to change colors, graphics and other visual branding.
    - Documentation - The print documentation uses a Word-based template that can be modified to match your corporate standards.
    I'll discuss the first one today, and we'll cover the others in subsequent blogs.
    Before you begin:
    - If you are working in a multi-user environment, ensure that you have "Modify" permissions for the Styles directory under the Publishing folder.
    - Make a copy of the current styles. This recommendation is for backup purposes. If something goes wrong, you will have a way to recover.
    - Consider creating your own category by creating a new folder under the Styles directory, and then copying the styles into your new folder. When you upgrade to future versions, the system will overwrite the standard styles with any new feature additions and updates that have been made. With your own category, all of your customizations will remain intact.
    To update the logos in all outputs:
    1. From the Tools Menu, choose Customize Logo.
    2. Select the category if necessary.
    3. Browse to select your logo. You can use any size logo, in any graphic format (*.bmp, *.gif, *.jpeg, *.jpg, *.png, or *.tif). The system will make a copy of your logo and add it to each of the publishing styles.
    4. Choose OK, and the update process begins. It may take a few minutes.
    Helpful hints:
    - The logo you select is used "as is" - no resizing or cropping occurs during this process.
    - The Customize Logo process automates replacing all the logo graphics for online deployment (small_logo.gif and large_logo.gif) and the headers in the documentation outputs. You can manually replace these graphics on an individual style basis if you prefer.
    - The recommended logo size is 230 pixels wide x 44 pixels high. Prior to updating the logos, the system will display the size of the selected logo. If you use a logo that is much larger than the recommended size, the heading area will resize to fit the new logo, but that will impact the space available for your training material.
    - If you are using a multi-user environment, the system will check out the publishing styles to you for the logo updates. After you review the styles, remember to check them in so the rest of your team can access the new changes.
    I'd be interested in hearing (or seeing) how you brand your UPK. Feel free to share in the comments!
    --Maria Cozzolino, Manager of Requirements & UI for UPK Product Development
    PS. For those of you who want to customize the player and documentation NOW, check out the detailed instructions in the Publishing Content chapter of the Content Development Guide.

    Read the article

  • Getting Started with StreamInsight 2.1

    - by Roman Schindlauer
    If you're just beginning to get familiar with StreamInsight, you may be looking for a way to get started. What are the basics? How can I get my first StreamInsight application running so I can see how it works? Where is the 'front door' that will get me going? If that describes you, then this blog entry might be just what you need. If you're already a StreamInsight wiz, keep reading anyway - you may find some helpful links here that you weren't aware of. But here's what we'd like from you experienced readers in particular: if you know of other good resources that we missed, please feel free to add them in the comments below. We appreciate you sharing your expertise. The Book The basic documentation for StreamInsight is located in the MSDN Library (Microsoft StreamInsight 2.1). You'll notice that previous versions of StreamInsight are still there (1.2 and 2.0), but if you're just getting started you can stick to the 2.1 section. The documentation has been organized to function as reference material, which is fine after you're familiar with the technology. But if you're trying to learn the basics, you might want to take a different path instead of just starting at the top. The following is one map you can use. What Is StreamInsight? Here is a sequence of topics that should give you a good overview of what StreamInsight is and how it works: Overview answers the question, "what is it?" StreamInsight Server Architecture gives you a quick look at a high-level architectural drawing StreamInsight Concepts lays out an overview of the basic components Deploying StreamInsight Entities to a StreamInsight Server describes the mechanics of how these components work together Getting an Example Running Once you have this background, go ahead and install StreamInsight and get a basic example up and running: Installation download and install the software StreamInsight Examples walk through a set of 3 simple StreamInsight applications that work together to demonstrate what you learned in the topics above; you can copy and paste the code into Visual Studio, compile, and run That's it - you now have a real, functioning StreamInsight system! Now that you have a handle on the basics, you might want to start digging deeper. 
Digging Deeper Here's a suggested path through the documentation to help you understand the next layer of StreamInsight technologies: Using Event Sources and Event Sinks sources supply data and sinks consume it; this topic gives you an overview of how they work Publishing and Connecting to the StreamInsight Server practical details on how to set up a StreamInsight server A Hitchhiker’s Guide to StreamInsight 2.1 Queries queries are the heart of how StreamInsight performs data analytics, and this whitepaper will help you really understand how they work Using StreamInsight LINQ root through this section for technical details on specific query components Using the StreamInsight Event Flow Debugger in addition to troubleshooting, the debugger is a great way to learn more about what goes on inside a StreamInsight application And Even Deeper Finally, to get a handle on some of the more complex things you can do with StreamInsight, dig into these: Input and Output Adapters adapters can be useful for handling more complex sources and sinks Building Resilient StreamInsight Applications a resilient application is able to recover from system failures Operations this section will help you monitor and troubleshoot a running StreamInsight system The StreamInsight Community As you're designing and developing your StreamInsight solutions, you probably will find it helpful to see working examples or to learn tips and tricks from others. Or maybe you need a place to post a vexing question. Here are some community resources that we have found useful. If you know of others, please add them in the comments below. Code samples and tools Official StreamInsight code samples Introduction to LinqPad Driver for StreamInsight 2.1 - LinqPad is a very useful tool for developing queries The following case studies are based on earlier versions of StreamInsight, but they still are useful examples: Microsoft Media Analytics - real-time monitoring and analytic Edgenet - responding to information from multiple source ICONICS - managing energy usage Blogs Microsoft StreamInsight Ruminations of J.net Richard Seroter's Architecture Musings pluralsight Forums MSDN StreamInsight Forum stackoverflow Training Microsoft StreamInsight Fundamentals (“Introducing StreamInsight” is free) from pluralsight Twitter @streaminsight   You’re a StreamInsight Expert That should get you going. Please add any other resources you have found useful in the comments below.   Regards, The StreamInsight Team

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a freshly started system, free reports about 1.5G used RAM (8G RAM altogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). With the apps I use running, it still consumes no more than 2G. However, after the system has been running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m for example reports on about day 7:
                 total       used       free     shared    buffers     cached
    Mem:          7459       7013        446          0        178        997
    -/+ buffers/cache:       5836       1623
    Swap:         9536        296       9240
    (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the window manager being gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports:
    kernel: [856738.020829] VFS: file-max limit 752838 reached
    So I closed those applications I was able to close, and killed X using Ctrl-Alt-Backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this:
                 total       used       free     shared    buffers     cached
    Mem:          7459       2677       4781          0         62        419
    -/+ buffers/cache:       2195       5263
    Swap:         9536         59       9477
    (~3.5GB being freed) -- to compare with the output after a fresh start:
                 total       used       free     shared    buffers     cached
    Mem:          7459       1483       5975          0         63        730
    -/+ buffers/cache:        689       6769
    Swap:         9536          0       9536
    Two more helpful outputs are provided by memstat -u. Shortly before the crash:
    User       Count     Swap       USS       PSS       RSS
    mail           1        0       200       207       616
    whoopsie       1      764       740       817      2300
    colord         1     3200       836       894      2156
    root          62    70404    352996    382260    569920
    izzy          80   177508   1465416   1519266   1851840
    After having X killed:
    User       Count     Swap       USS       PSS       RSS
    mail           1        0       184       188       356
    izzy           1     1400       708       739      1080
    whoopsie       1      848       668       826      1772
    colord         1     3204       804       888      1728
    root          62    54876    131708    149950    267860
    And after a restart, back in X:
    User       Count     Swap       USS       PSS       RSS
    mail           1        0       212       217       628
    whoopsie       1        0      1536      1880      5096
    colord         1        0      3740      4217      7936
    root          54        0    148668    180911    345132
    izzy          47        0    370928    437562    915056
    Edit: Just added two graphs from my monitoring system. Interesting to see: every time there's a "jump" in memory consumption, CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: often when returning to my machine and unlocking the screen, I found something doing heavy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions:
    - What could be the possible causes?
    - How can I better identify involved processes/applications?
    - What steps could be taken to avoid this behaviour -- short of rebooting the machine every X days?
    I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced the like (always more than 100 days uptime before rebooting for e.g. kernel updates). This now is a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I have already been using for almost 20 years now, in a version which did fine on Hardy).
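    As a rough way of narrowing down which process is eating the memory between reboots, one option is to log the same tools already used above (free, smem, ps) at a fixed interval and compare the snapshots over a few days. This is only a sketch -- the log directory, the interval, and the exact smem/ps sort flags are assumptions to adapt to your setup:

        #!/bin/bash
        # Capture a memory snapshot every hour so growth can be traced back to a process.
        LOGDIR=$HOME/memwatch          # hypothetical location; any writable directory works
        mkdir -p "$LOGDIR"
        while true; do
            STAMP=$(date +%Y%m%d-%H%M)
            {
                echo "=== $STAMP ==="
                free -m
                smem --sort=rss | tail -n 20       # per-process proportional memory (PSS)
                ps aux --sort=-rss | head -n 15    # largest resident sets, X included
                cat /proc/meminfo
            } > "$LOGDIR/$STAMP.log"
            sleep 3600
        done

    Diffing the snapshot taken right after login against one taken a few days later should show whether the growth sits in the X process itself or in one of the desktop applications.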

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction
    Distributed version control systems (VCS's), like Git, provide a rich set of features for managing source code. Many development tools, including XCode, provide built-in support for various VCS's. These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control.
    I hate losing (and re-doing) work. I have OCD when it comes to saving and versioning source code. Save early, save often, and commit to the VCS often. I also hate merging code. Smaller and more frequent commits enable me to minimize merge time and effort as well. The workflow I prefer even for personal exploratory projects is:
    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository. Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system. After all, I can easily recover to a recent working state.
    3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository. The smaller the change set, the less likely extensive merging will be required. Smaller is better, IMHO.
    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it. This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring.
    XCode's out-of-the-box Git integration enables steps 1 and 2 above. Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.
    Creating a Remote Repository
    These are the steps I use to enable the full workflow identified above. For simplicity the "remote" repository is created on the local file system. This location could easily be on a mounted network volume.
    Create a Test Project
    My project is called HelloGit and is located at /Users/Don/Dev/HelloGit. Be sure to commit all outstanding changes. XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.
    Clone the Local Repository
    We want to clone the XCode-created Git repository to the location where the remote repository will reside. In this case it will be /Users/Don/Dev/RemoteHelloGit.
    Open the Terminal application, then clone the local repository to the remote repository location:
        git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit
    Convert the Remote Repository to a Bare Repository
    The remote repository only needs to contain the Git database. It does not need a checked out branch or local files.
    Go to the remote repository folder:
        cd /Users/Don/Dev/RemoteHelloGit
    Indicate the repository is "bare":
        git config --bool core.bare true
    Remove files, leaving the .git folder:
        rm -R *
    Remove the "origin" remote:
        git remote rm origin
    Configure the Local Repository
    The local repository should reference the remote repository. The remote name "origin" is used by convention to indicate the originating repository. This is set automatically when a repository is cloned. We will use the "origin" name here to reflect that relationship.
    Go to the local repository folder:
        cd /Users/Don/Dev/HelloGit
    Add the remote:
        git remote add origin /Users/Don/Dev/RemoteHelloGit
    Test Connectivity
    Any changes made to the local Git repository can be pushed to the remote repository subject to the merging rules Git enforces.
    Create a new local file:
        date > date.txt
    Add the new file to the local index:
        git add date.txt
    Commit the change to the local repository:
        git commit -m "New file: date.txt"
    Push the change to the remote repository:
        git push origin master
    Now you can save, commit, and push/pull to your OCD heart's content! Code without fear!
    --Don
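    For convenience, here is the same sequence as one copy-pasteable shell session (the paths are the example paths from this post -- substitute your own):

        # Clone the XCode-created repository to the location that will act as the remote.
        git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

        # Convert the clone into a bare repository: keep only the Git database.
        cd /Users/Don/Dev/RemoteHelloGit
        git config --bool core.bare true
        rm -R *                  # removes the checked-out files, leaving the .git folder
        git remote rm origin     # the bare copy does not need an "origin" of its own

        # Point the local working repository at the new remote.
        cd /Users/Don/Dev/HelloGit
        git remote add origin /Users/Don/Dev/RemoteHelloGit

        # Smoke test: commit a file locally and push it to the remote.
        date > date.txt
        git add date.txt
        git commit -m "New file: date.txt"
        git push origin master

    As a side note, git clone --bare /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit would collapse the clone-and-convert steps into one command; the step-by-step version above simply mirrors the walkthrough.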

    Read the article

  • Can't boot in Ubuntu after windows upgrade

    - by VanceAnce
    After my lastest update for Ubuntu and Windows XP, I got a Grub error on booting the next day. ls lists the following (without () ): sd0 sd1, msdos sd2 sd5 sd6 When I tried to get into one with (sd0,xy)/ it doesn't detect system or unknown file system error. I tried to boot to a live session with a Knoppix live CD and found out that all data exists. I also tried to recover with TestDisk and it finds all systems. Here is the test disk result: Start End Size in sectors 1 * HPFS - NTFS 0 1 1 7079 254 63 113740137 2 E extended LBA 7080 0 1 12161 254 63 81642330 5 L HPFS - NTFS 7080 1 1 10266 254 63 51199092 [Schule] X extended 12031 30 1 12161 254 63 2102625 6 L Linux Swap 12031 31 33 12161 254 63 2102530 I've 1 winxp-home, 1x Ubuntu (ext3+swap) and 1 winxp prof and then I wrote on mbr with TestDisk but I always get the same errors with Grub. What should I do? I need both XP and Ubuntu. Help me please. more infos in answers below - sry for thos confusing style but im working on diff live system and browsers and have to reboot always the boot info script output is also down below maybe an advanced user can correct my fail posting - after i can solve my issuse i will register here thanx and pls help me with those weired issues ! as i still cant just comment my own answer or those on top i again has to put it here as a sepperate answer..... (or even edit - maybe an browser failur using the live cds ... cause this posti can edit) here the bootinfo script output - but the result is the same as with TestDisk ... but it looks worse - cause it also doesnt detect my old ubuntu ... but there wasnt a eares process or overwrite process visibile ending the last working session output: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== = Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sda. sda1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files: /boot.ini /ntldr /NTDETECT.COM sda2: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 63. 
Operating System: Windows XP Boot files: sda6: __________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 100.0 GB, 100030242816 bytes 255 heads, 63 sectors/track, 12161 cylinders, total 195371568 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 63 113,740,199 113,740,137 7 NTFS / exFAT / HPFS /dev/sda2 113,740,200 195,382,529 81,642,330 f W95 Extended (LBA) /dev/sda5 113,740,263 164,939,354 51,199,092 7 NTFS / exFAT / HPFS /dev/sda6 193,280,000 195,382,529 2,102,530 82 Linux swap / Solaris /dev/sda2 ends after the last sector of /dev/sda /dev/sda6 ends after the last sector of /dev/sda "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 6596D86768011128 ntfs /dev/sda5 1300D3B7744EC141 ntfs Schule /dev/sda6 5b95f2a1-4145-43a5-ac51-41d7dd32b213 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sr0 /cdrom iso9660 (ro,noatime) ================================ sda1/boot.ini: ================================ [boot loader] timeout=30 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /fastdetect /NoExecute=OptOut multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect [spybotsd] timeout.old=30 the last part shows that i now use the windows boot loader so that i can acces at least one OS but shouldnt i also get acces to my ubuntu partitions with live-linux-cds ? or do i have to boot with grub to get to those files only ??

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 2012

    - by Bob Rhubart
    Every day ArchBeat searches the web for content created by and for community members, and then shares that content via social media. Here's the list of the Top 10 most popular items posted on the OTN ArchBeat Facebook Page for November 2012. One-Stop Shop for Oracle Webcasts Webcasts can be a great way to get information about Oracle products without having to go cross-eyed reading yet another document off your computer screen. Oracle's new Webcast Center offers selectable filtering to make it easy to get to the information you want. Yes, you have to register to gain access, but that process is quick, and with over 200 webcasts to choose from you know you'll find useful content. OAM/OVD JVM Tuning Vinay from the Oracle Fusion Middleware Architecture Group (otherwise known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager. White Paper: Oracle Exalogic Elastic Cloud: Advanced I/O Virtualization Architecture for Consolidating High-Performance Workloads This new white paper by Adam Hawley (with contributions from Yoav Eilat) describes in great detail the incorporation into Oracle Exalogic of virtualized InfiniBand I/O interconnects using Single Root I/O Virtualization (SR-IOV) technology. Architected Systems: "If you don't develop an architecture, you will get one anyway..." "Can you build a system without taking care of architecture?," asks Manuel Ricca. "You certainly can. But inevitably the system will be unbalanced, neglecting the interests of key stakeholders, and problems will soon emerge." Backup and Recovery of an Exalogic vServer via rsync "On Exalogic a vServer will consist of a number of resources from the underlying machine," says the man known only as Donald. "These resources include compute power, networking and storage. In order to recover a vServer from a failure in the underlying rack all of these components have to be thoughts about. This article only discusses the backup and recovery strategies that apply to the storage system of a vServer." This Week on the OTN Architect Community Home Page Make time to check out this week's features on the OTN Solution Architect Homepage, including: SOA Practitioner Guide: Identifying and Discovering Services Technical article by Yuli Vasiliev on Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster Podcast: Are You Future Proof? Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic – when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post. OIM 11g : Multi-thread approach for writing custom scheduled job | Saravanan V S Saravanan shares insight and expertise relevant to "designing and developing an OIM schedule job that uses multi threaded approach for updating data in OIM using APIs." How to Create Virtual Directory in Weblogic Server | Zeeshan Baig Oracle ACE Zeeshan Baig shows you how in six easy steps. SOA Galore: New Books for Technical Eyes Only Shake up up your technical skills with this trio of new technical books from community members covering SOA and BPM. 
Thought for the Day "Humans are the best value in computers -- where else can you get a non-linear computer weighing only about 160lbs, having a billion binary decision elements, that can be mass-produced by unskilled labour?" — Anonymous Source: SoftwareQuotes.com

    Read the article

  • Business School graduate joins Oracle

    - by jessica.ebbelaar(at)oracle.com
    My name is Mathias, I work as an Applications Inside Sales Rep for the French market, and I’d like to give you a brief snapshot of my experience at Oracle. First things first, how did you hear about Oracle? Where have you seen the sharp and recognizable red logo? Was it in Charles de Gaulle Airport when your eyes crossed the 20-metre banner with a picture of a strange big machine in the middle? Was it through reading the Forbes 10 top IT companies worldwide ranking? Or is it because IT is your thing and you cannot but know one of the “big four”? Meeting with a Grenoble Alumnus My story is a little different. My plan was to work in sales, in the IT industry. I had heard about Oracle, but my opinion at the time was that this kind of multinational company was way out of reach for a young graduate, even with high enthusiasm and great excitement to be (finally) on the job market. So, I was really surprised when I had an interesting conversation with a top alumnus of my business school. We were at the Grenoble Ecole de Management graduation ceremony (our graduation!), and before the party got really started, I got to chat with her. She told me of the great experience she was getting by living and working in Dublin. She had already figured it all out: “you work with another 100 young people from 10 different nationalities across Europe, you can be based in Dublin, but then once you work really hard you can move to Malaga Spain or other BUs around the world, you can work with different lines of business and learn about new “techy” and business oriented products, move to the field in your home country or elsewhere, etc.” What, what, what? Moving around Europe, trained by the best sales coaches in the world, acquiring strong IT knowledge and getting on board with one of fastest-growing and most watched companies in the world? Well, I was in. The next day (OK, 3 days after, the time to recover), I sent her my CV, and 3 months later I started as a Business Development Consultant at Oracle in Dublin, representing the latest cloud based CRM across the French market. That was 15 months ago. Since then, I moved line of business twice, I’m always learning new things and working with different and senior stakeholders; I have attended hundreds of hours of sales and product training (priceless when you come from a business background); I passed the Dublin Institute of Technology Sales Certification through different trainings given onsite within Oracle; I’ve led projects based around social media and I’ve gotten involved within various sales deals going on my market. Despite all of these great things, two will remain in my spirit: the multiculturalism that I experience every day in the office, and the American style of management - more direct and open than what you can find in “regular French companies”. Sales Progression Board In May 2012, I passed what we call a ‘Sales Progression Board’ to be promoted to an Inside Sales position. I am now in charge of generating revenue through the sale of Oracle applications on my specific territory. Always keeping in my mind my personal ambition: going to the field one day. Interested to join Oracle in the same role as Mathias? Visit http://campus.oracle.com.

    Read the article

  • A developer's WBS – 3 factors of 5

    - by johndoucette
    As a development manager, I have requested work breakdown structures (WBS) many times from the dev leads. Everyone has their own approach, and it is often frustrating that it sometimes takes days to get this simple list. Here is a simple way to get that elusive WBS done in 30 minutes and have 125 items in your list – well, 126.
    The WBS is made up of parent-child entities representing the overall outcome of the project. At the bottom of the hierarchical list should be the task item that a developer would perform in support of the branch in the list or WBS. Because I work with different dev leads on every project, I always ask, "what time value would you like to see at the lowest task in order to assign it to a developer and ensure it gets done within the timeframe?" I am partial to a task being 8 hours. Some like 8 to 24 hours. Stay away from tasks defaulting to 1 week: the task becomes way too vague and its completeness is hard to manage, especially on short budgets. As a developer, your focus is identifying the tasks you need to accomplish in order to deliver the product. As a project manager, you will take the developer's WBS and add all the "other stuff" like quality testing, meetings, documentation, transition to maintenance, etc...
    Start your exercise with the name of the product you are delivering as a result of the project. You should be able to represent what you are building and deploying with one to three words. Example:
    - XYZ Public Website
    - Middleware BizTalk Application
    The reason you start with that single identifier is to always see the list as the product. It helps during each of the next three passes. Now, choose 5 tasks which in their entirety represent the product you will be delivering and add them to the list under the product name you created earlier:
    Public Website
        Security
        Sites
        Infrastructure
        Publishing
        Creative
    Continue this concept of seeing the list as the complete picture and decompose it one more level. You should have 25 items.
    Public Website
        Security
            Authentication
            Login Control
            Administration
            DRM
            Workflow
        Sites
            Masterpages
            Page Layouts
            Web Parts (RIA, Multimedia)
            Content Types
            Structures
        Infrastructure
            ...
        Publishing
            ...
        Creative
            ...
    And one more time for a total of 125 items. The top item makes the list 126.
    Public Website
        Security
            Authentication
                Install (AD/ADAM/LDAP/SQL)
                Configuration
                Management
                Web App Configuration
                Implement Provider
            Login Control
                Login Form
                Login/Logoff
                pw change
                pw recover/forgot
                email verification
            Administration
                ...
            DRM
                ...
            Workflow
                ...
        Sites
            Masterpages
            Page Layouts
            Web Parts (RIA, Multimedia)
            Content Types
            Structures
        Infrastructure
            ...
        Publishing
            ...
        Creative
            ...
    The next step is to make sure the task at the bottom of every branch represents the "time value" you planned for the project. You can add more to the WBS, and of course if you can't find 5 items, 4 is fine. If a task can be done in a fraction of the time value you determined for the project, try to roll it up into a larger task. In the task actions (later when the iteration is being planned), decompose the details back into the simple tasks. Now, go estimate!

    Read the article

  • Grub Rescue Unknown Filesystem Error. Grub Corrupted or Filesystem?

    - by nightcrawler
    Now it has happened twice & I have been pulling my hair out... I have installed xubuntu on my external hard disk & have been using it for about 3 months. It has three partitions: one of 500 MB mounted at /boot, a 2nd one of 48 GB mounted at /, & the rest (out of 160 GB) is an ntfs partition... used as normal external storage. That last partition supposedly acts as a buffer between Linux distributions & the Windows platform, buffer in the sense that it provides a universal channel for data transfers. I have constantly used this external hard disk for data transfers between a win7 laptop & xubuntu (on this external hd) without any hassle. However, on one of my desktops where I have Ubuntu I (for the first time) attached this external drive, which let me do data transfers where all three partitions properly mounted... but then the same nasty thing occurred that occurred before. When I (as usual) tried booting via this external hd (the one having xubuntu, the one formerly used under Ubuntu) I got the error.
    Now I am totally devastated, because a similar thing happened ~6 months before when I had Fedora 17 on my external hd (instead of xubuntu) & after it was used under Ubuntu the same happened... I didn't report it because I had already planned to move towards Debian instead of rpm! The mystery is that as long as I don't attach this external hd under Ubuntu the data never** corrupts, whereas under win xp/7 I can use it as normal usb storage -- of course the Linux partitions aren't available under Windows platforms...
    **By corrupts I mean the hd fails to boot with the error mentioned; however, I can't say whether the data within remains untouched.
    It seems that my grub &/or MBR is corrupted. Please guide me to solve this issue, and also explain why I can't attach & use Linux external hard disks under the Linux platform.
    Disk /dev/sdc: 160.0 GB, 160041884672 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0004e7d0
    Device Boot      Start          End      Blocks   Id  System
    /dev/sdc1   *     2048       976895      487424   83  Linux
    /dev/sdc2       978942     96874495    47947777    5  Extended
    /dev/sdc3     96874496    312575999   107850752    7  HPFS/NTFS/exFAT
    /dev/sdc5       978944     94726143    46873600   83  Linux
    /dev/sdc6     94728192     96874495     1073152   82  Linux swap / Solaris
    I recall for sure that I have seen a thread here where a similar problem occurred, & in response someone gave a solution for how to mount the (now invisible) partitions & recover the important data in them. I have misplaced that URL, so if anyone can guide me there -- my important documents reside in the / partition.
    What I have already done: without success I have tried this & related solutions.
    What I plan to do: I believe that the filesystem has corrupted & would you recommend a solution like this, given that I can't recall whether my /boot (500 MB) partition was ext4 or ext2, though I am sure that my / (48 GB) partition was ext4?
    UPDATE 1
    I attached my external hd under Ubuntu and ran the following command as root: grub-install /dev/sdc, where /dev/sdc was my external hd containing the corrupted xubuntu... it reported all done!
    I re-ran fdisk -l but to my disappointment it reported:
    Disk /dev/sdc: 160.0 GB, 160041884672 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581806 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1b6b9167
    Disk /dev/sdc doesn't contain a valid partition table
    ...& now I can't even access its ntfs partition (former /dev/sdc3). Please help?
    UPDATE 2
    TestDisk (by cgsecurity) failed to find any partition table :(
    TestDisk 6.13, Data Recovery Utility, November 2011
    Christophe GRENIER <[email protected]>
    http://www.cgsecurity.org
    Disk /dev/sdc - 160 GB / 149 GiB - CHS 19457 255 63
    Partition Start End Size in sectors

    Read the article

  • Package management system corrupted. Cannot install or remove packages. U12.04LTS

    - by user271490
    Having read other posts, I believe that this may be less about samba than about update system. Below is the log file of the failed installation of Samba. I have been trying without success to install/outstall samba so that I could install anything else ... I cannot either install or remove samba using either update-manager or apt-get (nor indeed Software Centre). One of the errors that I have had to correct is the presence after "removal" (failed) of /usr/share/system-config-samba directory which finally allowed itself to be deleted. That, however was then ... I have U12.04LTS. running on release 63 because I allowed the upgrade to 64 this morning which fell over - no output to monitor - obviously even less support for my graphic chip than I am suffering already (see other posts in this forum). According to my interpretation of the dpkg returned errors there may be some problem with the package files, but if this is the case then it is on servers 'main', 'nantes uni fr' and 'best fr' at the very least if not everywhere. The suggestions offered at Package operation failed and elsewhere have not worked for me. This linked post suggests that a similar error is present in other packages, or that the error is in the 'update system' I have tried ... sudo apt-get remove samba ... autoremove ... install samba ... clean ... update -f all of the above In update-manager I have tried the "reload packages list" which fails to terminate because of the error. I have tried to install and remove samba from the software centre ... :( I am at a loss ... I need help, please! Firstly to recover my apt-get/update-manager/Software Centre so that I can at least carry on with my continuing installation - up to communicating with home network hence need for samba - which brings me to my second requirement ... samba. PS is the issue about "MaxReports" associated or apart? UPDATE! Being heartily sick of restarting FF every 5 seconds I thought I'd try again with Chromium ... and got the same errors from dpkg about corrupt compressed package - coincidence? Of course this was no longer in clipboard when I got here because apport has just errored ... AAARRRGGGH!!! Why does every error clear the clipboard? Thanks for any and all help!! installArchives() failed: Preconfiguring packages ... ... snip (Reading database ... ... snip (Reading database ... 184858 files and directories currently installed.) Unpacking samba (from .../samba_2%3a3.6.3-2ubuntu2.10_i386.deb) ... dpkg-deb (subprocess): data: internal gzip read error: ': data error' dpkg-deb: error: subprocess returned error exit status 2 dpkg: error processing /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb (--unpack): subprocess dpkg-deb --fsys-tarfile returned error exit status 2 No apport report written because MaxReports is reached already Selecting previously unselected package system-config-samba. Unpacking system-config-samba (from .../system-config-samba_1.2.63-0ubuntu5_all.deb) ... Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Processing triggers for ufw ... Processing triggers for man-db ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for desktop-file-utils ... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... 
Errors were encountered while processing: /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb Error in function: dpkg: dependency problems prevent configuration of system-config-samba: system-config-samba depends on samba; however: Package samba is not installed. dpkg: error processing system-config-samba (--configure): dependency problems - leaving unconfigured

    Read the article

  • lvm disappeared after disc replacement on raid10

    - by user142295
    here my problem: I am running ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an lvm with two volume groups (one for /, one for /home). The layout of the disks are as follows: Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003f3b6 Device Boot Start End Blocks Id System /dev/sda1 * 63 481949 240943+ 83 Linux /dev/sda2 481950 2910640634 1455079342+ fd Linux raid autodetect /dev/sda3 2910640635 2930272064 9815715 82 Linux swap / Solaris Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00069785 Device Boot Start End Blocks Id System /dev/sdb1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdb2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdc2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f14de Device Boot Start End Blocks Id System /dev/sdd1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdd2 2910158685 2930272064 10056690 82 Linux swap / Solaris The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well, I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyways to overcome the problem with /boot installed on that disk and grub2 installed on the mbr of /dev/sda. 
Anyways, I did what I always did: start knoppix fire up the raid sudo mdadm --examine -scan which returns ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95 start it up sudo mdadm --assemble /dev/md127 fail the failing disk (smart event) sudo mdadm /dev/md127 --fail /dev/sda2 remove the failing disk sudo mdadm /dev/md127 --remove /dev/sda2 stop the raid sudo mdadm -S /dev/md127 take out the disk replace it with a new one create the same partitions as on the failling one add it to the raid sudo mdadm --assemble /dev/md127 sudo mdadm /dev/md127 --add /dev/sda2 wait 4 hours All looks fine: cat /proc/mdstat returns: Personalities : [raid10] md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1] 2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU] unused devices: <none> and sudo mdadm --detail /dev/md127 returns /dev/md127: Version : 0.90 Creation Time : Wed Jun 10 13:08:46 2009 Raid Level : raid10 Array Size : 2910158464 (2775.34 GiB 2980.00 GB) Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 127 Persistence : Superblock is persistent Update Time : Thu Mar 21 16:27:40 2013 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : near=2 Chunk Size : 64K UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix) Events : 0.4824680 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 8 17 1 active sync /dev/sdb1 2 8 33 2 active sync /dev/sdc1 3 8 49 3 active sync /dev/sdd1 However, there is no trace of the volume groups. Rebooting into knoppix does not help Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, sudo vgscan –mknodes all returned No volume groups found. I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
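    For reference, here is a condensed transcript of the replacement procedure described above, under the assumption that the commands are run from the Knoppix live session exactly as listed (device and array names as in this setup):

        # Find and assemble the existing array.
        sudo mdadm --examine --scan            # reports ARRAY /dev/md127 UUID=0dbf4558:...
        sudo mdadm --assemble /dev/md127

        # Fail and remove the dying member, then stop the array before swapping disks.
        sudo mdadm /dev/md127 --fail /dev/sda2
        sudo mdadm /dev/md127 --remove /dev/sda2
        sudo mdadm -S /dev/md127

        # After fitting the new disk and recreating the same partition layout:
        sudo mdadm --assemble /dev/md127
        sudo mdadm /dev/md127 --add /dev/sda2
        cat /proc/mdstat                       # wait for the resync to finish

    This only restates the steps already taken; the array itself comes back clean, so the missing volume groups have to be looked for on top of the assembled /dev/md127.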

    Read the article

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again, I am not a lawyer. Today's Dilbert has prompted me to post about my recent experience with SqlServer licensing.
    I'm in the technical realm and rarely have much to do with purchasing and licensing. I say “I need”; budget realities dictate whether I actually get. However, I do keep my ear to the ground and, due to my community involvement, I know, or at least have an understanding of, some licensing restrictions.
    Due to a misunderstanding, Microsoft Licensing stated that we needed licenses for our standby servers. I knew that that was not the case, and a quick tweet confirmed this. So after composing an email stating exactly what the machines in question were used for, i.e. log shipped to and used in a disaster recovery scenario only, and posting several Technet articles to back this up, we saved 2 enterprise edition licences, a not inconsiderable cost.
    However, during this discussion I was made aware of another ‘legalese’ document that could completely override the referenced articles, and anything I knew, or thought I knew, about SqlServer licensing. Personally, I had no knowledge of this. The “Product Use Rights” agreement would appear to be the volume licensing equivalent of the “End User License Agreement”, the click-throughs we all know and ignore. Here is a direct quote from Microsoft licensing, when asked for clarification:
    “Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR.”
    So, exactly what does that mean? My take: this is a living document, “updated quarterly”, though presumably this could be done on a whim and a fancy. It could state that you are only licensed if, during install, you stand in a corner juggling, and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, or what new requirements could it state, that change your existing understanding of the product or your legal usage of it?
    As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

    Read the article

  • Unable to "Run on Server" a webapp from Eclipse

    - by Ram
    When running my WebApp project from Eclipse, most of the time it runs correctly. But sometimes I stop the server the wrong way: instead of using "Stop" in the "Servers" view, I kill it in the "Console" view. After that, when running a clean on the project I get this:
    java.lang.NullPointerException
        at org.eclipse.wst.common.componentcore.internal.util.VirtualReferenceUtilities.getDefaultProjectArchiveName(VirtualReferenceUtilities.java:81)
        at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getJavaClasspathReferences(J2EEModuleVirtualComponent.java:332)
        at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getNonManifestRefs(J2EEModuleVirtualComponent.java:236)
        at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:160)
        at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:208)
        at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:201)
        at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.hasConsumableReferences(SingleRootUtil.java:217)
        at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.validateSingleRoot(SingleRootUtil.java:165)
        at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.isSingleRoot(SingleRootUtil.java:93)
        at org.eclipse.jst.common.internal.modulecore.SingleRootExportParticipant.canOptimize(SingleRootExportParticipant.java:84)
        at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.canOptimize(FlatVirtualComponent.java:136)
        at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.cacheResources(FlatVirtualComponent.java:118)
        at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.fetchResources(FlatVirtualComponent.java:101)
        at org.eclipse.wst.web.internal.deployables.FlatComponentDeployable.members(FlatComponentDeployable.java:147)
        at org.eclipse.wst.server.core.internal.ModulePublishInfo.hasDelta(ModulePublishInfo.java:418)
        at org.eclipse.wst.server.core.internal.ServerPublishInfo.hasDelta(ServerPublishInfo.java:443)
        at org.eclipse.wst.server.core.internal.Server.hasPublishedResourceDelta(Server.java:1539)
        at org.eclipse.wst.server.core.internal.Server$ResourceChangeJob$1.visit(Server.java:214)
        at org.eclipse.wst.server.core.internal.Server.visitModule(Server.java:2929)
        at org.eclipse.wst.server.core.internal.Server.visit(Server.java:2913)
        at org.eclipse.wst.server.core.internal.Server$ResourceChangeJob.run(Server.java:225)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
    And while launching I get this:
    SEVERE: Error starting static Resources
    java.lang.IllegalArgumentException: Document base D:\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\Webapp does not exist or is not a readable directory
        at org.apache.naming.resources.FileDirContext.setDocBase(FileDirContext.java:142)
        at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4319)
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
        at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
        at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
        at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
    Please help me recover from this.

    Read the article

  • Xcode: Unable to open project... cannot be opened because the project file cannot be parsed...

    - by Chris Butler
    Hi everyone. I have been working for a while to create an iPhone app. Today when my battery was low, I was working and constantly saving my source files then the power went out... Now when I plugged my computer back in and it is getting good power I try to open my project file and I get an error: "Unable to Open Project Project ... cannot be opened because the project file cannot be parsed." Is there a way that people know of that I can recover from this? I tried using an older project file and re inserting it and then compiling. It gives me a funky error which is probably because it isn't finding all the files it wants... I really don't want to rebuild my project from scratch if possible. Thanks in advance. EDIT Ok, I did a diff between this and a slightly older project file that worked and saw that there was some corruption in the file. After merging them (the good and newest parts) it is now working. Great points about the SVN. I have one, but there has been some funkiness trying to sync XCode with it. I'll definitely spend more time with it now... ;-) Thanks for everyone's comments and suggestions.

    Read the article

  • Gmail IMAP OAuth for desktop clients

    - by Sabya
    Recently Google announced that they are supporting OAuth for Gmail IMAP/SMTP. I browsed through several of their documents, but I am still confused about whether they support OAuth for installed applications.
    1. In this documentation they say:
    Note: Though the OAuth protocol supports the desktop/installed application use case, Google only supports OAuth for web applications.
    But they also have a document for OAuth for installed applications.
    2. When I read the OAuth specification they point to, it says (in section 11.7):
    In many applications, the Consumer application will be under the control of potentially untrusted parties. For example, if the Consumer is a freely available desktop application, an attacker may be able to download a copy for analysis. In such cases, attackers will be able to recover the Consumer Secret used to authenticate the Consumer to the Service Provider.
    Also, I think the disclaimer in point 1 above is about the Google Data APIs, and surely IMAP/SMTP is not a part of them.
    I understand that for installed applications I can have a setup like:
    - Have a small web-app at, say, example.com for my application.
    - This web-app talks to Google and gets the access token.
    - The installed application talks to example.com only to get the access token.
    - The installed application then talks to Google with the access token.
    I am now confused. Is this the only way?

    Read the article

  • Updating Lists of Lists in Tapestry4 using textfields and a single submit button

    - by Nicolas Scarrci
    In Tapestry 4 I am trying to iterate over a list of lists (technically a list of objects that have a list of strings as a data field). I am currently doing this by using 'nested' For components. (This is pseudo code)
    <span jwcid="Form">
        <span jwcid="@For" source="ognl:Javaclass.TopLevelList" value="ognl:SecondLevelList" index="ognl:index">
            <span jwcid="@For" source="ognl:SecondLevelList.List" value="ognl:ListItem" index="ListItemIndex">
                <span jwcid="@TextField" value="ognl:ListItem"/>
                <span jwcid="@Submit" listener="ognl:listeners.onSubmit"/>
            </span>
        </span>
    </span>
    The onSubmit listener then accesses the index and ListItemIndex page properties, as well as the ListItem page property, in order to correctly update the list in Javaclass.TopLevelList. This works fine, but it looks terrible and is cumbersome for the end user. I would prefer to somehow simulate this functionality using only one submit button at the bottom of the page. I have looked into somehow using the overlying Form component to obtain a list of the 'form control components' within it, and then (with great care) parsing through Tapestry's naming conventions to recover the functionality of the indexes. If anyone knows how to do this, or could explain the Form component (how/when it submits, etc.), it would be greatly appreciated.

    Read the article

  • JEE6 Global JNDI Name and Maven Deployment

    - by wobblycogs
    I'm having some problems with the global JNDI names of my EJB resources, which are (or at least will be) causing my JNDI lookups to fail. The project is being developed in Netbeans and is a standard Maven Web Application. When my application is deployed to GF3.0 the application name is set to something like:
    com.example_myapp_war_1.0-SNAPSHOT
    which is all well and good from Netbeans' point of view because it ensures the name is unique, but it also means all the EJBs get global names such as this:
    java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService
    This, of course, is going to cause problems because every time the version changes all the global names change (I've tested this by changing the version, and the names indeed changed). The name is being generated from the POM file and it's a concatenation of:
    <groupId>com.example</groupId>
    <artifactId>myapp</artifactId>
    <packaging>war</packaging>
    <version>1.0-SNAPSHOT</version>
    Up until now I've got away with just injecting all the resources using @EJB, but now I need to access the CustomerService EJB from a JSF Converter, so I'm doing a JNDI lookup like this:
    try {
        Context ctx = new InitialContext();
        CustomerService customerService = (CustomerService) ctx.lookup( "java:global/com.example_myapp_war_1.0-SNAPSHOT/CustomerService" );
        return customerService.get(submittedValue);
    } catch( Exception e ) {
        logger.error( "Failed to convert customer.", e );
        return null;
    }
    which will clearly break when the application is properly released and the module name changes.
    So, the million dollar question: how can I set the module name in Maven, or how do I recover the module name so that I can programmatically build the JNDI name at runtime? I've tried setting it in the web.xml file as suggested by that link, but it was ignored. I think I'd rather build the name at runtime, as that means there is less scope for screw-ups when the application is deployed.
    Many thanks for any help, I've been tearing my hair out all day on this.
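    For what it's worth, a sketch of the second option (recovering the name at runtime): Java EE 6 exposes the current module and application names as the standard environment entries java:module/ModuleName and java:app/AppName, so the global name can be assembled without hard-coding the Maven version suffix. The helper class below is hypothetical, and it assumes a standalone WAR (where the global pattern is java:global/<module-name>/<bean-name>):

        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public final class JndiNames {

            /** Builds the portable global JNDI name for a bean deployed in this module. */
            public static String globalName(String beanName) throws NamingException {
                InitialContext ctx = new InitialContext();
                // Standard Java EE 6 environment entry; reflects whatever name the WAR was deployed under.
                String moduleName = (String) ctx.lookup("java:module/ModuleName");
                return "java:global/" + moduleName + "/" + beanName;
            }
        }

        // In the converter the lookup then becomes, for example:
        //   CustomerService svc = (CustomerService) new InitialContext()
        //           .lookup(JndiNames.globalName("CustomerService"));

    Since the converter lives in the same WAR as the bean, an even shorter option should be the module-scoped name, e.g. ctx.lookup("java:module/CustomerService"), which never mentions the application name at all.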

    Read the article

  • Flex AdvancedDataGrid - ColumnOrder With Formatter and ItemRenderer Question For Experts

    - by robin1126
    I have an AdvancedDataGrid that has around 15 columns. Some are strings, some are numbers. I have shown 4 columns below. The number columns have formatters for zero-digit and two-digit precision. The item renderer just shows blue if the number is positive and red if it is negative.

    I am trying to save the user's column order when they close the application and reload it in the same order when they open the application again. I am using SharedObjects, and below is the code:

        for(var i:int=0; i < adgrid.columns.length; i++){
            var columnObject:Object = new Object();
            columnObject.columnDataField = adgrid.columns[i].dataField as String;
            columnObject.columnHeader = adgrid.columns[i].headerText as String;
            columnObject.width = adgrid.columns[i].width;
            columnArray.push(columnObject);
        }

    and then I save columnArray to the SharedObject. I retrieve the columns again using the code below:

        for(var i:int=0; i < columnArray.length; i++){
            adgrid.columns[i].dataField = columnArray[i].columnDataField;
            adgrid.columns[i].headerText = columnArray[i].columnHeader;
            adgrid.columns[i].width = columnArray[i].width;
        }

    How can I save and reload the formatter and itemRenderer data? I am having trouble saving the formatter and itemRenderer and reloading them again. How can I let the user reshuffle the columns while preserving all of their properties in the SharedObject and recovering them again? I would really appreciate it if you could show the code.

    Read the article

  • Migrating data from Plone to Liferay, or how could I retrieve information from Plone's Data.fs

    - by brandizzi
    Hello, all. I need to migrate data from a Plone-based portal to Liferay. Does anyone have any ideas on how to do it?

    In any case, I am trying to retrieve data from Data.fs and store it in a representation that is easier to work with, such as JSON. To do that, I need to know which objects I should get from Plone's Data.fs. I already got the Products.CMFPlone.Portal.PloneSite instance from the Data.fs, but I cannot get anything out of it. I would like to get the PloneSite instance and do something like this:

        >>> import ZODB
        >>> from ZODB import FileStorage, DB
        >>> path = r"C:\Arquivos de programas\Plone\var\filestorage\Data.fs"
        >>> storage = FileStorage.FileStorage(path)
        >>> db = DB(storage)
        >>> conn = db.open()
        >>> root = conn.root()
        >>> app = root['Application']
        >>> plone_site = app.getChildNodes()[13] # 13 would be the index of the PloneSite object
        >>> a = plone_site.get_articles()
        >>> for article in a:
        ...     print "Title:", article.title
        ...     print "Content:", article.content
        Title: <some title>
        Content: <some content>
        Title: <some title>
        Content: <some content>

    Of course, it does not need to be this straightforward. I just want some information about the structure of PloneSite and how to recover its data. Does anyone have any ideas? Thank you in advance!

    Read the article

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that led me to this piece of Sun documentation, which explains:

        The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

    This tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond?

    I'm trying to determine the best approach to actually solving this issue, rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.
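    In the meantime, to at least see the raw numbers, I've been thinking of sampling the GC MXBeans the JDK exposes. The sketch below (plain JDK, no extra libraries, Java) prints cumulative collection time against process uptime; it does not reproduce the collector's internal bookkeeping for the 98% check, so treat it as a monitoring aid rather than an answer to which time window the limit is measured over.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

// Periodically log cumulative GC time vs. uptime to watch for a creeping ratio
// before the OutOfMemoryError hits.
public class GcTimeProbe {
    public static void main(String[] args) throws InterruptedException {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        while (true) {
            long gcMillis = 0;
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                long t = gc.getCollectionTime(); // -1 if the collector doesn't report it
                if (t > 0) {
                    gcMillis += t;
                }
            }
            long upMillis = runtime.getUptime();
            System.out.printf("GC time: %d ms of %d ms uptime (%.2f%%)%n",
                    gcMillis, upMillis, 100.0 * gcMillis / upMillis);
            Thread.sleep(60000); // sample once a minute
        }
    }
}
```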

    Read the article

  • FileSystem.GetFiles() + UnauthorizedAccessException error?

    - by OverTheRainbow
    Hello, it seems like FileSystem.GetFiles() is unable to recover from the UnauthorizedAccessException that .NET throws when trying to access an off-limits directory. In this case, does it mean this class/method isn't useful when scanning a whole drive, and I should use some other solution (in which case: which one)? Here's some code to show the issue:

        Private Sub bgrLongProcess_DoWork(ByVal sender As System.Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles bgrLongProcess.DoWork
            Dim drive As DriveInfo
            Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String)
            Dim filepath As String

            'Scan all fixed drives for MyFiles.*
            For Each drive In DriveInfo.GetDrives()
                If drive.DriveType = DriveType.Fixed Then
                    Try
                        'How to handle "Access to the path 'C:\System Volume Information' is denied." error?
                        filelist = My.Computer.FileSystem.GetFiles(drive.ToString, FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*")
                        For Each filepath In filelist
                            DataGridView1.Rows.Add(filepath.ToString, "temp")
                            'Trigger ProgressChanged() event
                            bgrLongProcess.ReportProgress(0, filepath)
                        Next filepath
                    Catch Ex As UnauthorizedAccessException
                        'How to ignore this directory and move on?
                    End Try
                End If
            Next drive
        End Sub

    Thank you.

    Edit: What about using a Try/Catch just to have GetFiles() fill the array, ignore the exception, and resume?

        Private Sub bgrLongProcess_DoWork(ByVal sender As System.Object, ByVal e As System.ComponentModel.DoWorkEventArgs) Handles bgrLongProcess.DoWork
            'Do lengthy stuff here
            Dim filelist As Collections.ObjectModel.ReadOnlyCollection(Of String)
            Dim filepath As String

            filelist = Nothing
            Try
                filelist = My.Computer.FileSystem.GetFiles("C:\", FileIO.SearchOption.SearchAllSubDirectories, "MyFiles.*")
            Catch ex As UnauthorizedAccessException
                'How to just ignore this off-limits directory and resume searching?
            End Try

            'Object reference not set to an instance of an object
            For Each filepath In filelist
                bgrLongProcess.ReportProgress(0, filepath)
            Next filepath
        End Sub

    Read the article

  • AST with fixed nodes instead of error nodes in antlr

    - by ahe
    I have an ANTLR-generated parser for Java source that uses the C target, and it works quite well. The problem is I also want it to parse erroneous code and produce a meaningful AST. If I feed it a minimal Java class with one import after which a semicolon is missing, it produces two "Tree Error Node" objects where the "import" token and the tokens for the imported class should be. But since it parses the following code correctly and produces the correct nodes for that code, it must recover from the error by adding the semicolon or by resyncing.

    Is there a way to make ANTLR reflect this fixed input, which it produces internally, in the AST? Or can I at least get the tokens/text that produced the "Tree Error Node" objects somehow?

    In the C target's antlr3commontreeadaptor.c, around line 200, the following fragment indicates that the C target only creates dummy error nodes so far:

        static pANTLR3_BASE_TREE
        errorNode (pANTLR3_BASE_TREE_ADAPTOR adaptor, pANTLR3_TOKEN_STREAM ctnstream, pANTLR3_COMMON_TOKEN startToken, pANTLR3_COMMON_TOKEN stopToken, pANTLR3_EXCEPTION e)
        {
            // Use the supplied common tree node stream to get another tree from the factory
            // TODO: Look at creating the erronode as in Java, but this is complicated by the
            //       need to track and free the memory allocated to it, so for now, we just
            //       want something in the tree that isn't a NULL pointer.
            //
            return adaptor->createTypeText(adaptor, ANTLR3_TOKEN_INVALID, (pANTLR3_UINT8)"Tree Error Node");
        }

    Am I out of luck here, and would only the error nodes the Java target produces allow me to retrieve the text of the erroneous nodes?
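    For comparison, this is roughly how I understand the Java runtime behaves: its tree adaptor builds CommonErrorNode instances, and their getText() covers the token span that failed, which is exactly the information the C target's placeholder drops. Below is a sketch against the ANTLR 3 Java runtime as I understand it, not a fix for the C target.

```java
import org.antlr.runtime.tree.CommonErrorNode;
import org.antlr.runtime.tree.Tree;

// Walk an AST produced by the Java runtime and print the source text behind
// any error nodes the parser inserted during recovery.
public class ErrorNodeDump {
    public static void dump(Tree node, int depth) {
        if (node instanceof CommonErrorNode) {
            System.out.println("error node at depth " + depth + ": " + node.getText());
        }
        for (int i = 0; i < node.getChildCount(); i++) {
            dump(node.getChild(i), depth + 1);
        }
    }
}
```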

    Read the article

  • How to preprocess text to do OCR error correction

    - by eaglefarm
    Here is what I'm trying to accomplish: I need to get several large text files off a computer that is not networked and has no output device other than a printer. I tried printing the text, then scanning the printout with OCR to recover the text on another computer, but the OCR makes lots of errors (1 vs l, o vs 0, O vs D, etc.).

    To solve this I am thinking of writing a program to process (annotate?) the text file before printing it, so that the errors can be corrected from the text output of the OCR program. For example, for 1 (number one) vs l (letter L), I could insert \nnn after characters that are frequently wrong in the OCR results, so that:

        sample

    becomes:

        sampl\108e

    Then I can write another program to examine the file, look for \nnn, check the character before the \nnn (where nnn is the ASCII code in decimal), and fix it if necessary. Of course the program will have to recognize that the \nnn may have errors too, but at least it knows that the nnn are digits and can easily correct them. I think I would add a CRC to each line so that any line that isn't corrected perfectly can be flagged as having a problem.

    Has anyone done anything like this? If there is an existing way of doing this I'd rather not reinvent the wheel. Any suggestions for an annotation format that would help solve this problem would be helpful too.
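    To make the idea concrete, a minimal sketch of the annotator I have in mind is below (Java, plain JDK). The set of "frequently wrong" characters and the CRC32-per-line suffix are just my assumptions about the format, chosen for illustration.

```java
import java.util.zip.CRC32;

// Append \<decimal ASCII code> after characters OCR commonly confuses, plus a
// CRC32 checksum at the end of each line so imperfect corrections can be flagged.
public class OcrAnnotator {
    // Assumed set of ambiguous characters; adjust to whatever the OCR actually mangles.
    private static final String AMBIGUOUS = "1lI0OoD5S8B2Z";

    public static String annotateLine(String line) {
        StringBuilder out = new StringBuilder();
        for (char c : line.toCharArray()) {
            out.append(c);
            if (AMBIGUOUS.indexOf(c) >= 0) {
                out.append('\\').append((int) c); // e.g. 'l' -> \108, so "sample" -> "sampl\108e"
            }
        }
        CRC32 crc = new CRC32();
        crc.update(line.getBytes());
        return out.append(" #").append(Long.toHexString(crc.getValue())).toString();
    }
}
```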

    Read the article

  • Producing Mini Dumps for _caught_ SEH exceptions in mixed code DLL

    - by Assaf Lavie
    I'm trying to use code similar to clrdump to create mini dumps in my managed process. This managed process invokes C++/CLI code, which invokes some native C++ static lib code, wherein SEH exceptions may be thrown (e.g. the occasional access violation):

        C# WinForms -> C++/CLI DLL -> Static C++ Lib -> ACCESS VIOLATION

    Our policy is to produce mini dumps for all SEH exceptions (caught and uncaught) and then translate them to C++ exceptions to be handled by application code. This works just fine for purely native processes, but when the application is a C# application - not so much.

    The only way I see to produce dumps from SEH exceptions in a C# process is to not catch them, and then, as unhandled exceptions, use the Application.ThreadException handler to create a mini dump. The alternative is to let the CLR translate the SEH exception into a .NET exception and catch it (e.g. System.AccessViolationException) - but that would mean no dump is created, and information is lost (the stack trace in the Exception isn't as rich as the mini dump).

    So how can I handle SEH exceptions by both creating a minidump and translating the exception into a .NET exception, so that my application may try to recover?

    Read the article

  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# on .NET Framework 2.0 and have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista, and Windows XP, and possibly Windows Server 2003 or 2008, but I'm guessing they should behave the same as the first three.

    My goal is to securely erase a file. I don't believe using File.Delete is secure at all. I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file - the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that this is why it's much more useful to overwrite the file, because then the data on disk actually has to be changed.

    Is this true? Is this generally what's needed? If so, I believe I can simply overwrite the file with 1's and 0's a few times. I've read:

        http://www.codeproject.com/KB/files/NShred.aspx
        http://blogs.computerworld.com/node/5756
        http://blogs.computerworld.com/node/5687
        http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >