Search Results

Search found 1257 results on 51 pages for 'repositories'.


  • Can't install wine (or ia32-libs) in Ubuntu 12.10 64 bit

    - by carestad
    As already pointed out here, people seem to have issues with installing Wine in the latest version of Ubuntu. I suspect this only happens for 64-bit users. For example, when trying to install wine, wine1.4, wine1.4:i386, wine1.5, wine1.5:i386, ia32-libs or ia32-libs:i386 with apt-get, I get a lot of dependency errors. Doing a sudo apt-get -f install doesn't seem to do the trick, and neither does using aptitude. The errors I get normally say that the packages depend on some :i386 package, but installing those manually doesn't work either because they also have dependency issues (isn't APT supposed to resolve this automatically?!). I also downloaded CrossOver today and tried installing the .deb manually, but the dependency issues show up there as well. When running sudo apt-get -f install after trying to install the CrossOver .deb, apt-get wants to purge the following packages: ia32-crossover intel-gpu-tools libdrm-nouveau2 libgl1-mesa-dri libva-x11-1 ubuntu-desktop vlc xorg xserver-xorg-video-ati xserver-xorg-video-intel xserver-xorg-video-modesetting xserver-xorg-video-openchrome xserver-xorg-video-radeon xserver-xorg-video-vmware
    What I've tried so far (and what didn't work):
    - Installing synaptic, reloading my repositories, searching for ia32 and installing ia32-libs.
    - Using Ubuntu Software Center to install Wine and ia32-libs.
    - Using apt-get and aptitude to install all the different varieties of the wine packages, both with and without the :i386 and -amd64 suffixes in the package names.
    - Disabling the universe and multiverse repos, running a sudo apt-get update and then re-enabling them again.
    - Booting a newly downloaded Ubuntu 12.10 x64 live USB and trying to install all the different packages there.
    What I haven't tried (yet):
    - Booting a newly downloaded Ubuntu 12.10 x32 image and trying to install wine there (I'm just guessing that will work).
    - Reinstalling Ubuntu.
    - Throwing my computer out a window.
    wine
    alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     wine : Depends: wine1.5 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    wine-1.4
    alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4
    (...)
    The following packages have unmet dependencies:
     wine1.4 : Depends: wine1.4-i386 (= 1.4.1-0ubuntu1)
    E: Unable to correct problems, you have held broken packages.
    wine-1.4:i386
    alexander@cosmo:~$ LANGUAGE=en_US sudo apt-get install wine1.4:i386
    (...)
    The following packages have unmet dependencies:
     libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed
     libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed
     libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed
     openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed
                      Depends: passwd
     ssh : Depends: openssh-server
     wine1.4:i386 : Depends: wine1.4-i386:i386 (= 1.4.1-0ubuntu1)
                    Depends: binfmt-support:i386 (>= 1.1.2)
                    Depends: procps:i386
                    Recommends: cups-bsd:i386
                    Recommends: gnome-exe-thumbnailer:i386 but it is not installable or kde-runtime:i386 but it is not going to be installed
                    Recommends: ttf-droid:i386 but it is not installable
                    Recommends: ttf-liberation:i386 but it is not installable
                    Recommends: ttf-mscorefonts-installer:i386
                    Recommends: ttf-umefont:i386 but it is not installable
                    Recommends: ttf-unfonts-core:i386 but it is not installable
                    Recommends: ttf-wqy-microhei:i386 but it is not installable
                    Recommends: winbind:i386
                    Recommends: winetricks:i386 but it is not going to be installed
                    Recommends: xdg-utils:i386 but it is not installable
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    wine-1.5
    alexander@cosmo:~$ sudo apt-get install wine1.5
    (...)
    The following packages have unmet dependencies:
     wine1.5 : Depends: wine1.5-i386 (= 1.5.16-0ubuntu1)
    E: Unable to correct problems, you have held broken packages.
    wine-1.5:i386
    alexander@cosmo:~$ sudo apt-get install wine1.5:i386
    (...)
    The following packages have unmet dependencies:
     libaudio2:i386 : Depends: libxt6:i386 but it is not going to be installed
     libqtgui4:i386 : Depends: libsm6:i386 but it is not going to be installed
     libunity-webapps0 : Depends: unity-webapps-service but it is not going to be installed
     openssh-client : Depends: adduser (>= 3.10) but it is not going to be installed
                      Depends: passwd
     ssh : Depends: openssh-server
     wine1.5:i386 : Depends: wine1.5-i386:i386 (= 1.5.16-0ubuntu1) but it is not going to be installed
                    Depends: binfmt-support:i386 (>= 1.1.2)
                    Depends: procps:i386
                    Recommends: cups-bsd:i386
                    Recommends: gnome-exe-thumbnailer:i386 but it is not installable or kde-runtime:i386 but it is not going to be installed
                    Recommends: ttf-droid:i386 but it is not installable
                    Recommends: ttf-liberation:i386 but it is not installable
                    Recommends: ttf-mscorefonts-installer:i386
                    Recommends: ttf-umefont:i386 but it is not installable
                    Recommends: ttf-unfonts-core:i386 but it is not installable
                    Recommends: ttf-wqy-microhei:i386 but it is not installable
                    Recommends: winbind:i386
                    Recommends: winetricks:i386 but it is not going to be installed
                    Recommends: xdg-utils:i386 but it is not installable
    E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
    ia32-libs
    alexander@cosmo:~$ sudo apt-get install ia32-libs
    (...)
    The following packages have unmet dependencies:
     ia32-libs : Depends: ia32-libs-multiarch
    E: Unable to correct problems, you have held broken packages.
    ia32-libs:i386
    alexander@cosmo:~$ sudo apt-get install ia32-libs:i386
    (...)
    Package ia32-libs:i386 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source
    However the following packages replace it:
     lib32z1 lib32ncurses5 lib32bz2-1.0 lib32asound2
    E: Package 'ia32-libs:i386' has no installation candidate
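    For anyone hitting the same wall, a useful first step is to confirm that the i386 foreign architecture is enabled and that no packages are actually on hold, since both conditions produce exactly these "held broken packages" and "not installable" messages. The commands below are only a generic diagnostic sketch for this situation, not a guaranteed fix; the package names are taken from the transcripts above.
        # Is the i386 foreign architecture enabled? (should print "i386")
        dpkg --print-foreign-architectures
        sudo dpkg --add-architecture i386    # add it if missing
        sudo apt-get update

        # Are any packages pinned or on hold?
        apt-mark showhold
        dpkg --get-selections | grep hold

        # Work up the dependency chain from the bottom, and ask aptitude
        # why a package is refused:
        sudo apt-get install libxt6:i386 libsm6:i386
        aptitude why-not wine1.4-i386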

    Read the article

  • How to build Open JavaFX for Android.

    - by PictureCo
    Here's a short recipe for baking JavaFX for Android's dalvik VM. We will need just a few ingredients, but each one requires special care. So let's get down to business.
    Sources
    The first ingredient is an open JavaFX repository. This should be a piece of cake, but as always there's a catch. You probably know that dalvik is JDK 6 compatible and also that certain APIs are missing compared to the good old Java VM from Oracle. Fortunately there is a repository which is a backport of regular OpenJFX to JDK 7, and going from JDK 7 to JDK 6 is possible. The first thing to do is to clone or download the repository from https://bitbucket.org/narya/jfx78. The main page of the project says "It works in some cases", so we will presume that it will work in most cases. As I've said, the dalvik VM misses some APIs which would lead to build failures. To get them, use another compatibility repository which is available on GitHub: https://github.com/robovm/robovm-jfx78-compat. Download the zip and unzip the sources into jfx78/modules/base. We also need JavaFX binary stubs; use jfxrt.jar from JDK 8. The last thing to download is the freetype sources from http://freetype.org. These will be necessary for native font rendering.
    Toolchain setup
    I have to point out that these instructions were tested only on Linux. I suppose they will work with minimal changes on Mac OS as well. I also presume that you were able to build open JavaFX; that means all tools like ant, gradle, gcc and JDK 8 have been installed and are working all right. In addition to this you will need to download and install JDK 7, the Android SDK and the Android NDK for native code compilation. Installing all of them will take some time. Don't forget to put them in your path.
    export ANDROID_SDK=/opt/android-sdk-linux
    export ANDROID_NDK=/opt/android-ndk-r9b
    export JAVA_HOME=/opt/jdk1.7.0
    export PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK
    Freetype
    Unzip the freetype release sources first. We will have to cross-compile them for ARM. First we will create a standalone toolchain for cross-compiling, installed in ~/work/ndk-standalone-19:
    $ANDROID_NDK/build/tools/make-standalone-toolchain.sh --platform=android-19 --install-dir=~/work/ndk-standalone-19
    After the standalone toolchain has been created, cross-compile freetype with the following script:
    export TOOLCHAIN=~/work/freetype/ndk-standalone-19
    export PATH=$TOOLCHAIN/bin:$PATH
    export FREETYPE=`pwd`
    ./configure --host=arm-linux-androideabi --prefix=$FREETYPE/install --without-png --without-zlib --enable-shared
    sed -i 's/\-version\-info \$(version_info)/-avoid-version/' builds/unix/unix-cc.mk
    make
    make install
    It will compile and install the freetype library into $FREETYPE/install. We will link to this install dir later on. It would also be possible to link the OpenJFX font support dynamically against the skia library available on Android, which already contains freetype. That produces a smaller result but can have compatibility problems.
    Patching
    Download the patches javafx-android-compat.patch + android-tools.patch and patch the jfx78 repository. I recommend having a look at the patches. The first one, javafx-android-compat.patch, updates the OpenJFX build script, removes the dependency on SharedSecret classes and updates LensLogger to remove the dependency on the JDK-specific PlatformLogger. The second one, android-tools.patch, creates a helper script in android-tools. The script helps to set up JavaFX Android projects.
    Building
    Now it is time to try the build.
    Run the following script:
    JAVA_HOME=/opt/jdk1.7.0 JDK_HOME=/opt/jdk1.7.0 ANDROID_SDK=/opt/android-sdk-linux ANDROID_NDK=/opt/android-ndk-r9b PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK:$PATH gradle -PDEBUG -PDALVIK_VM=true -PBINARY_STUB=~/work/binary_stub/linux/rt/lib/ext/jfxrt.jar \
      -PFREETYPE_DIR=~/work/freetype/install -PCOMPILE_TARGETS=android
    If everything went all right, the output is in build/android-sdk.
    Create the first JavaFX Android project
    Use the gradle script in android-tools. The script sets up the project structure for you. The following command creates an Android HelloWorld project which links to a freshly built JavaFX runtime and to a HelloWorld application. NAME is the name of the Android project. DIR is where to create our first project. PACKAGE is the package name required by Android; it has nothing to do with the packaging of the JavaFX application. JFX_SDK points to our recently built runtime. JFX_APP points to the dist directory of the JavaFX application (where all the application jars sit). JFX_MAIN is the fully qualified name of the main class.
    gradle -PDEBUG -PDIR=/home/user/work -PNAME=HelloWorld -PPACKAGE=com.helloworld \
      -PJFX_SDK=/home/user/work/jfx78/build/android-sdk -PJFX_APP=/home/user/NetBeansProjects/HelloWorld/dist \
      -PJFX_MAIN=com.helloworld.HelloWorld createProject
    Now cd to the created project and use it like any other Android project: ant clean, debug, uninstall and installd will work. I haven't tried it from any IDE, neither Eclipse nor NetBeans. Special thanks to Stefan Fuchs and Daniel Zwolenski for the repositories used in this blog post.
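    As a closing illustration, here is a minimal sketch of driving the generated project from the command line. It assumes the project was created in /home/user/work/HelloWorld as in the example above and that a device or emulator is attached; the path and project name are just the illustrative values used earlier, not anything fixed by the tooling.
        # Build and install the generated Android project (sketch; paths assumed from the example above)
        cd /home/user/work/HelloWorld
        ant clean debug               # compile the APK signed with the debug key
        ant installd                  # install the debug build on the attached device/emulator
        adb logcat | grep -i javafx   # watch the device log while the JavaFX app starts
        # To remove it again:
        ant uninstall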

    Read the article

  • OPN Oracle ECM 10g R3 Implementation Boot Camp - (12-14/Apr/10)

    - by Claudia Costa
    We are pleased to announce the Oracle ECM 10g R3 Implementation boot camp, which will take place on 12-14 April and will cover the topics described below. With the goal of helping partners develop their skills, Oracle University and Oracle Alliances & Channel have designed this boot camp, condensing the content and thereby reducing costs. Price per participant (3 days) - 1,250 EUR + VAT.
    Oracle offers the most unified, usable enterprise content management platform in today's market. With centralized control across single or multiple repositories, common core functionality, and easily scalable content management capabilities, Oracle provides content management solutions for many content types and users - wherever they work in the enterprise. The Oracle Enterprise Content Management (ECM) Implementation Boot Camp examines the fundamental concepts, techniques, and architecture of Oracle's ECM technologies. Join this training to learn how you can manage and maintain unstructured content.
    Target Audience: The Oracle ECM Implementation Boot Camp is designed for architects, technical consultants, team/project leaders and functional consultants of our system integrator partners who want to ramp up on ECM technology.
    Contents: The ECM Implementation Boot Camp is a three-day hands-on workshop, designed for Oracle Partners who are new to ECM, and will provide implementation instruction on the ECM technology offered by Oracle. The boot camp will:
    • Provide hands-on experience in implementing Oracle's truly unified, open and standards-based ECM technology
    • Provide strategic direction on Oracle's Fusion Middleware/Enterprise 2.0 and its role in composite application development
    • Expose a broad set of Oracle's ECM technologies.
    Objectives: The Oracle ECM Implementation Boot Camp is primarily focused on Oracle's ECM offering to manage and maintain unstructured content and covers Universal Content Management (UCM), Image and Process Management (IPM), Universal Records Management (URM), and Information Rights Management (IRM).
    Topics Covered
    • Introduction to Oracle UCM: UCM Overview, UCM Architecture Overview
    • Content Server and Document Management basics: Installation and Administration Skills (user and security admin; configuration - metadata, DCLs, profiles, rules, etc.; workflow admin; system properties and Component Manager; managing subscriptions), Contributing Content (browser form, WebDAV folder, Desktop Integration), Searching
    • Web Content Management: Site Studio
    • Universal Records Management
    • Information Rights Management (IRM)
    • Image & Process Management (IPM)
    • Oracle Document Capture
    • Oracle eMail Archive Service
    Labs
    • Content Server Installation
    • Use and Administration of Content Server
    • Introduction to Site Studio
    • Use and Administration of Records Manager
    Demo: The R&D Group and the New Patent. Focus: Information Rights Management, Knowledge Management, Accounts Payable Image Automation, Imaging and Process Management.
    Case Study
    Use Case 1: Enable the City of Xalco to streamline internal processes by empowering city employees to quickly and efficiently manage and publish information on their employee intranet and eventually the public Web site.
    Use Case 2: Help Acme & Co with archiving; its goal is to become "paperless" by managing all of the company's business content in a central, Web-based repository. Acme's business content ranges from policies and procedures to employee listings and marketing materials.
    Agenda:
    Day 1
    • ECM Overview & Content Server
    • ECM Overview
    • ECM Architecture and Installation
    • UCM and Digital Asset Management DEMO
    • Lab 1 - Content Server Installation
    • Lab 2 - Use and Administration of Content Server
    Day 2
    • Web Content Management
    • Lab 2 - Use and Administration of Content Server (continued)
    • Introduction to Web Content Management
    • Lab 3 - Site Studio
    Day 3
    • URM/IRM/IPM
    • Introduction to Universal Records Management
    • Lab 4 - URM
    • Introduction to Information Rights Management
    • Information Rights Management DEMO
    • Introduction to Image and Process Management
    • Image and Process Management Demo
    • Oracle Document Capture
    • Oracle eMail Archive
    Material needed for the boot camp: This boot camp requires attendees to provide their own laptops for the class. Attendee laptops must meet the following minimum hardware requirements:
    • RAM: 2 GB minimum (1 GB RAM is not enough)
    • HDD: 15 GB free HDD space
    Prerequisites: To ensure a valuable learning experience, participation in this boot camp requires completing the prerequisite courses and successfully passing the prerequisite assessment test that is mapped into the Oracle Enterprise Content Management Implementation Boot Camp guided learning path. At a minimum, participants with equivalent skills and background should review the guided learning path and successfully pass the prerequisite assessment test to ensure they possess the background necessary to benefit from participation in the boot camp.
    For more information/registration, contact: Mónica Pires - 21 423 51 44.
    Schedule and location: 9:30h - 12:30h and 14:00h - 17:00 (6 hours/day), Oracle, Porto Salvo - Oeiras.

    Read the article

  • ASP.NET MVC: Converting business objects to select list items

    - by DigiMortal
    Some of our business classes are used to fill dropdown boxes or select lists, and often you have some base class for all your business classes. In this posting I will show you how to use a base business class to write an extension method that converts a collection of business objects to ASP.NET MVC select list items without writing a lot of code.
    BusinessBase, BaseEntity and other base classes
    I prefer to have some base class for all my business classes so I can easily use them, regardless of their type, in the contexts I need. NB! Some guys say that it is a good idea to have a base class for all your business classes, and they also suggest you have the mappings done the same way in the database. Other guys say that it is good to have a base class but you don't have to have one master table in the database that contains the identities of all your business objects. It is up to you how and what you prefer to do, but whatever you do - think and analyze first, please. :) To keep things maximally simple I will use a very primitive base class in this example. This class has only an Id property and that's it.
    public class BaseEntity
    {
        public virtual long Id { get; set; }
    }
    Now we have Id in the base class and we have one more question to solve - how do we best visualize our business objects? To users an ID is not enough; they want something more informative. We can define some abstract property that all classes must implement, but there is also another option we can use - overriding the ToString() method in our business classes.
    public class Product : BaseEntity
    {
        public virtual string SKU { get; set; }
        public virtual string Name { get; set; }

        public override string ToString()
        {
            if (string.IsNullOrEmpty(Name))
                return base.ToString();

            return Name;
        }
    }
    Although you can add more functionality and properties to your base class, we are at the point where we have what we needed: an identity and a human-readable presentation of business objects.
    Writing the list items converter
    Now we can write the method that creates list items for us.
    public static class BaseEntityExtensions
    {
        public static IEnumerable<SelectListItem> ToSelectListItems<T>
            (this IList<T> baseEntities) where T : BaseEntity
        {
            return ToSelectListItems((IEnumerator<BaseEntity>)
                       baseEntities.GetEnumerator());
        }

        public static IEnumerable<SelectListItem> ToSelectListItems
            (this IEnumerator<BaseEntity> baseEntities)
        {
            var items = new HashSet<SelectListItem>();

            while (baseEntities.MoveNext())
            {
                var item = new SelectListItem();
                var entity = baseEntities.Current;

                item.Value = entity.Id.ToString();
                item.Text = entity.ToString();

                items.Add(item);
            }

            return items;
        }
    }
    You can see here two overloads of the same method. One works with IList<T> and the other with IEnumerator<BaseEntity>. Although my repositories mostly return IList<T> when querying data, there are always situations where I can use more abstract types and interfaces.
    Using the extension methods in code
    In your code you can use the ToSelectListItems() extension methods as shown in the following code fragment.
    ...
    var model = new MyFormModel();
    model.Statuses = _myRepository.ListStatuses().ToSelectListItems();
    ...
    You can call this method on all your business classes that extend your base entity. Wanna have some fun with this code? Write an overload for the extension method that accepts the selected item ID.

    Read the article

  • Juniper Strategy, LLC is hiring SharePoint Developers…

    - by Mark Rackley
    Isn’t everybody these days? It seems as though there are definitely more jobs than qualified devs these days, but yes, we are looking for a few good devs to help round out our burgeoning SharePoint team. Juniper Strategy is located in the DC area, however we will consider remote devs for the right fit. This is your chance to get in on the ground floor of a bright company that truly “gets it” when it comes to SharePoint, Project Management, and Information Assurance. We need like-minded people who “get it”, enjoy it, and who are looking for more than just a job. We have government and commercial opportunities as well as our own internal product that has a bright future of its own. Our immediate needs are for SharePoint .NET developers, but feel free to submit your resume for us to keep on file as it looks as though we’ll need several people in the coming months. Please email us your resume and salary requirements to [email protected] Below are our official job postings. Thanks for stopping by, we look forward to  hearing from you. Senior SharePoint .NET Developer Senior developer will focus on design and coding of custom, end-to-end business process solutions within the SharePoint framework. Senior developer with the ability to serve as a senior developer/mentor and manage day-to-day development tasks. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. For selected development path, prepare technical specification and build the solution. Assist project manager with defining development task schedule and level-of-effort. Lead technical solution deployment. Job Requirements Minimum of 7 years experience in agile development, with at least 3 years of SharePoint-related development experience (SPS, SharePoint 2007/2010, WSS2-4). Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Experience with using multiple data sources/repositories for database CRUD activities, including relational databases, SAP, Oracle e-Business. Experience with designing and deploying performance-based solutions in SharePoint for business processes that involve a very large number of records. Experience designing dynamic dashboards and mashups with data from multiple sources (internal to SharePoint as well as from external sources). Experience designing custom forms to facilitate user data entry, both with and without leveraging Forms Services. Experience building custom web part solutions. Experience with designing custom solutions for processing underlying business logic requirements including, but not limited to, SQL stored procedures, C#/ASP.Net workflows/event handlers (including timer jobs) to support multi-tiered decision trees and associated computations. Ability to create complex solution packages for deployment (e.g., feature-stapled site definitions). Must have impeccable communication skills, both written and verbal. Seeking a "tinkerer"; proactive with a thirst for knowledge (and a sense of humor). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. Applicants must pass a skills assessment test. MCP/MCTS or comparable certification preferred. 
Salary & Travel Negotiable SharePoint Project Lead Define project task schedule, work breakdown structure and level-of-effort. Serve as principal liaison to the customer to manage deliverables and expectations. Day-to-day project and team management, including preparation and maintenance of project plans, budgets, and status reports. Prepare technical briefings and presentation decks, provide briefs to C-level stakeholders. Work with business consultants and clients to gather requirements to prepare business functional specifications. Analyze and recommend technical/development alternative paths based on business functional specifications. The SharePoint Project Lead will be working with SharePoint architects and system owners to perform requirements/gap analysis and develop the underlying functional specifications. Once we have functional specifications as close to "final" as possible, the Project Lead will be responsible for preparation of the associated technical specification/development blueprint, along with assistance in preparing IV&V/test plan materials with support from other team members. This person will also be responsible for day-to-day management of "developers", but is also expected to engage in development directly as needed.  Job Requirements Minimum 8 years of technology project management across the software development life-cycle, with a minimum of 3 years of project management relating specifically to SharePoint (SPS 2003, SharePoint2007/2010) and/or Project Server. Thorough understanding of and demonstrated experience in development under the SharePoint Object Model, with focus on the WSS 3.0 foundation (MOSS 2007 Standard/Enterprise, Project Server 2007). Ability to interact and collaborate effectively with team members and stakeholders of different skill sets, personalities and needs. General "development" skill set required is a fundamental understanding of MOSS 2007 Enterprise, SP1/SP2, from the top-level of skinning to the core of the SharePoint object model. Impeccable communication skills, both written and verbal, and a sense of humor are required. The projects will require being at a client site at least 50% of the time in Washington DC (NW DC) and Maryland (near Suitland). A US Citizen is required, and need to be able to pass NAC/E-Verify. An active Secret clearance is preferred. PMP certification, PgMP preferred. Salary & Travel Negotiable

    Read the article

  • Back Up to Tape the Way You Shop For Groceries

    - by rickramsey
    Imagine if this was how you shopped for groceries: From the end of the aisle sprint to the point where you reach the ketchup. Pull a bottle from the shelf and yell at the top of your lungs, “Got it!” Sprint back to the end of the aisle. Start again and sprint down the same aisle to the mustard, pull a bottle from the shelf and again yell for the whole store to hear, “Got it!” Sprint back to the end of the aisle. Repeat this procedure for every item you need in the aisle. Proceed to the next aisle and follow the same steps for the list of items you need from that aisle. Sounds ridiculous, doesn’t it? Not only is it horribly inefficient, it’s exhausting and can lead to wear out failures on your grocery cart, or worse, yourself. This is essentially how NetApp and some other applications write NDMP backups to tape. In the analogy, the ketchup and mustard are the files to be written, yelling “Got it!” is the equivalent of a sync mark at the end of a file, and the sprint back to the end of an aisle is the process most commonly called a “backhitch” where the drive has to back up on a tape to start writing again. Writing to tape in this way results in very slow tape drive performance and imposes unnecessary wear on the tape drive and the media, especially when writing small files. The good news is not all tape drives behave this way when writing small files. Unlike midrange LTO drives, Oracle’s StorageTek T10000D tape drive is designed to handle this scenario efficiently. The difference between the two drive types is that the T10000D drive gives you the ability to write files in a NetApp NDMP backup environment the way you would normally shop for groceries. With grocery shopping, you essentially stream through aisles picking up items as you go, and then after checking out, yell, “Got it!”, though you might do that last step silently. With the T10000D, it has a feature called the Tape Application Accelerator, which prevents the drive from having to stop after each file is written to notify NetApp or another application that the write was successful. When enabled in the T10000D tape drive, Tape Application Accelerator causes the tape drive to respond to tape mark and file sync commands differently than when disabled: A tape mark received by the tape drive is treated as a buffered tape mark. A file sync received by the tape drive is treated as a no op command. Since buffered tape marks and no op commands do not cause the tape drive to empty the contents of its buffer to tape and backhitch, the data is written to tape in significantly less time. Oracle has emulated NetApp environments with a number of different file sizes and found the following when comparing the T10000D with the Tape Application Accelerator enabled versus LTO6 tape drives. Notice how the T10000D is not only monumentally faster, but also remarkably consistent? In addition, the writing of the 50 GB of files is done without a single backhitch. The LTO6 drive, meanwhile, will perform as many as 3,800 backhitches! At the end of writing the entire set of files, the T10000D tape drive reports back to the application, in this case NetApp, that the write was successful via a tape mark. So if the Tape Application Accelerator dramatically improves performance and reliability, why wouldn’t you always have it enabled? The reason is because tape drive buffers are meant to be just temporary data repositories so in the event of a power loss, there could be data loss in certain environments for the files that resided in the buffer. 
Fortunately, we do have best practices depending on your environment to avoid this from happening. I highly recommend reading Maximizing Tape Performance with StorageTek T10000 Tape Drives (pdf) to decide which best practice is right for you. The white paper also digs deeper into the benefits of the Tape Application Accelerator. The white paper is free, and after downloading it you can decide for yourself whether you want to yell “Got it!” out loud or just silently to yourself. Customer Advisory Panel One final link: Oracle has started up a Customer Advisory Panel program to collect feedback from customers on their current experiences with Oracle products, as well as desires for future product development. If you would like to participate in the program, go to this link at oracle.com. photo taken on Idaho's Sacajewea Historic Biway by Rick Ramsey - Brian Zents Follow OTN on Blog | Facebook | Twitter | YouTube

    Read the article

  • Oracle’s Web Experience Management

    - by Christie Flanagan
    Today’s guest post on Oracle’s Web Experience Management comes from a member of our WebCenter Evangelist team, Noël Jaffré, a Principal Technologist based in France.Oracle’s Web Experience Management (WEM) solution enables organizations to optimize the online channel for driving marketing and customer experience management success. It empowers business users to manage the web presence and create rich and engaging online experiences for customers and prospects. Oracle's WEM platform provides a framework to simplify the integration of Oracle, third-party and custom-built applications. This framework essentially allows the creation and integration of applications using one single business interface called the WEM interface. It includes the following: Single sign-on access control for all integrated applications using the Central Authentication Service (CAS) component. A single centralized administration window for user, role, and native applications management including site management. Community server management, gadget server management as well as management for partner integrated technologies. A Representational State Transfer (REST) API for accessing WebCenter Sites data. REST services are supported on both Oracle WebCenter Sites and Oracle WebCenter Sites Satellite Server to leverage the satellite server cache. All REST requests are cached for web consuming applications as well for the high performance delivery of native applications on the mobile channel. Oracle WebCenter Sites’ Web Experience Management environment enables organizations to deliver a compelling online experience to customers by simplifying the deployment and management of sophisticated and engaging websites. The WebCenter Sites platform automates the entire process of managing web content including: Authoring:  Business users can easily contribute and manage web content in real-time, with intuitive interfaces and drag-and-drop content authoring and layout capabilities designed for the non-technical user. Contextual Content Targeting: Marketers are empowered to create and manage targeted campaigns with relevant recommendations and promotions based on the context of the session of the visitor such as his or her navigation history, user profile, language, location or other information shared during the visitor session. Content Publishing and Deployment: It offers advanced multi-site management capabilities for departmental or regional sites, as well as strong multi-lingual and multi-locale content management. The remote satellite server caching infrastructure provides high-performance, distributed caching, tuned to deliver high-volume, targeted and multi-lingual sites. Analytics and Optimization: Business users and marketers have the ability to measure the effectiveness of their online content and campaigns at a granular level. Editors and marketers can immediately determine whether a given article or promotion is relevant to a particular customer segment. User-generated Content: Marketers can enable blogs, comments, rating and reviews on the website.  All comments and reviews posted to the website can be moderated from the administrator interface either manually or automatically using filters, whitelists, blacklists or community based moderation. Personalized Gadget Dashboards:  Site managers can deploy gadgets, small applications using web content, individually or as part of dashboards containing multiple gadgets.  
These gadget dashboards enable site visitors to create their own “MyPage” on a given site where they can select and customize the gadgets that the site administrator has made available.  Any gadget that conforms to the iGoogle/OpenSocial standard can be made available to site visitors, or they can be created within the WEM interface. Oracle's WEM platform also provides a unique environment for the delivery of a rich, multichannel online experience for site visitors through its advanced management modules for mobile. With Oracle’s WEM solution, it’s easy to control branding and deliver a consistent message while repurposing web content for publication to mobile devices, kiosks and much more. This distinctive approach provides: HTML5 Delivery: HTML5 delivery which includes native support for adaptive design that responds to the user’s computer screen resolution and orientation. The approach is less driven by the particular hardware and more driven by the user’s interactions with the device. In other words, this approach takes both the screen interactions (either cursor or touch) and screen sizes and orientation into consideration. A Unique Native Mobile Extension Environment for Contributors: From the WEM interface, a contributor can directly manage their mobile channel, using the tooling already in place for driving the traditional web presence. This includes the mobile presentation, as well as mobile insite editing, drag and drop page layout, and in-context recommendations and personalization. Optimized REST APIs for High Performance Content Delivery on Native Mobile Device Applications: WebCenter Sites’ REST API uses the underlying HTTP methods (GET, POST, PUT, DELETE) to interact with resources. Resources support two types of input and output formats -- XML and JSON. REST calls are customizable to optimize the interactions between the content repositories and the client applications. Caching is essential to decrease network loads and improve overall reliability and usability of the applications and user interactions. REST results are cached through the highly efficient Oracle WebCenter Sites caching architecture.
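    As a rough illustration of consuming the REST services mentioned above, the snippet below sketches what a JSON request against a WebCenter Sites instance could look like from the command line. The host, credentials, context path and resource path are placeholders for illustration only; consult the WebCenter Sites REST documentation for the actual resource URLs and authentication scheme your installation exposes.
        # Hypothetical example only - host, context and resource paths are placeholders.
        SITES_HOST="http://sites.example.com:8080"

        # Request a resource as JSON (the REST services support both XML and JSON output).
        curl -u user:password \
             -H "Accept: application/json" \
             "$SITES_HOST/<context>/REST/<resource-path>"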

    Read the article

  • dpkg unsatisfied dependencies, now apt-get wants to remove whole system

    - by Bruno Finger
    firstly, I'm sorry for my terminal output in portuguese, but I guess it is still understandable. I am using Ubuntu GNOME 14.04 and I tried to update the GNOME Online Accounts packages by downloading the following .deb files from packages.ubuntu.com for the Ubuntu 14.10 version: libgoa-backend-1.0-dev_3.12.4-1_amd64.deb libgoa-backend-1.0-1_3.12.4-1_amd64.deb libgoa-1.0-dev_3.12.4-1_amd64.deb libgoa-1.0-0b_3.12.4-1_amd64.deb gnome-online-accounts_3.12.4-1_amd64.deb gir1.2-goa-1.0_3.12.4-1_amd64.deb After downloading them in the same folder, I run the command sudo dpkg -i *.deb, but it didn't install the packages, instead it showed errors due to packages which them depend doesn't meet the required version (and Ubuntu have no way to install them since they are not in this version's repositories). So now every time I want to install anything through apt-get, Ubuntu tells me to run apt-get -f install to fix the errors. This is the list of packages it needs to install/uninstall/update: $ sudo apt-get -f install Lendo listas de pacotes... Pronto Construindo árvore de dependências Lendo informação de estado... Pronto Corrigindo dependências... Pronto Os seguintes pacotes foram instalados automaticamente e já não são necessários: # THESE PACKAGES HAVE BEEN PREVIOUSLY INSTALLED AND ARE NO LONGER NECESSARY account-plugin-windows-live gir1.2-gweather-3.0 libatk-bridge2.0-dev libatk1.0-dev libcairo-script-interpreter2 libcairo2-dev libexpat1-dev libfontconfig1-dev libfreetype6-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgtk-3-dev libharfbuzz-dev libharfbuzz-gobject0 libice-dev libpango1.0-dev libpcre3-dev libpcrecpp0 libpixman-1-dev libpng12-dev libpthread-stubs0-dev librest-dev libsm-dev libsoup2.4-dev libwayland-dev libx11-dev libx11-doc libxau-dev libxcb-render0-dev libxcb-shm0-dev libxcb1-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft-dev libxi-dev libxinerama-dev libxkbcommon-dev libxml2-dev libxrandr-dev libxrender-dev pkg-config signon-plugin-password x11proto-composite-dev x11proto-core-dev x11proto-damage-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xorg-sgml-doctools xtrans-dev zlib1g-dev Utilize 'apt-get autoremove' para os remover. 
Os pacotes extra a seguir serão instalados: # THE FOLLOWING PACKAGES WILL BE INSTALLED debhelper dh-apparmor libatk-bridge2.0-dev libatk1.0-dev libcairo-script-interpreter2 libcairo2-dev libept1.4.12 libexpat1-dev libfontconfig1-dev libfreetype6-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgtk-3-dev libharfbuzz-dev libharfbuzz-gobject0 libice-dev libmail-sendmail-perl libpango1.0-dev libpcre3-dev libpcrecpp0 libpixman-1-dev libpng12-dev libpthread-stubs0-dev librest-dev libsm-dev libsoup2.4-dev libwayland-dev libx11-dev libx11-doc libxau-dev libxcb-render0-dev libxcb-shm0-dev libxcb1-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft-dev libxi-dev libxinerama-dev libxkbcommon-dev libxml2-dev libxrandr-dev libxrender-dev pkg-config po-debconf x11proto-composite-dev x11proto-core-dev x11proto-damage-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xorg-sgml-doctools xtrans-dev zlib1g-dev Pacotes sugeridos: dh-make apparmor-easyprof libcairo2-doc libglib2.0-doc libgtk-3-doc libice-doc libpango1.0-doc imagemagick libsm-doc libsoup2.4-doc libxcb-doc libxext-doc libmail-box-perl Os pacotes a seguir serão REMOVIDOS: # THE FOLLOWING PACKAGES WILL BE REMOVED account-plugin-aim account-plugin-jabber account-plugin-salut account-plugin-yahoo empathy evolution evolution-data-server evolution-data-server-online-accounts evolution-indicator evolution-plugins gdm gir1.2-gdata-0.0 gir1.2-goa-1.0 gir1.2-zpj-0.0 gnome-contacts gnome-control-center gnome-documents gnome-online-accounts gnome-online-miners gnome-shell gnome-shell-extension-weather gnome-shell-extensions grilo-plugins-0.2 gvfs-backends-goa libevolution libfolks-eds25 libgdata13 libgoa-1.0-0b libgoa-1.0-dev libgoa-backend-1.0-1 libgoa-backend-1.0-dev libzapojit-0.0-0 mcp-account-manager-uoa nautilus-sendto-empathy ubuntu-gnome-desktop Os NOVOS pacotes a seguir serão instalados: # THE NEW FOLLOWING PACKAGES WILL BE INSTALLED debhelper dh-apparmor libatk-bridge2.0-dev libatk1.0-dev libcairo-script-interpreter2 libcairo2-dev libept1.4.12 libexpat1-dev libfontconfig1-dev libfreetype6-dev libgdk-pixbuf2.0-dev libglib2.0-dev libgtk-3-dev libharfbuzz-dev libharfbuzz-gobject0 libice-dev libmail-sendmail-perl libpango1.0-dev libpcre3-dev libpcrecpp0 libpixman-1-dev libpng12-dev libpthread-stubs0-dev librest-dev libsm-dev libsoup2.4-dev libwayland-dev libx11-dev libx11-doc libxau-dev libxcb-render0-dev libxcb-shm0-dev libxcb1-dev libxcomposite-dev libxcursor-dev libxdamage-dev libxdmcp-dev libxext-dev libxfixes-dev libxft-dev libxi-dev libxinerama-dev libxkbcommon-dev libxml2-dev libxrandr-dev libxrender-dev pkg-config po-debconf x11proto-composite-dev x11proto-core-dev x11proto-damage-dev x11proto-fixes-dev x11proto-input-dev x11proto-kb-dev x11proto-randr-dev x11proto-render-dev x11proto-xext-dev x11proto-xinerama-dev xorg-sgml-doctools xtrans-dev zlib1g-dev 0 pacotes atualizados, 61 pacotes novos instalados, 35 a serem removidos e 22 não atualizados. 7 pacotes não totalmente instalados ou removidos. É preciso baixar 12,0 MB de arquivos. Depois desta operação, 25,0 MB adicionais de espaço em disco serão usados. Você quer continuar? [S/n] Along packages needed to be removed are even gdm. This is 100% sure to make the system useless. What can I do to fix this issue? I don't care if I can't install the new version of goa anymore.
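    Before accepting a removal list like that, it is usually safer to roll the handful of manually installed packages back to the versions in the release's own archive rather than letting apt remove gdm and the desktop. The commands below are only a sketch of that recovery; the package list mirrors the .deb files installed above, and the exact version strings must be taken from the apt-cache policy output, not from this example.
        # Check which versions the 14.04 archive still offers (the "candidate" lines):
        apt-cache policy gnome-online-accounts libgoa-1.0-0b libgoa-backend-1.0-1 \
                         gir1.2-goa-1.0 libgoa-1.0-dev libgoa-backend-1.0-dev

        # Force-downgrade to the archive versions (replace <archive-version> with the
        # values reported above - deliberately not filled in here):
        sudo apt-get install gnome-online-accounts=<archive-version> \
                             libgoa-1.0-0b=<archive-version> \
                             libgoa-backend-1.0-1=<archive-version> \
                             gir1.2-goa-1.0=<archive-version>

        # Only then re-run the fixer and confirm nothing essential is listed for removal:
        sudo apt-get -f install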

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • Announcing the New Windows Azure Web Sites Shared Scaling Tier

    - by Clint Edmonson
    Windows Azure Web Sites has added a new pricing tier that will solve the #1 blocker for the web development community. The shared tier now supports custom domain names mapped to shared-instance web sites. This post will outline the plan changes and elaborate on how the new pricing model makes Windows Azure Web Sites an even richer option for web development shops of all sizes.
                    Free                Shared                           Reserved
    # of Sites      10                  100                              100
    Egress          165 MB/Day          5 GB/Month Included              5 GB/Month Included
    Storage         1 GB                1 GB                             10 GB
    Throttling      CPU/Memory/Egress   CPU/Memory                       Unlimited
    Price           Free                $.02/hr per site, per instance   $.08/hr per core
    Setting the Stage
    In June, we released the first public preview of Windows Azure Web Sites, which gave web developers a great platform on which to get web sites running using their web development framework of choice. PHP, Node.js, classic ASP, and ASP.NET developers can all utilize the Windows Azure platform to create and launch their web sites. Likewise, these developers have a series of data storage options using Windows Azure SQL Databases, MySQL, or Windows Azure Storage. The Windows Azure Web Sites free offer enabled startups to get their site up and running on Windows Azure with a minimal investment, and with multiple deployment and continuous integration features such as Git, Team Foundation Services, FTP, and Web Deploy. The response to the Windows Azure Web Sites offer has been overwhelmingly positive. Since the addition of the service on June 12th, tens of thousands of web sites have been deployed to Windows Azure and the volume of adoption is increasing every week.
    Preview Feedback
    In spite of the growth and success of the product, the community has had questions about features lacking in the free preview offer. The main question web developers asked regarding Windows Azure Web Sites relates to the lack of the free offer's support for domain name mapping. During the preview launch period, customer feedback made it obvious that the lack of domain name mapping support was an area of concern. We're happy to announce that this #1 request has been delivered as a feature of the new shared plan.
    New Shared Tier Portal Features
    In the screen shot below, the "Scale" tab in the portal shows the new tiers – Free, Shared, and Reserved – and gives the user the ability to quickly move any of their free web sites into the shared tier. With a single mouse-click, the user can move their site into the shared tier. Once a site has been moved into the shared tier, a new Manage Domains button appears in the bottom action bar of the Windows Azure Portal, giving site owners the ability to manage their domain names for a shared site. This button brings up the domain-management dialog, which can be used to enter a specific domain name that will be mapped to the Windows Azure Web Site.
    Shared Tier Benefits
    Startups and large web agencies will both benefit from this plan change. Here are a few examples of scenarios which fit the new pricing model:
    • Startups no longer have to select the reserved plan to map domain names to their sites. Instead, they can use the free option to develop their sites and choose on a site-by-site basis which sites they elect to move into the shared plan, paying only for the sites that are finished and ready to be domain-mapped.
    • Agencies who manage dozens of sites will realize a lower cost of ownership over the long term by moving their sites into reserved mode. Once multi-site companies reach a certain price point in the shared tier, it is much more cost-effective to move sites to a reserved tier.
    Long-term, it's easy to see how the new shared pricing tier makes Windows Azure Web Sites a great choice for both startups and agency customers, as it enables rapid growth and upgrades while keeping the cost to a minimum. Large agencies will be able to have all of their sites in their own instances, and startups will have the capability to scale up to multiple shared instances for minimal cost and eventually move to reserved instances without worrying about the need to incur continually additional costs. Customers can feel confident they have the power of the Microsoft Windows Azure brand and our world-class support, at prices competitive in the market. Plus, in addition to realizing the cost savings, they'll have the whole family of Windows Azure features available.
    Continuous Deployment from GitHub and CodePlex
    Along with this new announcement are two other exciting new features. I'm proud to announce that web developers can now publish their web sites directly from CodePlex or GitHub.com repositories. Once connections are established between these services and your web sites, Windows Azure will automatically be notified every time a check-in occurs. This will then trigger Windows Azure to pull the source and compile/deploy the new version of your app to your web site automatically. Walk-through videos on how to perform these functions are below:
    • Publishing to an Azure Web Site from CodePlex
    • Publishing to an Azure Web Site from GitHub.com
    These changes, as well as the enhancements to the reserved plan model, make Windows Azure Web Sites a truly competitive hosting option. It's never been easier or cheaper for a web developer to get up and running. Check out the free Windows Azure web site offering and see for yourself. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted
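    As a companion to the continuous deployment walk-throughs above, the sketch below shows the local Git flavor of Windows Azure Web Sites publishing. The site name and remote URL are placeholders you would copy from your site's dashboard after enabling Git publishing and setting deployment credentials; they are not real values.
        # Sketch: pushing a local repository to a Windows Azure Web Site over Git.
        # "mysite" and the remote URL below are placeholders taken from the portal.
        git init
        git add -A
        git commit -m "Initial deployment"
        git remote add azure https://user@mysite.scm.azurewebsites.net/mysite.git
        git push azure master     # Azure builds and deploys the pushed commit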

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system, (product, category, subcategory), to a fundamentally two-level scheme in our customized JIRA instance, (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing the component / subcomponent classification of the JDK JIRA project along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme. The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect. client-libs (Predominantly discussed on 2d-dev and awt-dev and swing-dev.) 2d demo java.awt java.awt:i18n java.beans (See beans-dev.) javax.accessibility javax.imageio javax.sound (See sound-dev.) javax.swing core-libs (See core-libs-dev.) java.io java.io:serialization java.lang java.lang.invoke java.lang:class_loading java.lang:reflect java.math java.net java.nio (Discussed on nio-dev.) java.nio.charsets java.rmi java.sql java.sql:bridge java.text java.util java.util.concurrent java.util.jar java.util.logging java.util.regex java.util:collections java.util:i18n javax.annotation.processing javax.lang.model javax.naming (JNDI) javax.script javax.script:javascript javax.sql org.openjdk.jigsaw (See jigsaw-dev.) security-libs (See security-dev.) java.security javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC) javax.crypto:pkcs11 (JCE: PKCS11 only) javax.net.ssl (JSSE, includes javax.security.cert) javax.security javax.smartcardio javax.xml.crypto org.ietf.jgss org.ietf.jgss:krb5 other-libs corba corba:idl corba:orb corba:rmi-iiop javadb other (When no other subcomponent is more appropriate; use judiciously.) Most of the subcomponents in the xml component are related to jaxp. xml jax-ws jaxb javax.xml.parsers (JAXP) javax.xml.stream (JAXP) javax.xml.transform (JAXP) javax.xml.validation (JAXP) javax.xml.xpath (JAXP) jaxp (JAXP) org.w3c.dom (JAXP) org.xml.sax (JAXP) For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine. hotspot (See hotspot-dev.) build compiler (See hotspot-compiler-dev.) gc (garbage collection, see hotspot-gc-dev.) jfr (Java Flight Recorder) jni (Java Native Interface) jvmti (JVM Tool Interface) mvm (Multi-Tasking Virtual Machine) runtime (See hotspot-runtime-dev.) svc (Servicability) test core-svc (See serviceability-dev.) 
debugger java.lang.instrument java.lang.management javax.management tools The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot as well as retired APIs. vm-legacy jit (Sun Exact VM) jit_symantec (Symantec VM, before Exact VM) jvmdi (JVM Debug Interface) jvmpi (JVM Profiler Interface) runtime (Exact VM Runtime) Notable command line tools in the $JDK/bin directory have corresponding subcomponents. tools appletviewer apt (See compiler-dev.) hprof jar javac (See compiler-dev.) javadoc(tool) (See compiler-dev.) javah (See compiler-dev.) javap (See compiler-dev.) jconsole launcher updaters (Timezone updaters, etc.) visualvm Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not. infrastructure build (See build-dev and build-infra-dev.) licensing (Covers updates to the third party readme, licenses, and similar files.) release_eng (Release engineering) staging (Staging of web pages related to JDK releases.) The specification subcomponent encompasses the formal language and virtual machine specifications. specification language (The Java Language Specification) vm (The Java Virtual Machine Specification) The code for the deploy and install areas is not currently included in OpenJDK. deploy deployment_toolkit plugin webstart install auto_update install servicetags In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology. docs doclet guides hotspot release_notes tools tutorial embedded build hotspot libraries globalization locale-data translation performance hotspot libraries The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Modifying the Default Shipped Template

    - by The Old Toxophilist
Having installed your Exalogic Virtual environment, by default you have a single template which can be used to create your vServers. Although this template is suitable for creating simple test or development vServers, it is recommended that you look at creating your own custom templates that match the environment you wish to build and deploy. This Tea Break Snippet will therefore take you through the simple process of modifying the standard template. Before You Start To edit the template you will need the Oracle ModifyJeos Utility, which can be downloaded from the eDelivery Site. Once the ModifyJeos Utility has been downloaded we can install the rpms onto either an existing vServer or one of the Control vServers. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Alternatively you can install the ModifyJeos packages on a non-Exalogic OEL installation or a VirtualBox image. If you are doing this, assuming OEL 5u8, you will need the following rpms. rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm rpm -ivh ovm-el5u2-xvm-jeos-1.0.1-5.el5.i386.rpm rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm Base Template If you have installed ModifyJeos onto a vServer running on the Exalogic then simply mount /export/common/images from the ZFS storage and you will be able to find the el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz (or similar, depending which version you have) template file. Alternatively the latest can be downloaded from the eDelivery Site. Now that we have the template tgz we will need to extract it as follows: tar -zxvf el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz This will create a directory called BASE which will contain the System.img (vServer image) and vm.cfg (vServer config information). This directory should be renamed to something more meaningful that indicates what we have done to the template, and the simple name / name in the vm.cfg edited for the same reason. Modifying the Template Resizing Root File System By default the shipped template has a root size of 4 GB, which will leave a vServer created from it running at 90% full on the root disk. We can simply resize the template by executing the following: modifyjeos -f System.img -T <New Size MB> For example, to increase the default 4 GB to 40 GB we would execute: modifyjeos -f System.img -T 40960 Resizing Swap We can modify the size of the swap space within a template by executing the following: modifyjeos -f System.img -S <New Size MB> For example, to increase the swap from the default 512 MB to 4 GB we would execute: modifyjeos -f System.img -S 4096 Changing RPMs Adding RPMs To add RPMs using modifyjeos, complete the following steps: Add the names of the new RPMs in a list file, such as addrpms.lst. In this file, you should list each new RPM on a separate line. Ensure that all of the new RPMs are in a single directory, such as rpms. Run the following command to add the new RPMs: modifyjeos -f System.img -a <path_to_addrpms.lst> -m <path_to_rpms> -nogpg Where <path_to_addrpms.lst> is the path to the location of the addrpms.lst file, and <path_to_rpms> is the path to the directory that contains the RPMs. The -nogpg option eliminates the signature check on the RPMs. Removing RPMs To remove RPMs using modifyjeos, complete the following steps: Add the names of the RPMs (the ones you want to remove) in a list file, such as removerpms.lst. In this file, you should list each RPM on a separate line. 
The Oracle Exalogic Elastic Cloud Administrator's Guide provides a list of all RPMs that must not be removed from the vServer. Run the following command to remove the RPMs: modifyjeos -f System.img -e <path_to_removerpms.lst> Where <path_to_removerpms.lst> is the path to the location of the removerpms.lst file. Mounting the System.img For all other modifications that are not supported by the modifyjeos command (adding your custom yum repositories, pre-configuring NTP, modifying the default NFSv4 Nobody functionality, etc.) we can mount the System.img and access it directly. To facilitate quick and easy mounting/unmounting of the System.img I have put together the simple scripts below. MountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Export for later i.e. during unmount export LOOP=`losetup -f` export SYSTEMIMG=/mnt/elsystem # Make Temp Mount Directory mkdir -p $SYSTEMIMG # Create Loop for the System Image losetup $LOOP System.img kpartx -a $LOOP mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG # Change Dir into mounted Image cd $SYSTEMIMG UnmountSystemImg.sh #!/bin/sh # The script assumes it's being run from the directory containing the System.img # Assume the $LOOP & $SYSTEMIMG exist from a previous run of MountSystemImg.sh umount $SYSTEMIMG kpartx -d $LOOP losetup -d $LOOP Packaging the Template Once you have finished modifying the template it can simply be repackaged and then imported into EMOC as described in "Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template". To do this we will simply cd to the directory above the one containing the modified files and execute the following: tar -zcvf <New Template Name>.tgz <New Template Directory> The resulting .tgz file can be copied to the images directory on the ZFS and uploaded using the IB network. This entry was originally posted on The Old Toxophilist Site.

    Read the article

  • JPRT: A Build & Test System

    - by kto
DRAFT A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^) Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!". But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking, usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic, the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', well trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past, not that we have had many 'bad commits' to the master source base, in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their finger nails at night. ;^) Over the years each of the teams have accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up. 
Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer, well maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think about it, I may be one from time to time. :^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often.. Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system, tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^) -kto

    Read the article

  • Soft lockup after upgrade - cannot install from live CD

    - by nbm
I dual-boot a MacIntel Core 2 Duo with nVidia graphics. Ran upgrade from Ubuntu 13.10 to 14.04 (64 bit). On restart ran into {numbers} Bug: soft lockup - CPU#0 stuck for 22s! [swapper/0:1] Tried loading earlier kernel: same problem. Tried re-installing Ubuntu from a liveCD that has worked in the past: version 13.04. Same problem. Tried re-partitioning hard drive using Mac OS X disk utility and then installing Ubuntu 14.04LTS from liveCD. Same problem. Not possible to verify liveCD disk (creates same "soft lockup" bug.) Tried installing from the liveCD with version 13.04 that I know works (that's how I got Ubuntu on this machine in the first place.) Same problem. I know this is not a hardware problem as OS X works just fine; I am using it right now on the same machine. I have been using various versions of Ubuntu for 2 years. Things I cannot do: Open a terminal Verify CD image Start Ubuntu from CD (same soft lockup problem) This problem is similar to some other questions, none of which have been satisfactorily answered: Ubuntu 14.04 soft lockup on Vostro 3500 Cannot do fresh install of Ubuntu 13.04 while booting from DVD: "soft lockup" bug Live CD stalls when installing Ubuntu 13.10 UPDATE 6/11/14: Following some much-appreciated advice from bain (see below) I burned a 12.04LTS disk and started with kernel parameters: noapic, nolapic, acpi=off, nomodeset, elevator=deadline, and clocksource=jiffies. With all of these parameters I was able to load the 12.04LTS CD ("Try without installing"). It worked fine. However, as soon as I tried to install Ubuntu from the CD, my wired ethernet (eth0) connection would hang. There are already various askubuntu questions and bug reports about this problem, none of which had answers for me. (E.g., dhclient eth0 does nothing, none of the various reset commands does anything, manually setting IP &etc does nothing. I could reliably kill the ethernet connection by clicking "install ubuntu" every single time.) I could go ahead and install 12.04 without an internet connection, but the install would freeze after mostly completing (I tried several times.) There were some relevant error messages in the details of the install output script that, IIRC, had to do with searching for missing files and not being able to access eth0 (internet) to get them. To be honest I gave up at that point and I'm not sure I wrote those down. If I find some notes I will post them. At this point I no longer have Ubuntu on my system. I wiped the partitions and am using exclusively OS X. I am leaving this question in case it helps anyone else with similar problems. I love open source and I love Linux, and the next machine I get I will probably just build from Arch. At the moment I miss repositories and a lot of other things about Ubuntu, but the OS X terminal is 'nix, I can pretty much use all the open source apps I like, and while I am not a fan of the Apple software it gets the job done for me. Unlike Ubuntu, which can't even install. I realize this isn't necessarily a place for a soapbox speech, but when I first installed 12.04 several years ago there were already people in the community complaining that Canonical was going too "commercial". But I loved it. Several years later and all I've seen is Canonical adding more not-so-useful bells and whistles to Ubuntu while continually failing to fix basic problems on upgrades. With a dual-boot (and sometimes triple-boot) system it always took me some tweaking to get an upgrade to work, and to some extent that is okay. 
But at this point I feel like Canonical ought to just put a price tag on Ubuntu. All I see is more commercialism and advertising and product tie-ins, and ongoing problems do not get fixed. I am a big fan of open-source, not-for profit enterprise. I am also a big fan of for-profit enterprise, which certainly has its place and usefulness. I am not a fan of companies who pretend to be in favor of open source but really are just out to make a buck, and IMNSHO that is what Canonical has become. This is a great community and I wish you all the best, but my next install of Linux will not be Ubuntu.

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable, and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly. The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head. “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking, is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so. Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees, spending more than one day per week not doing their regular job while they search for or re-create what already exists. Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly. 
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture and access to the right information could be significantly improved. As you evaluate how content-process flows through your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories and in many cases these contain duplicate information, or worse, slightly different versions of the same information. This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process and making the same information accessible across departmental systems which has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them. There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others. What do you think?  Are your important business processes as healthy as they can be?  Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Building a Repository Pattern against an EF 5 EDMX Model - Part 1

    - by Juan
I am part of a year-long-plus project that is re-writing an existing application for a client. We have decided to develop the project using Visual Studio 2012 and .NET 4.5. The project will be using a number of technologies and patterns, including Entity Framework 5, WCF Services, and WPF for the client UI. This is my attempt at documenting some of the successes and failures that I will be coming across in the development of the application. In building the data access layer we have to access a database that has already been designed by a dedicated dba. The dba insists on using Stored Procedures, which has made the use of EF a little more difficult. He will not allow direct table access but we did manage to get him to allow us to use Views. Since EF 5 does not have good support to do Code First with Stored Procedures, my option was to create a model (EDMX) against the existing database views. I then had to select each entity and map the Insert/Update/Delete functions to their respective stored procedures. The next step after I had completed mapping the stored procedures to the entities in the EDMX model was to figure out how to build a generic repository that would work well with Entity Framework 5. After reading the blog posts below, I adopted much of their code with some changes to allow for the use of Ninject for dependency injection. http://www.tcscblog.com/2012/06/22/entity-framework-generic-repository/ http://www.tugberkugurlu.com/archive/generic-repository-pattern-entity-framework-asp-net-mvc-and-unit-testing-triangle IRepository.cs public interface IRepository<T> : IDisposable where T : class { void Add(T entity); void Update(T entity, int id); T GetById(object key); IQueryable<T> Query(Expression<Func<T, bool>> predicate); IQueryable<T> GetAll(); int SaveChanges(); int SaveChanges(bool validateEntities); } GenericRepository.cs public abstract class GenericRepository<T> : IRepository<T> where T : class { public abstract void Add(T entity); public abstract void Update(T entity, int id); public abstract T GetById(object key); public abstract IQueryable<T> Query(Expression<Func<T, bool>> predicate); public abstract IQueryable<T> GetAll(); public int SaveChanges() { return SaveChanges(true); } public abstract int SaveChanges(bool validateEntities); public abstract void Dispose(); } One of the issues I ran into was trying to do an update. I kept receiving errors so I posted a question on Stack Overflow http://stackoverflow.com/questions/12585664/an-object-with-the-same-key-already-exists-in-the-objectstatemanager-the-object and came up with the following hack. If someone has a better way, please let me know. 
DbContextRepository.cs public class DbContextRepository<T> : GenericRepository<T> where T : class { protected DbContext Context; protected DbSet<T> DbSet; public DbContextRepository(DbContext context) { if (context == null) throw new ArgumentException("context"); Context = context; DbSet = Context.Set<T>(); } public override void Add(T entity) { if (entity == null) throw new ArgumentException("Cannot add a null entity."); DbSet.Add(entity); } public override void Update(T entity, int id) { if (entity == null) throw new ArgumentException("Cannot update a null entity."); var entry = Context.Entry(entity); if (entry.State == EntityState.Detached) { var attachedEntity = DbSet.Find(id); // Need to have access to key if (attachedEntity != null) { var attachedEntry = Context.Entry(attachedEntity); attachedEntry.CurrentValues.SetValues(entity); } else { entry.State = EntityState.Modified; // This should attach entity } } } public override T GetById(object key) { return DbSet.Find(key); } public override IQueryable<T> Query(Expression<Func<T, bool>> predicate) { return DbSet.Where(predicate); } public override IQueryable<T> GetAll() { return Context.Set<T>(); } public override int SaveChanges(bool validateEntities) { Context.Configuration.ValidateOnSaveEnabled = validateEntities; return Context.SaveChanges(); } #region IDisposable implementation public override void Dispose() { if (Context != null) { Context.Dispose(); GC.SuppressFinalize(this); } } #endregion IDisposable implementation } At this point I am able to start creating the individual repositories that are needed and add a Unit of Work. Stay tuned for the next installment in my path to creating a Repository Pattern against EF5.
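To illustrate how the pieces above might fit together, here is a minimal consumption sketch; the Customer entity, CustomerRepository class, MyEdmxContext context and the Ninject bindings are hypothetical names used only for illustration, not part of the original project.

using System.Data.Entity;
using Ninject.Modules;

// Hypothetical concrete repository over the generic base class.
public class CustomerRepository : DbContextRepository<Customer>
{
    public CustomerRepository(DbContext context) : base(context) { }
}

// Hypothetical Ninject module wiring the context and repository together.
public class RepositoryModule : NinjectModule
{
    public override void Load()
    {
        // MyEdmxContext stands in for the DbContext generated from the EDMX model.
        Bind<DbContext>().To<MyEdmxContext>().InThreadScope();
        Bind<IRepository<Customer>>().To<CustomerRepository>();
    }
}

// Example usage inside a service class.
public class CustomerService
{
    private readonly IRepository<Customer> _customers;

    public CustomerService(IRepository<Customer> customers)
    {
        _customers = customers;
    }

    public void Rename(int id, string newName)
    {
        var customer = _customers.GetById(id);
        customer.Name = newName;          // Name is assumed to exist on the hypothetical entity
        _customers.Update(customer, id);
        _customers.SaveChanges();
    }
}

Whether the context is scoped per thread, per request, or per unit of work is a design choice that the Unit of Work mentioned above would normally settle.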

    Read the article

  • How to make this design closer to proper DDD?

    - by Seralize
I've read about DDD for days now and need help with this sample design. All the rules of DDD make me very confused as to how I'm supposed to build anything at all when domain objects are not allowed to show methods to the application layer; where else to orchestrate behaviour? Repositories are not allowed to be injected into entities and entities themselves must thus work on state. Then an entity needs to know something else from the domain, but other entity objects are not allowed to be injected either? Some of these things make sense to me but some don't. I've yet to find good examples of how to build a whole feature as every example is about Orders and Products, repeating the other examples over and over. I learn best by reading examples and have tried to build a feature using the information I've gained about DDD this far. I need your help to point out what I do wrong and how to fix it, most preferably with code, as "I would not recommend doing X and Y" is very hard to understand in a context where everything is just vaguely defined already. If I can't inject an entity into another it would be easier to see how to do it properly. In my example there are users and moderators. A moderator can ban users, but with a business rule: only 3 per day. I made an attempt at setting up a class diagram to show the relationships (code below): interface iUser { public function getUserId(); public function getUsername(); } class User implements iUser { protected $_id; protected $_username; public function __construct(UserId $user_id, Username $username) { $this->_id = $user_id; $this->_username = $username; } public function getUserId() { return $this->_id; } public function getUsername() { return $this->_username; } } class Moderator extends User { protected $_ban_count; protected $_last_ban_date; public function __construct(UserBanCount $ban_count, SimpleDate $last_ban_date) { $this->_ban_count = $ban_count; $this->_last_ban_date = $last_ban_date; } public function banUser(iUser &$user, iBannedUser &$banned_user) { if (! 
$this->_isAllowedToBan()) { throw new DomainException('You are not allowed to ban more users today.'); } if (date('d.m.Y') != $this->_last_ban_date->getValue()) { $this->_ban_count = 0; } $this->_ban_count++; $date_banned = date('d.m.Y'); $expiration_date = date('d.m.Y', strtotime('+1 week')); $banned_user->add($user->getUserId(), new SimpleDate($date_banned), new SimpleDate($expiration_date)); } protected function _isAllowedToBan() { if ($this->_ban_count >= 3 AND date('d.m.Y') == $this->_last_ban_date->getValue()) { return false; } return true; } } interface iBannedUser { public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date); public function remove(); } class BannedUser implements iBannedUser { protected $_user_id; protected $_date_banned; protected $_expiration_date; public function __construct(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date) { $this->_user_id = $user_id; $this->_date_banned = $date_banned; $this->_expiration_date = $expiration_date; } public function add(UserId $user_id, SimpleDate $date_banned, SimpleDate $expiration_date) { $this->_user_id = $user_id; $this->_date_banned = $date_banned; $this->_expiration_date = $expiration_date; } public function remove() { $this->_user_id = ''; $this->_date_banned = ''; $this->_expiration_date = ''; } } // Gathers objects $user_repo = new UserRepository(); $evil_user = $user_repo->findById(123); $moderator_repo = new ModeratorRepository(); $moderator = $moderator_repo->findById(1337); $banned_user_factory = new BannedUserFactory(); $banned_user = $banned_user_factory->build(); // Performs ban $moderator->banUser($evil_user, $banned_user); // Saves objects to database $user_repo->store($evil_user); $moderator_repo->store($moderator); $banned_user_repo = new BannedUserRepository(); $banned_user_repo->store($banned_user); Should the User entity have an 'is_banned' field which can be checked with $user->isBanned();? How to remove a ban? I have no idea.

    Read the article

  • Can't get gitosis and ssh to play nice on cygwin

    - by Noel Kennedy
I have followed this guide to setting up gitosis on a Windows 2003 server via cygwin. I have now got to a point where it largely works. I can clone, pull and push. The problem I am having is that I think I have not got the ssh bit right at all. When I connect via msysgit from machines and accounts where I have not created or uploaded ssh keys it works. Every time I clone, pull or push I get a password challenge for the 'git' user running on the server, but basically I can execute git commands. When I connect with users with an ssh key in the ~/.ssh folder, I don't get the password challenge and instead I get a permissions failure: DEBUG:gitosis.serve.main:Got command "git-upload-pack '/cris.git'" DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writable' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'writeable' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' DEBUG:gitosis.access.haveAccess:Access check for 'teamcity@hhit24808' as 'readonly' on 'cris.git'... DEBUG:gitosis.access.haveAccess:Stripping .git suffix from 'cris.git', new value 'cris' ERROR:gitosis.serve.main:Repository read access denied fatal: The remote end hung up unexpectedly I have uploaded the public rsa key into the key_dir folder. Here is my conf file: [gitosis] loglevel = DEBUG [group gitosis-admin] writable = gitosis-admin members = myemail@mydomain [group cris-developers] members = myemail@mydomain TeamCity@HHIT24808 writable = cris If it matters, I have generated a key without a passphrase as I believe this is necessary to enable ssh for automated scripts. When I use keys with a passphrase, I get challenged for the phrase but then get the same permissions problem. I have tried 'writable' and 'writeable' for permissions. Help!! Update 1: When I try to clone a non-existent repo, I get the same error message, coincidence? Update 2: Weird, I've got one machine and one login working. It seems to be something to do with the syntax for addressing git over ssh. This now works on one machine for one login: git clone git@servername:cris.git The same command fails for a user on another machine without an uploaded ssh key. But this command works (after being challenged for git@servername's password): git clone git@servername:/home/git/repositories/cris.git Neither command works on a 2nd login whose ssh key has been uploaded

    Read the article

  • Bazaar newbie question about repository structures

    - by esc1729
I want to use Bazaar on Windows XP for web development and related tasks. Most of the files are edited locally and then transferred via FTP to the server. Just now the repository sits on my local workstation. Later on it should be shared locally with some co-workers. Perhaps we will use a local Linux server as a centralized repository, but this structure is not yet decided. But first I need to understand the impacts of the different repository setups, which I do not understand at all. Using Bazaar-Explorer on Windows XP I’ve created a ‘shared tree repository’ from the option list of the init-dialogue in some location dev-filter/. Bazaar Explorer tells me: Created repository with treeless branches at F:/bzr.local/dev-filter Created branch at F:/bzr.local/dev-filter/trunk Created working tree at F:/bzr.local/dev-filter/work OK so far. Now I move a bunch of files into the work directory and add and commit them as Rev 1 ‘Start Revision’. Then I work on some of these files and commit them again as Rev 2. Here my confusion starts. Shouldn’t both revisions go into the trunk? The trunk is still empty, beside the .bzr directory which only holds some management information. If I delete my working directory, which I have tried during these first experiments, everything is gone. There’s obviously no hidden storage of those files. OK. Perhaps I need to push it into the trunk? This does not work either. Entering the work/ directory and initializing the ‘push’ to the trunk, Bazaar-Explorer tells me No new revisions to push. So what? This looks like a severe conceptual misunderstanding about what should happen on my side. Edit, 2010-02-03: Some conclusions What I learned meanwhile is this: I think I should switch to the command line until I really understand what’s going on, at least for creating the repositories and branches. Bazaar Explorer introduces a new level of abstraction which I can only handle if I understand the level beneath. One of the secrets of working with Bazaar, at least for me, is to understand those .bzr directories, their particular properties and states when created with ‘bzr init’, ‘bzr init-repository’, ‘bzr branch’ etc. in all their variants and how they are plumbed together. While there’s a whole chapter on ‘Organizing your workspace’ in the Bazaar User Guide, it’s more or less workflow oriented. The manual contains a lot of directory structures for the given examples. What I would prefer beside this, and have not (or only rudimentarily) found so far, is some graphical representation of those ‘Lego like’ .bzr building blocks which create the linking of all the parts. So I started to invent some simple notation while working through the examples and looking into the .bzr directories to document what information is stored there, where does it come from, how and to what is it linked, is it complete or shared, etc. Erich Schreiber

    Read the article

  • SVN naming convention: repository, branches, tags

    - by LookitsPuck
Hey all! Just curious what your naming conventions are for the following: Repository name Branches Tags Right now, we're employing the following standards with SVN, but would like to improve on it: Each project has its own repository Each repository has a set of directories: tags, branches, trunk Tags are immutable copies of the tree (release, beta, rc, etc.) Branches are typically feature branches Trunk is ongoing development (quick additions, bug fixes, etc.) Now, with that said, I'm curious how everyone is not only handling the naming of their repositories, but also their tags and branches. For example, do you employ a camel case structure for the project name? So, if your project is something like Backyard Baseball for Youngins, how do you handle that? backyardBaseballForYoungins backyard_baseball_for_youngins BackyardBaseballForYoungins backyardbaseballforyoungins That seems rather trivial, but it's a question. If you're going with the feature branch paradigm, how do you name your feature branches? After the feature itself in plain English? Some sort of versioning scheme? I.e. say you want to add functionality to the Backyard Baseball app that allows users to add their own statistics. What would you call your branch? {repoName}/branches/user-add-statistics {repoName}/branches/userAddStatistics {repoName}/branches/user_add_statistics etc. Or: {repoName}/branches/1.1.0.1 If you go the version route, how do you correlate the version numbers? It seems that feature branches wouldn't benefit much from a versioning scheme, being that 1 developer could be working on the "user add statistics" functionality, and another developer could be working on the "admin add statistics" functionality. How are these branch versions named? Are they better off being: {repoName}/branches/1.1.0.1 - user add statistics {repoName}/branches/1.1.0.2 - admin add statistics And once they're merged into the trunk, the trunk might increment appropriately? Tags seem like they'd benefit the most from version numbers. With that being said, how are you correlating the versions for your project (whether it be trunk, branch, tag, etc.) with SVN? I.e. how do you, as the developer, know that 1.1.1 has admin add statistics, and user add statistics functionality? How are these descriptive and linked? It'd make sense for tags to have release notes in each tag since they're immutable. But, yeah, what are your SVN policies going forward?

    Read the article

  • Using ASP.NET MVC, Linq To SQL, and StructureMap causing DataContext to cache data

    - by Dragn1821
I'll start by telling my project setup: ASP.NET MVC 1.0 StructureMap 2.6.1 VB I've created a bootstrapper class shown here: Imports StructureMap Imports DCS.Data Imports DCS.Services Public Class BootStrapper Public Shared Sub ConfigureStructureMap() ObjectFactory.Initialize(AddressOf StructureMapRegistry) End Sub Private Shared Sub StructureMapRegistry(ByVal x As IInitializationExpression) x.AddRegistry(New MainRegistry()) x.AddRegistry(New DataRegistry()) x.AddRegistry(New ServiceRegistry()) x.Scan(AddressOf StructureMapScanner) End Sub Private Shared Sub StructureMapScanner(ByVal scanner As StructureMap.Graph.IAssemblyScanner) scanner.Assembly("DCS") scanner.Assembly("DCS.Data") scanner.Assembly("DCS.Services") scanner.WithDefaultConventions() End Sub End Class I've created a controller factory shown here: Imports System.Web.Mvc Imports StructureMap Public Class StructureMapControllerFactory Inherits DefaultControllerFactory Protected Overrides Function GetControllerInstance(ByVal controllerType As System.Type) As System.Web.Mvc.IController Return ObjectFactory.GetInstance(controllerType) End Function End Class I've modified the Global.asax.vb as shown here: ... Sub Application_Start() RegisterRoutes(RouteTable.Routes) 'StructureMap BootStrapper.ConfigureStructureMap() ControllerBuilder.Current.SetControllerFactory(New StructureMapControllerFactory()) End Sub ... I've added a StructureMap registry file to each of my three projects: DCS, DCS.Data, and DCS.Services. Here is the DCS.Data registry: Imports StructureMap.Configuration.DSL Public Class DataRegistry Inherits Registry Public Sub New() 'Data Connections. [For](Of DCSDataContext)() _ .HybridHttpOrThreadLocalScoped _ .Use(New DCSDataContext()) 'Repositories. [For](Of IShiftRepository)() _ .Use(Of ShiftRepository)() [For](Of IMachineRepository)() _ .Use(Of MachineRepository)() [For](Of IShiftSummaryRepository)() _ .Use(Of ShiftSummaryRepository)() [For](Of IOperatorRepository)() _ .Use(Of OperatorRepository)() [For](Of IShiftSummaryJobRepository)() _ .Use(Of ShiftSummaryJobRepository)() End Sub End Class Everything works great as far as loading the dependencies, but I'm having problems with the DCSDataContext class that was generated by Linq2SQL Classes. I have a form that posts to a details page (/Summary/Details), which loads in some data from SQL. I then have a button that opens a dialog box in JQuery, which populates the dialog from a request to (/Operator/Modify). On the dialog box, the form has a combo box and an OK button that lets the user change the operator's name. Upon clicking OK, the form is posted to (/Operator/Modify) and sent through the service and repository layers of my program and updates the record in the database. Then, the RedirectToAction is called to send the user back to the details page (/Summary/Details), where there is a call to pull the data from SQL again, updating the details view. Everything works great, except the details view does not show the new operator that was selected. I can step through the code and see the DCSDataContext class being accessed to update the operator (which does actually change the database record), but when the DCSDataContext is accessed to reload the details objects, it pulls in the old value. I'm guessing that StructureMap is causing not only the DCSDataContext class but also the data to be cached? I have also tried adding the following to the Global.asax, but it just ends up crashing the program telling me the DCSDataContext has been disposed... 
Private Sub MvcApplication_EndRequest(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.EndRequest StructureMap.ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects() End Sub Can someone please help?
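For what it's worth, the registration above appears to hand StructureMap an already-constructed DCSDataContext, so that one instance (and the data its identity map has already loaded) is reused across requests regardless of the declared scope. A minimal sketch of the alternative, shown in C# although the original registry is VB, passes a factory lambda so that a fresh context is built whenever the hybrid scope starts a new lifecycle; the lambda overload of Use is assumed to be available in StructureMap 2.6.1.

using StructureMap.Configuration.DSL;

public class DataRegistrySketch : Registry
{
    public DataRegistrySketch()
    {
        // A factory lambda instead of a single cached instance:
        // each HTTP request (or thread) gets its own DCSDataContext.
        For<DCSDataContext>()
            .HybridHttpOrThreadLocalScoped()
            .Use(() => new DCSDataContext());
    }
}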

    Read the article

  • SVN directories not showing up in localhost when using WAMP

    - by JsusSalv
    Hi: I recently installed WAMP for actual local use. I've worked on live development servers but now am working on localhost. I've managed to get multiple virtual hosts setup on my WAMP/Vista 64-bit box but am having difficulty with directories pulled from SVN. I have four vhosts setup. Two work well and they are not tied to any SVN just yet. I'm also using TortoiseSVN in case it makes any difference. However, the other projects are coming from SVN repositories. When I view these two projects I get the following error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, admin@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. The way I setup the vhosts is as follows: httpd.conf # Multiple Virtual Hosts <VirtualHost 127.0.0.1> ServerName localhost DocumentRoot "C:/wamp/www/" </VirtualHost> <VirtualHost 127.0.1.0> ServerName testone.local DocumentRoot "C:/wamp/www/root/projectone/" </VirtualHost> <VirtualHost 127.0.2.0> ServerName testtwo.local DocumentRoot "C:/wamp/www/root/projecttwo/" </VirtualHost> <VirtualHost 127.0.3.0> ServerName testthree.local DocumentRoot "C:/wamp/www/root/projectthree/" </VirtualHost> <VirtualHost 127.0.3.1> ServerName testfour.local DocumentRoot "C:/wamp/www/root/projectfour/" </VirtualHost> And here's the 'hosts' file: # Localhost 127.0.0.1 localhost ::1 localhost # Project One 127.0.1.0 testone.local # Project Two 127.0.2.0 testtwo.local # Project Three 127.0.3.0 testthree.local # Project Four 127.0.3.1 testfour.local Everything works just fine. So if you want to tell me I'm doing something wrong then by all means point out a few things. But as it stands, it works and I'm content using different IPs and/or named-based vhosts. The problem comes in not being able to see the directories and files in the projects that are tied to an SVN. Whenever I visit http://testxxxx.local I get the error message at the top of this post. Please provide some suggestions. Thank you!

    Read the article

  • Unable to verify body hash for DKIM

    - by Joshua
    I'm writing a C# DKIM validator and have come across a problem that I cannot solve. Right now I am working on calculating the body hash, as described in Section 3.7 Computing the Message Hashes. I am working with emails that I have dumped using a modified version of EdgeTransportAsyncLogging sample in the Exchange 2010 Transport Agent SDK. Instead of converting the emails when saving, it just opens a file based on the MessageID and dumps the raw data to disk. I am able to successfully compute the body hash of the sample email provided in Section A.2 using the following code: SHA256Managed hasher = new SHA256Managed(); ASCIIEncoding asciiEncoding = new ASCIIEncoding(); string rawFullMessage = File.ReadAllText(@"C:\Repositories\Sample-A.2.txt"); string headerDelimiter = "\r\n\r\n"; int headerEnd = rawFullMessage.IndexOf(headerDelimiter); string header = rawFullMessage.Substring(0, headerEnd); string body = rawFullMessage.Substring(headerEnd + headerDelimiter.Length); byte[] bodyBytes = asciiEncoding.GetBytes(body); byte[] bodyHash = hasher.ComputeHash(bodyBytes); string bodyBase64 = Convert.ToBase64String(bodyHash); string expectedBase64 = "2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8="; Console.WriteLine("Expected hash: {1}{0}Computed hash: {2}{0}Are equal: {3}", Environment.NewLine, expectedBase64, bodyBase64, expectedBase64 == bodyBase64); The output from the above code is: Expected hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Computed hash: 2jUSOH9NhtVGCQWNr9BrIAPreKQjO6Sn7XIkfJVOzv8= Are equal: True Now, most emails come across with the c=relaxed/relaxed setting, which requires you to do some work on the body and header before hashing and verifying. And while I was working on it (failing to get it to work) I finally came across a message with c=simple/simple which means that you process the whole body as is minus any empty CRLF at the end of the body. (Really, the rules for Body Canonicalization are quite ... simple.) Here is the real DKIM email with a signature using the simple algorithm (with only unneeded headers cleaned up). Now, using the above code and updating the expectedBase64 hash I get the following results: Expected hash: VnGg12/s7xH3BraeN5LiiN+I2Ul/db5/jZYYgt4wEIw= Computed hash: ISNNtgnFZxmW6iuey/3Qql5u6nflKPTke4sMXWMxNUw= Are equal: False The expected hash is the value from the bh= field of the DKIM-Signature header. Now, the file used in the second test is a direct raw output from the Exchange 2010 Transport Agent. If so inclined, you can view the modified EdgeTransportLogging.txt. At this point, no matter how I modify the second email, changing the start position or number of CRLF at the end of the file I cannot get the files to match. What worries me is that I have been unable to validate any body hash so far (simple or relaxed) and that it may not be feasible to process DKIM through Exchange 2010.
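For reference, the "simple" body canonicalization rule (RFC 4871, section 3.4.3) only requires that trailing empty lines be stripped and that the body end with a single CRLF before hashing. A minimal sketch of that step, written against the same classes used in the code above, might look like the following; it illustrates the rule only and makes no claim about where the Exchange dump and the signer actually disagree.

using System;
using System.Security.Cryptography;
using System.Text;

static class SimpleBodyCanonicalizer
{
    // "simple" body canonicalization: remove all trailing CRLFs,
    // then terminate the body with exactly one CRLF.
    public static string Canonicalize(string body)
    {
        string canonical = body;
        while (canonical.EndsWith("\r\n"))
            canonical = canonical.Substring(0, canonical.Length - 2);
        return canonical + "\r\n";
    }

    public static string ComputeBodyHash(string body)
    {
        byte[] bytes = new ASCIIEncoding().GetBytes(Canonicalize(body));
        using (SHA256Managed hasher = new SHA256Managed())
            return Convert.ToBase64String(hasher.ComputeHash(bytes));
    }
}

One practical check when a bh= value refuses to match is to confirm that the dumped body still uses CRLF line endings and has not been re-encoded in transit, since the hash is computed over the raw octets.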

    Read the article

  • Git/Mercurial (hg) opinion

    - by Richard
First, let me say I'm not a professional programmer, but an engineer who had a need for it and had to learn. I was always working alone, so it was just me and my seven split personalities ... and we worked okay as a team :) Most of my stuff is done in C/Fortran/Matlab and so far I've been learning git to manage it all. However, although I've had no unsolvable problems with it, I've never been "that" happy with it ... for everything I cannot do, I have to look it up in a book. And, for some time now I've been hearing a lot of good stuff about hg. Now, a colleague of mine will have to work with me on a project (I almost feel sorry for him) and he's started learning hg (says he likes it more), and I'm considering the switch myself. We work almost exclusively on the Windows platform (although I manage relatively ok using unix tools and things that come from that part of the world). So, I was wondering, in a described scenario, what problems could I expect with the switch. I heard that hg is rather more user friendly towards windows users, regarding the user interfaces. How does it handle repositories? Does it create them the same way as git does (just one subdirectory in a working directory) and can I just copy the whole project directory (including the repo) and just carry them somewhere with no extra thinking? (I really liked that when I was choosing over git/svn). Are there any good books on it that you can recommend (something like Pro Git, only for Hg). What are good ways to implement hg into Visual Studio/GVim for Windows, or into Windows Explorer so I can work relatively easily (I would like to avoid using the command line for everything regarding it, like in git shell). Is there something else I should be aware of (please, on this don't point me to other questions ... they just give me a ton of info, and I'm not sure what is it that I should take as important, and what to disregard). I'm trying to cut some time, since I cannot spend all that time relearning hg, like I did for git. I've also heard git is a C project, while Mercurial is Python ... is there any noticeable difference in speed? git was pretty speedy ... will I encounter some waiting while working? Notice: All my projects are of, let's say, middle size ... mostly numerical simulations ... 10-15000 lines (medium size?)

    Read the article

  • Maven building for GoogleAppEngine, forced to include JDO libraries?

    - by James.Elsey
    Hi, I'm trying to build my application for GoogleAppEngine using maven. I've added the following to my pom which should "enhance" my classes after building, as suggested on the DataNucleus documentation <plugin> <groupId>org.datanucleus</groupId> <artifactId>maven-datanucleus-plugin</artifactId> <version>1.1.4</version> <configuration> <log4jConfiguration>${basedir}/log4j.properties</log4jConfiguration> <verbose>true</verbose> </configuration> <executions> <execution> <phase>process-classes</phase> <goals> <goal>enhance</goal> </goals> </execution> </executions> </plugin> According to the documentation on GoogleAppEngine, you have the choice to use JDO or JPA, I've chosen to use JPA since I have used it in the past. When I try to build my project (before I upload to GAE) using mvn clean package I get the following output [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Failed to resolve artifact. Missing: ---------- 1) javax.jdo:jdo2-api:jar:2.3-ec Try downloading the file manually from the project website. Then, install it using the command: mvn install:install-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file Alternatively, if you host your own repository you can deploy the file there: mvn deploy:deploy-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id] Path to dependency: 1) org.datanucleus:maven-datanucleus-plugin:maven-plugin:1.1.4 2) javax.jdo:jdo2-api:jar:2.3-ec ---------- 1 required artifact is missing. for artifact: org.datanucleus:maven-datanucleus-plugin:maven-plugin:1.1.4 from the specified remote repositories: __jpp_repo__ (file:///usr/share/maven2/repository), DN_M2_Repo (http://www.datanucleus.org/downloads/maven2/), central (http://repo1.maven.org/maven2) [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Sat Apr 03 16:02:39 BST 2010 [INFO] Final Memory: 31M/258M [INFO] ------------------------------------------------------------------------ Any ideas why I should get such an error? I've searched through my entire source code and I'm not referencing JDO anywhere, so unless the app engine libraries require it, I'm not sure why I get this message.

    Read the article
