Search Results

Search found 2307 results on 93 pages for 'jboss portal'.


  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump-start some initial Puppet config and am confused about how to include/run multiple manifests (other than just site.pp) in the Puppet execution workflow without making the extra manifests into modules and including them that way. In the Puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp.

        config.vm.provision "puppet" do |puppet|
          puppet.manifests_path = "puppet_files/manifests"
          puppet.module_path = "puppet_files/modules"
          puppet.manifest_file = "site.pp"
          puppet.options = "--verbose --debug"
        end

    Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this:

        File {
          owner => 'root',
          group => 'root',
          mode  => '0644',
        }
        import "hierasetup.pp"
        include jboss

    But I get this error about the deprecation of "import":

        Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190')

    According to the referenced URL, under "Things to try instead" it says: "To keep your node definitions in separate files, specify a directory as your main manifest". Further, the Puppet documentation on main manifests says: "Recommended: If you're using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments."

    It appears that Puppet can reference an entire directory instead of just a specific manifest file, so I would expect Vagrant to provide for this and allow me to drop the puppet.manifest_file = "site.pp" line and point to the parent directory instead, in which case all the *.pp files there would be executed. However, removing that line merely makes Vagrant complain about an expected default.pp in its stead:

        puppet provisioner:
        * The configured Puppet manifest is missing. Please specify a path to an
          existing manifest: /some/path/puppet_files/manifests/default.pp

    So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this change to accommodate the referencing of directories, given Puppet's deprecation of "import"?

    Update: Thanks to Shane, the issue with #2 (Vagrant not yet allowing a directory of Puppet manifests to be referenced) was reported on Vagrant's GitHub issue tracker and has since been patched: https://github.com/mitchellh/vagrant/issues/4169
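    One way to sidestep import entirely, without waiting on a Vagrant change, is to wrap the second manifest in a module class that Puppet autoloads from the module_path already configured above. This is only a sketch: the hierasetup class body shown is a placeholder for whatever hierasetup.pp really contains.

        # puppet_files/modules/hierasetup/manifests/init.pp  (hypothetical module)
        class hierasetup {
          # ...move the resources from hierasetup.pp in here...
          notify { 'hiera setup would happen here': }
        }

        # puppet_files/manifests/site.pp -- no import needed
        File {
          owner => 'root',
          group => 'root',
          mode  => '0644',
        }
        include hierasetup
        include jboss

    With this layout the Vagrantfile above can stay exactly as it is, since module_path already points at puppet_files/modules.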

    Read the article

  • iScsiPrt error event ID 5

    - by AZee
    Event log: "Failed to setup initiator portal. Error status is given in the dump data." This is being recorded every 3/100ths of a second. We are using the MS iSCSI Initiator on Windows Server 2003, on a Dell 2970 w/4GB (PAE). I am sure that this was configured by Dell initially. I have no idea what changes or mods were made since the company installed this machine until now. (I'm a new user, so the lovely and vibrant screen images had to be removed. They were quite pretty and I am sure you would have been very moved and appreciative of them.)

    It appears that everything is installed correctly and the 5TB bound volume is accessible, but I have never worked with iSCSI before, so I plead total ignorance. In searching I have found this to be a fairly sparsely and blandly documented subject.

    I'd like two things. First, to get rid of the error message being logged. MS says it can be ignored if everything is working, but it chews up resources logging it and I don't feel comfortable about any errors on my servers; I want to correct whatever is causing this problem. Secondly, being totally green to this, I would like to confirm that the setup is optimized and that we are taking advantage of all available features. Although there are 3 NICs in this machine, it appears that the initiator is only configured for the Broadcom BMC5708C NetXtreme II on our 10.90.1.# network; the other 2 NICs are 1GB on 192.168.0.#. Would additional targets improve performance?

    If someone who is experienced in configuring the Microsoft iSCSI Initiator can help I would really appreciate it since, as I mentioned, everything I have come across has not been of any value at all. Thanks! ~AZ

    Read the article

  • HP DL160G6 bios update fails

    - by Bojo
    I tried to update the BIOS on my HP DL160 G6 servers. Unfortunately the Windows update downgraded my BIOS version from 243 to 237, and when I try to upgrade both servers to the newer version, the update fails. First I got a warning like:

        CMOS Layout difference between System ROM and ROM file has detected.
        AFU recommand (sic) adding /C commands of your original input commands.
        Press "A" to accept AFU's recommendation.
        Press "F" to keep original input commands.

    And the update did not do anything. But now I get a message like:

        Reading flash ........ done
        Bootblock checksum ... ok
        Module checksums ..... bad
        Error: BIOS checksum error

    And the update stops. I tried some commands from this page: http://www.ami.com/support/downloads/txt/AFU_README.TXT but I didn't try too much; the servers are still booting. Does anyone know how to update my servers to BIOS version 245? I used this version http://h20565.www2.hp.com/portal/site/hpsc/template.PAGE/public/psi/swdDetails/?sp4ts.oid=3884344&spf_p.tpst=swdMain&spf_p.prp_swdMain=wsrp-navigationalState%3Didx%253D%257CswItem%253DMTX_7bd12651ab954fdcb0d7ee164a%257CswEnvOID%253D54%257CitemLocale%253D%257CswLang%253D%257Cmode%253D%257Caction%253DdriverDocument&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken and created a bootable USB stick with HPQUSB.exe.

    Read the article

  • VMRC equivalent for Hyper-V?

    - by Ian Boyd
    VMRC was the client tool used to connect to virtual machines running on Virtual Server. Having upgraded to Windows Server 2008 R2 with the Hyper-V role, I need a way for people to be able to use the virtual machines. Note:

    - not all virtual machines will have network connectivity
    - not all virtual machines will be running Windows
    - some people needing to connect to a virtual machine will be running Windows XP
    - Hyper-V Manager, allowing management of the Hyper-V server, is less desirable (since it allows management of the Hyper-V server, and doesn't work on all operating systems)

    What is the Windows Server 2008 R2 equivalent of VMRC; a way to "VNC" to a virtual server?

    Update: I think Tatas was suggesting Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0 (?), which requires SQL Server and IIS. Installing those would unfortunately violate our Windows Server 2008 R2 license. I might be looking at the wrong product link, since a commenter said there is a version that doesn't require System Center.

    Update 2: The Windows Server 2008 R2 machine running Hyper-V is licensed with the understanding that it only be used to host Hyper-V. From the Windows Server 2008 R2 Licensing FAQ:

    Q. If I have one license for Windows Server 2008 R2 Standard and want to run it in a virtual operating system environment, can I continue running it in the physical operating system environment?
    A. Yes, with Windows Server 2008 R2 Standard, you may run one instance in the physical operating system environment and one instance in the virtual operating system environment; however, the instance running in the physical operating system environment may be used only to run hardware virtualization software, provide hardware virtualization services, or to run software to manage and service operating system environments on the licensed server.

    This is why I'm wary about installing IIS or SQL Server.

    Read the article

  • Replace text with spaces in MySQL

    - by javipas
    I'm trying to do a global search-and-replace in my database, which has a lot of articles with a double carriage return because of this code: <p> </p>. I'd like to replace this in my WordPress blog so that nothing appears instead, and so I can delete the CR. I've tried this on my database:

        UPDATE wp_posts SET post_content = REPLACE(post_content, '<p> </p>', '');

    but it didn't work. Why? Do I have to do something special to match the space between the <p> and the </p>?

    Mmm. Good points, both Jon Angliss and Wim. Jon, as you could have guessed, the database shows no entries with that text string, so there's something going on inside the post_content field. Wim, the famous &nbsp; was replaced previously, but there are still hundreds of posts that for some reason have something different between the p and the /p tags. I've done a search for one of the posts with this error:

        mysql> SELECT * FROM wp_posts WHERE post_title LIKE '%3DVisionLive%';

    And looking in the post_content field, this is a little piece of the post:

        Phil Eisler, responsable de la divisi?n 3D Vision.?</p>
        <p>?</p>
        <p>Este portal ser? por tanto

    No Spanish tilde (accent) is shown on the terminal, and instead of a space there's a question mark between the p and the /p tags. I've tried to replace <p>?</p>, but again, no results. There's some character (or several) there, but I don't know how to discover what it is. Maybe it's the character set of my terminal, but I've accessed the database from phpMyAdmin and in that case there's a space character between the p and the /p. Weird.
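    One way to see exactly what sits between the tags is to dump the surrounding bytes in hex. The table and column names below are the ones from the post, but the 8-byte window and the C2A0 guess (a UTF-8 non-breaking space) are only assumptions used to illustrate the approach:

        -- Show the raw bytes just before the first closing </p> of the suspect post
        SELECT HEX(SUBSTRING(post_content, LOCATE('</p>', post_content) - 8, 8)) AS raw_bytes
        FROM wp_posts
        WHERE post_title LIKE '%3DVisionLive%';

        -- If the hex output shows C2A0 (UTF-8 non-breaking space) between <p> and </p>:
        UPDATE wp_posts
        SET post_content = REPLACE(post_content, CONCAT('<p>', UNHEX('C2A0'), '</p>'), '');

    Whatever bytes actually show up in raw_bytes can be fed back through UNHEX() in the same way, so the replacement matches the real content rather than what the terminal happens to display.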

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

        # yum update
        Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
        This system is receiving updates from Red Hat Subscription Management.
        Setting up Update Process
        No Packages marked for Update

    When I log in to the Red Hat customer portal, it shows the subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

        # subscription-manager register --force
        # subscription-manager subscribe --pool=*redacted*

    EDIT: contents of /etc/yum.conf:

        [main]
        cachedir=/var/cache/yum/$basearch/$releasever
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        installonly_limit=3

    contents of /etc/yum/pluginconf.d/rhnplugin.conf:

        [main]
        enabled = 0
        gpgcheck = 1
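    A sequence worth trying (an assumption, not a confirmed fix for this exact case): ask subscription-manager to re-download the entitlement data, which is what normally regenerates redhat.repo, then rebuild the yum cache.

        # Re-pull entitlement certificates; these are what the subscription-manager
        # yum plugin uses to write /etc/yum.repos.d/redhat.repo
        subscription-manager refresh

        # Verify that entitlement certificates now exist
        ls /etc/pki/entitlement/

        # Rebuild yum metadata and list the Red Hat repositories
        yum clean all
        yum repolist

    If /etc/pki/entitlement/ stays empty after the refresh, the subscription attach itself is the thing to revisit rather than the repo file.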

    Read the article

  • BlazeDS first response is VERY SLOW

    - by chibban
    Hi everyone, I have a very strange an annoying problem: I have an appliaction written in Flex 3, with BlazeDS 3.2 and Java in the backend. I'm actually using a portal (liferay) to display portlets that contain Flex movies. When I hit a refresh button on my page, all the Flex movies send a RemoteObject request to the server (using BlazeDS), which should go to java classes and invoke a method (standard BlazeDS usage I 'm guessing). I'm experiencing VERY slow response (14 minutes) on the first hit, while the following hits are much faster. I've enabled the BlazeDS logging (logging level="All") and I also have debug prints coming from my java classes. I also use the "showBusyCursor" attribute for the RemoteObject - so I can see indication of the request being sent from the flex movie. Here is what I see: I hit the refresh button Each movie invokes a RemoteObject request I see a busy sign - in all the movies I see nothing in the log - no BlazeDS prints and no Java prints Wait 14 minutes or so I see BlazeDS prints followed by Java prints I see data populating my flex movies. The really weird thing is that I have the same "application" installed in 4 different computers (on my laptop and in 3 other unix machines), 3 of these installations work well (good response times) and only 1 has the issue I'm describing. I've tried many things, but everything failed. I'd be really happy to get some advice on this. Thanks

    Read the article

  • Virtual Lan on the Cloud -- Help Confirm my understanding?

    - by marfarma
    [Note: Tried to post this over at Server Fault, but I don't have enough 'points' for more than one link. Powers that be, move this question over there.] Please give this a quick read and let me know if I'm missing something before I start trying to make this work. I'm not a systems admin professional, and I'd hate to end up banging my head into the wall if I can avoid it.

    Goals:
    - Create a 'road-warrior' capable, star-shaped virtual LAN for consultants who spend the majority of their time on client sites, and whose firm has no physical network or servers.
    - Enable CIFS access to a cloud-server based installation of Alfresco.
    - Allow eventual implementation of some form of single sign-on (OpenLDAP server) access to Alfresco and other server applications implemented in the future.

    Given:
    - All servers will live in the public internet cloud (Rackspace Cloud Servers).
    - The OpenVPN server will be a Linux distro, probably Ubuntu 9.x, installed on the same server as Alfresco (at least to start).
    - Staff will access server applications and resources from client sites, hotels, trains, planes, coffee shops or their homes over various ISPs, using their company laptops or personal home desktops.

    Based on my research thus far, to accomplish this I'll need:
    - OpenVPN with bridging enabled to create a star-shaped "virtual" LAN: http://openvpn.net/index.php/open-source/documentation/miscellaneous/76-ethernet-bridging.html
    - A road-warrior network configuration, as described in this Shorewall article (lower down the page): http://www.shorewall.net/OPENVPN.html
    - Configure bridge addressing (probably DHCP): http://openvpn.net/index.php/open-source/faq.html#bridge-addressing
    - Configure CIFS / Samba to accept the VPN IP addresses: http://serverfault.com/questions/137933/howto-access-samba-share-over-vpn-tunnel
    - Set up client software, with keys configured for access (potentially through an OpenVPN-AS client portal): http://www.openvpn.net/index.php/access-server/download-openvpn-as/221-installation-overview.html
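    For reference, a minimal bridged server.conf along the lines of the OpenVPN ethernet-bridging guide linked above; the interface name, certificate paths and address range are placeholders, not values from this setup:

        # /etc/openvpn/server.conf -- bridged (tap) road-warrior sketch
        port 1194
        proto udp
        dev tap0                      # tap0 must already be enslaved to the bridge (e.g. br0)
        ca   /etc/openvpn/ca.crt
        cert /etc/openvpn/server.crt
        key  /etc/openvpn/server.key
        dh   /etc/openvpn/dh2048.pem
        # Hand road-warrior clients addresses from the bridge's own subnet
        server-bridge 10.8.0.4 255.255.255.0 10.8.0.50 10.8.0.100
        keepalive 10 120
        persist-key
        persist-tun
        verb 3

    With a tap/bridge layout like this, the clients sit on the same broadcast domain as the Alfresco/Samba host, which is what makes the CIFS piece straightforward.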

    Read the article

  • How can I fix my vista PCs screen resolution and refresh rate

    - by Antony Scott
    I have a media PC running MediaPortal hooked up to my HDTV via HDMI. The TV is a couple of years old now, so it only supports 1080i, which is 1920x1080@25Hz. I've got it connected to my PC via an HDMI-compatible AV receiver. If I power up the amp (and wait for it to boot fully), followed by the TV, and finally the PC, all is well and I get a picture. If I deviate from that sequence, don't wait for the amp to boot up fully, or switch the amp to another video input (for example, my PS3), the PC sees this and defaults the screen resolution/refresh rate to 1920x1080@60Hz. So I end up with a blank screen. To fix this I have to use UltraVNC from a PC and change the refresh rate back to 25Hz.

    So, is there a way to turn off that auto-detection, or to manually define which resolutions/refresh rates the monitor can do? I'm using the on-board Radeon 3200 video and do not have any of the AMD software installed, as it seems to cause problems with video playback. So I'm looking for a native Vista fix, or possibly some 3rd-party software.

    Read the article

  • Java EE at JavaOne - A Few Picks from a Very Rich Line-up

    - by Janice J. Heiss
    A rich and diverse set of sessions cast a spotlight on Java EE at this year’s JavaOne, ranging from the popular Web Framework Smackdown, to Java EE 6 and Spring, to sessions exploring Java EE 7, and one on the implications of HTML5. Some of the world’s best EE architects and developers will be sharing their insight and expertise. If only I could be at ten places at once!BOF4149 - Web Framework Smackdown 2012    Markus Eisele - Principal IT Architect, msg systems ag    Graeme Rocher - Senior Staff Engineer, VMware    James Ward - Developer Evangelist, Heroku    Ed Burns - Consulting Member of Technical Staff, Oracle    Santiago Pericasgeertsen - Software Engineer, Oracle* Monday, Oct 1, 8:30 PM - 9:15 PM - Parc 55 - Cyril Magnin II/III Much has changed since the first Web framework smackdown, at JavaOne 2005. Or has it? The 2012 edition of this popular panel discussion surveys the current landscape of Web UI frameworks for the Java platform. The 2005 edition featured JSF, Webwork, Struts, Tapestry, and Wicket. The 2012 edition features representatives of the current crop of frameworks, with a special emphasis on frameworks that leverage HTML5 and thin-server architecture. Java Champion Markus Eisele leads the lively discussion with panelists James Ward (Play), Graeme Rocher (Grails), Edward Burns (JSF) and Santiago Pericasgeertsen (Avatar).CON6430 - Java EE and Spring Framework Panel Discussion    Richard Hightower - Developer, InfoQ    Bert Ertman - Fellow, Luminis    Gordon Dickens - Technical Architect, IT101, Inc.    Chris Beams - Senior Technical Staff, VMware    Arun Gupta - Technology Evangelist, Oracle* Tuesday, Oct 2, 10:00 AM - 11:00 AM - Parc 55 - Cyril Magnin II/III In the age of Java EE 6 and Spring 3, enterprise Java developers have many architectural choices, including Java EE 6 and Spring, but which one is right for your project? Many of us have heard the debate and seen the flame wars—it’s a topic with passionate community members, and it’s a vibrant debate. If you are looking for some level-headed discussion, grounded in real experience, by developers who have tried both, then come join this discussion. InfoQ’s Java editors moderate the discussion, and they are joined by independent consultants and representatives from both Java EE and VMWare/SpringSource.BOF4213 - Meet the Java EE 7 Specification Leads   Linda Demichiel - Consulting Member of Technical Staff, Oracle   Bill Shannon - Architect, Oracle* Tuesday, Oct 2, 5:30 PM - 6:15 PM – Parc 55 - Cyril Magnin II/III This is your chance to meet face-to-face with the engineers who are developing the next version of the Java EE platform. In this session, the specification leads for the leading technologies that are part of the Java EE 7 platform discuss new and upcoming features and answer your questions. Come prepared with your questions, your feedback, and your suggestions for new features in Java EE 7 and beyond.CON10656 - JavaEE.Next(): Java EE 7, 8, and Beyond    Ian Robinson - IBM Distinguished Engineer, IBM    Mark Little - JBoss CTO, NA    Scott Ferguson - Developer, Caucho Technology    Cameron Purdy - VP Development, Oracle*Wednesday, Oct 3, 4:30 PM - 5:30 PM - Parc 55 - Cyril Magnin II/IIIIn this session, hear from a distinguished panel of industry and open source luminaries regarding where they believe the Java EE community is headed, starting with Java EE 7. 
The focus of Java EE 7 and 8 is mostly on the cloud, specifically aiming to bring platform as a service (PaaS) providers and application developers together so that portable applications can be deployed on any cloud infrastructure and reap all its benefits in terms of scalability, elasticity, multitenancy, and so on. Most importantly, Java EE will leverage the modularization work in the underlying Java SE platform. Java EE will, of course, also update itself for trends such as HTML5, caching, NoSQL, ployglot programming, map/reduce, JSON, REST, and improvements to existing core APIs.CON7001 - HTML5 WebSocket and Java    Danny Coward - Java, Oracle*Wednesday, Oct 3, 4:30 PM - 5:30 PM - Parc 55 - Cyril Magnin IThe family of HTML5 technologies has pushed the pendulum away from rich client technologies and toward ever-more-capable Web clients running on today’s browsers. In particular, WebSocket brings new opportunities for efficient peer-to-peer communication, providing the basis for a new generation of interactive and “live” Web applications. This session examines the efforts under way to support WebSocket in the Java programming model, from its base-level integration in the Java Servlet and Java EE containers to a new, easy-to-use API and toolset that are destined to become part of the standard Java platform.
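    The "new, easy-to-use API" mentioned for CON7001 is the annotation-driven WebSocket programming model that later shipped as the standard javax.websocket package; a minimal echo endpoint in that style looks roughly like the sketch below (illustrative only, not code from the session).

        import javax.websocket.OnMessage;
        import javax.websocket.Session;
        import javax.websocket.server.ServerEndpoint;

        // Picked up automatically by a Java EE container with WebSocket support
        @ServerEndpoint("/echo")
        public class EchoEndpoint {

            @OnMessage
            public String onMessage(String message, Session session) {
                // Returning a value sends it straight back to the connected peer
                return "echo: " + message;
            }
        }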

    Read the article

  • How to copy a bunch of pages? Is there a 3rd party tool?

    - by unknown (yahoo)
    (I asked the following question at the DNN forum, and also at snowcovered. Nobody knew of such an obvious time-saver being for sale. I'm posting here in case anybody knows of a freeware module that might do this.) By "groups of dnn pages", I mean pages that form a hierarchy (not necessary a hierarchy that is headed with a page at the same level as the Home page.) I know that I can copy web pages, one by one, using the admin login via the web-based dnn interface. But, I'd prefer a script or wizard, of some sort (that runs scripts behind the scenes) that can allow me to 1) specify a web page that I want to copy (along with the hierarchy of pages under it) 2) specify the names and titles of the new top-level pages 3) specify whether the contained modules of the top-level page that I want to copy is to be : ( ) New ( ) Copy ( ) Reference (as in the web-based interface) 4) repeat 3) for each of the source pages in the hierarchy that I want to copy You might say that I am looking to do something similar to creating a portal web site based on a template, except that it's not an entirely new website - instead it's a section of the current web site. I might want to do this because I have an organization which is broken into chapters, and I want each chapter to have, say, it's own General Information page (which acts like it's home page), and underneath that, in it's hierarchy, a Contact Info page and an Events page. so: Home Page   General Information Page     Contact Info     Events -- Home Page   General Information Page     Contact Info     Events   General Information Page Kiwanis - Bloomfield     Contact Info     Events   General Information Page Kiwanis - Dayton     Contact Info     Events If I have 200 chapters, I certainly don't want to copy those 3 web pages using the web based interface, as that would take a long time. (And imagine if each chapter's new sub-website had 30 pages!) I just want to specify the parameters of a copy process, then press a button, and let the system do the rest.

    Read the article

  • Welcome to the ISV Migration Center (IMC) Team blog

    - by lukasz.romaszewski(at)oracle.com
    Welcome to the ISV Migration Center (IMC) Team blog. The IMC is a team of senior Oracle technical consultants whose aim is to enable partners to rapidly and successfully adopt and implement Oracle's latest technology. The IMC consultants are trained and equipped to deliver leading-edge, enterprise-quality technology solutions. This blog has been created to serve as an information exchange platform on Oracle Fusion Middleware and Database products, so you will find how-tos, articles, demos and other technical resources. We will also publish our upcoming workshops, webcasts and seminars, so make sure you check it regularly to get the latest updates.

    Here's our team:

    Lukasz Romaszewski - Java and middleware specialist, with 8 years' experience in architecting, developing and supporting enterprise solutions based on J2EE and Oracle Database technology. At Oracle since April 2008, working as an IMC Migration Consultant in the Oracle Partner Hub in Cracow, Poland. Helps Oracle partners migrate their solutions to the latest Oracle Fusion Middleware stack, running hands-on migration workshops and seminars across Europe. Experienced in the following areas and products: Oracle WebLogic Application Server 11g; Application Development Framework (ADF); Oracle SOA Suite 11g; Oracle Forms 6i, 10g and 11g; Oracle Database (PL/SQL, AQ, XML DB); Java EE 5.0 based architecture.

    Murat Teksoz - Oracle Database and DB options, Oracle Linux, APEX and Oracle Business Intelligence specialist, with 13 years' experience in database management, performance tuning, diagnostics, installation and configuration, database security, and high availability and disaster recovery solutions. At the Oracle IMC Istanbul since September 2008, delivering partner workshops and seminars in Europe and Central Asia. Experienced in the following areas and products: Oracle 9i, 10g and 11g Database solutions; Oracle Partitioning, Total Recall and Advanced Compression; Oracle high availability solutions - Real Application Clusters; Oracle disaster recovery solutions - Oracle Data Guard; Oracle Grid Control; Oracle Linux; Oracle Business Intelligence solutions - Oracle BI 10g-11g; migration tools (SQL Developer) for migrating from SQL Server, MySQL, Sybase and DB2 to Oracle Database; Oracle APEX (Application Express).

    Vadim Melnikov - Oracle Database specialist with DB options, Linux and virtualization skills. Vadim has more than 8 years' experience with Oracle products and now works as a Database consultant in the Oracle IMC Moscow as an employee of the FORS Development Center, a Russian Oracle Platinum partner. Helps Oracle partners migrate solutions to Oracle from other platforms and adopt new Oracle technologies, running workshops and seminars. Experienced in the following areas and products: Oracle Database 9i, 10g and 11g solutions (SQL, PL/SQL, installation, configuration, performance tuning, diagnostics, database management); Oracle DB options (Partitioning, Total Recall, Advanced Compression); Oracle Enterprise Manager; Oracle Enterprise Linux; Oracle VM 2 for x86; migration to Oracle Database; Oracle Application Express.

    Gokhan Gungor - Java (J2EE) lead developer and architect. Has designed and developed web applications, middleware systems/services, desktop applications and back-end tools/services using Java, WebLogic Server, JBoss and open source frameworks. Joined Oracle in 2010 as a Fusion Middleware consultant in the Istanbul IMC, responsible for running migration and adoption workshops and seminars covering Java technology, ADF, WebLogic and SOA, and for providing technical consultancy on migration projects. Experienced in the following areas and products: Oracle WebLogic Server; Application Development Framework (ADF); JDeveloper; Java EE (EJB, JMS, Servlet, JSP, JSF, JavaMail, JTA, JAAS, JSTL, JAXB); Java SE (JavaBeans, JDBC, XML, XSL, RMI, JNDI, JAXP); Oracle Database 10g and 11g.

    Dmitry Nefedkin - Oracle Middleware and Java specialist, with 7+ years' experience in developing and designing enterprise solutions based on Oracle Database and Middleware, developing Oracle E-Business Suite customizations, and designing integration architecture within companies. Joined the Oracle team in October 2010 as an IMC FMW Consultant in Oracle Alliances & Channels in Moscow, Russia. Experienced in the following areas and products: Oracle WebLogic Application Server 11g; Oracle Service Bus 11g; Oracle SOA Suite 10g (BPEL PM, ESB, OWSM); Oracle Application Server 10g; Oracle Forms 6i and 9i; Oracle BI Publisher; Oracle ADF 10g; Oracle Database (SQL tuning, PL/SQL, AQ, Streams); Java EE 5 development.

    Check out our web site as well: http://www.oracle.com/partners/en/most-popular-resources/027930

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, if CrashPlan has backed-up a particular file, including the current updates to that file. I.e., that the current contents of a file are backed up. It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy, I could compare the backup time with the last modification time to the file, but that method appears to be somewhat dubious. A couple of methods I could think of, but I don't know how: Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way. I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups to my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) a group of WALs are backed up, and remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed-up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet. I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely). (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)

    Read the article

  • Central Authentication For Windows, Linux, Network Devices

    - by mojah
    I'm trying to find a way to centralize user management & authentication for a large collection of Windows & Linux Servers, including network devices (Cisco, HP, Juniper). Options include RADIUS/LDAP/TACACS/... Idea is to keep track with staff changes, and access towards these devices. Preferably a system that is compatible with both Linux, Windows & those network devices. Seems like Windows is the most stubborn of them all, for Linux & Network equipment it's easier to implement a solution (using PAM.D for instance). Should we look for an Active Directory/Domain Controller solution for Windows? Fun sidenote; we also manage client systems, that are often already in a domain. Trust-relationships between Domain Controllers isn't always an option for us (due to client security restrictions). I'd love to hear fresh ideas on how to implement such a centralized authentication "portal" for those systems.

    Read the article

  • Having trouble mapping Sharepoint document library as a Network Place

    - by Sdmfj
    I am using Office 365, SharePoint Online 2013. Using Internet Explorer, these are the steps I have taken: I ticked "keep me signed in" on the portal.microsoftonline.com page (it redirects me to a GoDaddy login page because Office 365 was purchased through them), added these sites to Trusted Sites (as well as every page in the process), and chose automatic logon in Internet Explorer. Once on the document library, I use "Open with Explorer" and copy the address as text. I then go to My Computer, right-click to add a network place, and paste in the document library address. It successfully adds the library as a network place about 30% of the time. I can do this same process 3 times in a row and it will fail the first 2 times and then succeed. It works for a little while and then I get an error that the DNS cannot be found.

    I need multiple users in our organization to be able to access this document library as if it were a mapped network drive on our local network. Is there an easier way to do this? I may just sync using the OneDrive app, but I was hoping for direct access to the files without having to worry about users keeping their files synced.
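    If the end goal is really a drive letter rather than a Network Place, the WebDAV UNC form of the library can be mapped directly; the tenant, site and library names below are placeholders, and this still depends on the WebClient service running and a persistent sign-in in Internet Explorer:

        net use S: "\\yourtenant.sharepoint.com@SSL\DavWWWRoot\sites\TeamSite\Shared Documents" /persistent:yes

    The same command can be pushed to multiple users via a logon script, which avoids each person repeating the add-network-place wizard.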

    Read the article

  • Office 365 domain federation conversion failed

    - by Matt Bear
    We're doing things backwards: we have an established Office 365 domain, with 400+ users, and are just now deploying local AD, and ADFS for SSO. Last night, after configuring my servers, I ran the PowerShell command Convert-MsolDomainToFederated to convert the xxx.com vanity domain to federated. It errored out with an unspecified error (Microsoft ADFS support said the error has to do with the default password settings being changed), and when I run Convert-MsolDomainToStandard, it comes back saying the domain is already standard. Also, in the Office 365 portal it shows the domain as standard; however, it is trying to process login attempts as if it were a federated domain. I've spent 5 hours total on the phone with Microsoft, and it has been escalated to their engineering department for resolution, sometime within the next few days... I need it yesterday.

    From what we can gather, the conversion process started, errored out, changed some of the internal configuration to federated, but left the description as standard (if that makes sense). So it's in a weird limbo, where it's in both modes but neither at the same time. Currently, the only way to fix it is to remove the vanity domain and re-add it. I need a way to dissociate the user accounts from the xxx.com domain to allow its removal. Removing the users themselves is not an option.
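    If Microsoft's engineers confirm the domain is stuck half-converted, the usual lever for forcing the authentication type back without removing the domain is Set-MsolDomainAuthentication; treat the following as a sketch to discuss with support rather than something to run blind against a production tenant:

        # Connect with the MSOnline module first
        Connect-MsolService

        # Force the vanity domain back to standard (managed) authentication
        Set-MsolDomainAuthentication -DomainName xxx.com -Authentication Managed

        # Confirm what Office 365 now reports for the domain
        Get-MsolDomain -DomainName xxx.com | Select-Object Name, Authentication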

    Read the article

  • Problem running “Central Administration” website after Windows update on Windows 2003 Server Standard

    - by Magdy Roshdy
    I had WSS 2.0 and then upgraded to WSS 3.0; the old installation database was SQL 2000, and now I have another SQL Server instance called server_name\MICROSOFT##SSEE. After the upgrade everything worked fine, our team started to use the portal, and we put a lot of documents and activity on it. The problem started after installing Windows updates: the website suddenly stopped, giving me the error "Cannot connect to the configuration database". If I try to open the SharePoint Products and Technologies Configuration Wizard it gives me a strange error: "An exception of type Microsoft.SharePoint.PostSetupConfiguration.PostSetupConfigurationTaskException was thrown. Additional exception information: SharePoint Products and Technologies cannot be configured. The current installation mode does not support SKU to SKU upgrades because there exists an older version of Windows SharePoint Services that must be upgraded first."

    At this post: http://stackoverflow.com/questions/114398/iis-error-cannot-connect-to-the-configuration-database/249494#249494 the author of the second answer had the same problem and suggested a solution, but I don't understand it well. I tried, as he suggested, to set the identity of the app pool of the SharePoint web site to IWAM_server_name. After that the error changed, as he said it would, and the web site now gives me "Server Application Unavailable". When I checked the Event Viewer on the server I found that ASP.NET 2.0 raises this exception: "Could not load file or assembly 'System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Access is denied." I don't know how to solve this problem. I really want to get the web site working because our team really needs these documents. I hope I will find someone to help me.

    Read the article

  • Acer Aspire Touchscreen Not Responding

    - by Jerry
    I have an Acer Aspire Z3101 all-in-one touch screen running Windows 7 Home Premium. I am unable to get the screen to respond to my touch. I have checked the system, and all drivers are OK according to the computer. Using the settings to set up pen and touch, and to calibrate, does not work; my touch brings no response at all. I reinstalled the applications - Windows Touch Pack and Acer Touch Portal - which didn't help. Today I did a complete reinstall, returning the computer to factory settings, and still there is no response. At first I thought that perhaps a driver or file was missing, but I now doubt that, since the reinstall didn't help.

    Have you any idea what could be causing this problem? I am now at the stage of pulling my hair out and haven't had much help from Acer. I have had the computer for just over a year now, so it is no longer under guarantee, and this has me very worried due to my being unemployed. Any help is much appreciated. Thank you.

    Read the article

  • What switch should we use for PCoIP?

    - by Jay R.
    We have a small lab space that seats 10 people and has 20 machines. Each machine is set to 1920x1200 resolution because the user apps are best used at that resolution. Currently the machines are all located close enough to montors that a DisplayPort cable will reach, but the pending lab remodel positions them around 80 feet or more away in racks. Our proposed solution is to use PCoIP. We purchased 10 PCoIP portals and 20 PCoIP host cards. We plan to set up a dedicated network to handle just the PCoIP traffic. After testing just one portal and one host card with a cheap 1G switch from a local office supply store, we were left with less than good impressions about the usefulness in our lab. The framerates were not spectacular and the mouse seemed jerky. Our concern is that we can't get away with the cheap 1G stuff from the store because adding more machines to the switch will just make the user experience worse. What switch would be recommended to best support our PCoIP situation? We will need to plug in at least 30 cables based on just those machines. Is there a particular feature to search for that makes a difference? Is there a switch that works best with PCoIP? Added Info: The reporting webapp for the host card shows maximum bandwidth usage to be 220000 kbps. The average appears to be around 180000 kbps. The reverse direction is much lower, like 15000 kbps.
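    Working from the figures above: 220,000 kbps is roughly 220 Mbit/s peak per host, so 20 hosts could in theory burst to about 4.4 Gbit/s toward the portals, and the ~180,000 kbps average works out to about 3.6 Gbit/s aggregate. That is well within the backplane of any non-blocking gigabit switch with 30+ ports, but each host-to-portal flow is already using around a fifth of a single gigabit port at peak, so a consumer-grade switch with small per-port buffers will show exactly the jerky behaviour described. The usual guidance for PCoIP is a managed, non-blocking switch with adequate port buffering, flow control, and ideally a dedicated VLAN or QoS priority for the PCoIP traffic (this last part is general PCoIP deployment advice rather than something specific to this lab).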

    Read the article

  • Questions about NGINX limit_req_zone

    - by Meteor
    I've got a problem with NGINX limit_req_zone. Can anyone help? The problem is that I want to limit user access to some specific URLs, for example:

        /forum.php?mod=forumdisplay?
        /forum.php?mod=viewthread&***

    But I want to add an exception for the URL below:

        /forum.php?mod=image&*

    Below is the location section of my configuration. The problem is that for URLs starting with /forum.php?mod=image&*, the limit is still applied. Can anybody help?

        location ~*^/forum.php?mod=image$ {
            root /web/www;
            fastcgi_pass unix:/tmp/nginx.socket;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~*^/(home|forum|portal).php$ {
            root /web/www;
            limit_conn addr 5;
            limit_req zone=refresh burst=5 nodelay;
            fastcgi_pass unix:/tmp/nginx.socket;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~ \.php$ {
            root /web/www;
            fastcgi_pass unix:/tmp/nginx.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }
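    The reason the exception location never matches is that an nginx location only ever sees the normalized URI (/forum.php): the query string is not part of the match, and the ? in the regex is read as a quantifier. One way around it is to key the zones on $arg_mod via a map in the http block, since requests whose key evaluates to an empty string are not counted or limited at all. The zone sizes and rate below are placeholders, not values taken from this configuration:

        # http {} context -- an empty key means the request is not limited
        map $arg_mod $limit_key {
            default $binary_remote_addr;
            image   "";
        }
        limit_conn_zone $limit_key zone=addr:10m;
        limit_req_zone  $limit_key zone=refresh:10m rate=2r/s;

        # server {} context -- a single PHP location is then enough
        location ~ ^/(home|forum|portal)\.php$ {
            root /web/www;
            limit_conn addr 5;
            limit_req zone=refresh burst=5 nodelay;
            fastcgi_pass unix:/tmp/nginx.socket;
            fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
            include fastcgi_params;
        }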

    Read the article

  • Spring to Java EE 6?! A report from the "Best Practices for Migrating Spring to Java EE 6" session at Java Developers Workshop 2012 Summer

    - by ???02
    Is Java EE 6 a realistic target for systems built on the Spring Framework? That question was the theme of "Best Practices for Migrating Spring to Java EE 6", a session delivered by one of Oracle's Java technology evangelists at the Java Developers Workshop 2012 Summer event held in August, and this article summarizes the session for developers and IT architects who could not attend.

    Why move from Spring Framework to Java EE 6? Spring grew up in the J2EE 1.4 era described in the book "J2EE Design and Development", when the standard platform was heavyweight and hard to use, and for years Spring was the pragmatic choice. Roughly eight years on, the standard platform has caught up: Java EE 6 now covers most of what teams originally adopted Spring for, and applications written against five- or six-year-old Spring versions face their own upgrade costs. The argument of the session was that when such an application is due for an overhaul anyway, Java EE 6 deserves to be evaluated alongside a Spring version upgrade, rather than the migration being done for its own sake.

    The session highlighted several strengths of Java EE 6. Deployments are lightweight: on GlassFish, a Java EE 6 WAR can come in at around 100KB, because the container supplies the infrastructure that would otherwise have to be bundled with the application. Dependency injection, popularized by Spring, is now standard as CDI (Contexts and Dependency Injection), with @Inject taking the role of Spring's injection annotations. Where Spring offers AOP, Java EE 6 offers Interceptors for the common cross-cutting cases. IDE support is no longer a differentiator, since NetBeans and Eclipse both support Java EE 6 development out of the box. For integration testing, Arquillian lets JUnit-based tests run inside a real container such as GlassFish or JBoss.

    The core of the session was five migration scenarios from Spring to Java EE:

    - Scenario 1: A classic Spring application with XML configuration, O/R mapping and Spring Web MVC is moved over to the corresponding Java EE 6 standards. Because Spring 3.x itself already supports JPA and JSF, the transition can be made stepwise.
    - Scenario 2: A Spring web application is rebuilt on JSF and JPA. Teams unhappy with JSF also look at alternatives such as the Play Framework, but moving to the standard stack removes the dependency on Spring-specific APIs.
    - Scenario 3: Spring and Java EE 6 run side by side in the same WAR, with Spring beans coexisting with CDI beans and session beans (CDI/EJB), so that EJBs can be called from Spring-managed code while the migration proceeds piece by piece.
    - Scenario 4: The parts of Spring a team wants to keep, such as Spring JDBC Templates and DAOs, are retained; a CDI producer (a "Template Producer") can hand a JDBC Template to Java EE code, while EJB and JPA take over transaction and persistence duties.
    - Scenario 5: A Spring application is simply deployed onto a Java EE 6 server, with the Spring dependencies declared in pom.xml and packaged inside the WAR.

    The session closed with two further points: such migrations work best done incrementally rather than as a single big rewrite, and the standards process (the JCP) has absorbed many ideas that began outside it - Hibernate informing JPA being the obvious example - so adopting the standard platform does not mean giving up what made Spring attractive in the first place.

    The session material is available through the Oracle Technology Network content portal under "Java Developers Workshop 2012 Summer - Java EE sessions"; an oracle.com account is required to download it, and registration is free.
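    To make the DI comparison concrete, here is roughly what the CDI side of the migration looks like; the bean and method names are invented for illustration, and both classes are assumed to be packaged in a WAR with a beans.xml so the container manages them:

        import javax.enterprise.context.ApplicationScoped;
        import javax.inject.Inject;

        // CustomerRepository.java -- an ordinary CDI-managed bean
        @ApplicationScoped
        public class CustomerRepository {
            public void save(String name) { /* persist the customer here */ }
        }

        // CustomerService.java -- field injection via the standard @Inject annotation,
        // playing the role Spring's injection annotations played before the migration
        @ApplicationScoped
        public class CustomerService {

            @Inject
            private CustomerRepository repository;

            public void register(String name) {
                repository.save(name);
            }
        }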

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
        background Midnight_Blue

        static define matte surface { ambient 0.1 diffuse 0.7 }
        define matte_white texture { matte { color white } }
        define matte_black texture { matte { color dark_slate_gray } }
        define position_cylindrical 3
        define lookup_sawtooth 1
        define light_wood <0.6, 0.24, 0.1>
        define median_wood <0.3, 0.12, 0.03>
        define dark_wood <0.05, 0.01, 0.005>

        define wooden texture {
          noise surface {
            ambient 0.2  diffuse 0.7  specular white, 0.5
            microfacet Reitz 10
            position_fn position_cylindrical position_scale 1
            lookup_fn lookup_sawtooth octaves 1 turbulence 1
            color_map(
              [0.0, 0.2, light_wood, light_wood]
              [0.2, 0.3, light_wood, median_wood]
              [0.3, 0.4, median_wood, light_wood]
              [0.4, 0.7, light_wood, light_wood]
              [0.7, 0.8, light_wood, median_wood]
              [0.8, 0.9, median_wood, light_wood]
              [0.9, 1.0, light_wood, dark_wood])
          }
        }
        define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 } }
        define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7 }
        define steely_blue texture { shiny { color black } }
        define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }

        viewpoint {
          from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60
          resolution 640, 480 aspect 1.6 image_format 0
        }

        light <-10, 30, 20>
        light <-10, 30, -20>

        object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }
        object { sphere <0.000, 0.000, 0.000>, 1.00 chrome }
        object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }

    After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later.

    Modeling the Pin Board

    The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal.

        object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass }
        object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue }
        object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue }
        object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue }
        object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }

    In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code.
    For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used.

Windows Kinect

The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions.

Creating a Depth Field Animation

The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation. The Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin.

Rendering a Test Frame

In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board.
    The 1280x720 image contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than on an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour.

Windows Azure Worker Roles

The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.

    Number of Servers    Cost
    1                    $500
    16                   $8,000
    256                  $128,000

As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.

    Number of Worker Roles    Render Time    Cost
    1                         256 hours      $30.72
    16                        16 hours       $30.72
    256                       1 hour         $30.72

Using worker roles in Windows Azure results in the same cost for the 256 hour job, irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles.

Creating a Render Farm in Windows Azure

The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:

·         On-Premise
o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the scenes blob container, and job messages added to the jobs queue.
o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
·         Windows Azure
o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.

The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image.
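Before looking at the worker role's service definition below, it is worth sketching what the on-premise Animation Creator does for each frame: upload the scene file and enqueue a job. The application itself is not listed in the article, so this is only a rough sketch, assuming the Windows Azure storage client library that shipped with the 1.x SDKs; the class and method names should be checked against the SDK version in use, and the file paths are placeholders. The container name "scenes" and queue name "renderjobs" are taken from the worker role code shown later.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Minimal sketch of the on-premise Animation Creator queuing one render job.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=...");

// Upload the scene description file to the "scenes" blob container.
CloudBlobClient blobClient = account.CreateCloudBlobClient();
CloudBlobContainer scenes = blobClient.GetContainerReference("scenes");
scenes.CreateIfNotExist();
scenes.GetBlobReference("frame0001.pi").UploadFile(@"C:\CloudRay\Scenes\frame0001.pi");

// Add a message to the "renderjobs" queue so a worker role picks the frame up.
CloudQueueClient queueClient = account.CreateCloudQueueClient();
CloudQueue jobs = queueClient.GetQueueReference("renderjobs");
jobs.CreateIfNotExist();
jobs.AddMessage(new CloudQueueMessage("frame0001.pi"));

The worker role code shown later consumes exactly this pair of artifacts: the scene blob, and the queue message carrying its file name.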
    The service definition for the worker role, with the local storage configuration highlighted, is shown below.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="CloudRay" >
  <WorkerRole name="CloudRayWorkerRole" vmsize="Small">
    <Imports>
    </Imports>
    <ConfigurationSettings>
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />
    </LocalResources>
  </WorkerRole>
</ServiceDefinition>

The two executable programs, PolyRay.exe and DTA.exe, are included in the Azure project with the Copy Always property set. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90s, it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90s. Each worker role will use the following process to render the animation frames.

1. The worker process polls the job queue; if a job is available, the scene description file is downloaded from blob storage to local storage.
2. PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file.
3. DTA.exe is started in a process with the appropriate command line arguments to convert the TGA file to a JPG file.
4. The JPG file is uploaded from local storage to the images blob container.
5. A message is placed on the images queue to indicate a new image is available for download.
6. The job message is deleted from the job queue.
7. The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used.

The code for this is shown below.

public override void Run()
{
    // Set environment variables
    string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);
    string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);

    LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");
    string localStorageRootPath = rayStorage.RootPath;

    JobQueue jobQueue = new JobQueue("renderjobs");
    JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");
    CloudRayBlob sceneBlob = new CloudRayBlob("scenes");
    CloudRayBlob imageBlob = new CloudRayBlob("images");
    RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();

    Frames = 0;

    while (true)
    {
        // Get the render job from the queue
        CloudQueueMessage jobMsg = jobQueue.Get();

        if (jobMsg != null)
        {
            // Get the file details
            string sceneFile = jobMsg.AsString;
            string tgaFile = sceneFile.Replace(".pi", ".tga");
            string jpgFile = sceneFile.Replace(".pi", ".jpg");

            string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);
            string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);
            string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);

            // Copy the scene file to local storage
            sceneBlob.DownloadFile(sceneFilePath);

            // Run the ray tracer.
            string polyrayArguments =
                string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);
            Process polyRayProcess = new Process();
            polyRayProcess.StartInfo.FileName = polyRayPath;
            polyRayProcess.StartInfo.Arguments = polyrayArguments;
            polyRayProcess.Start();
            polyRayProcess.WaitForExit();

            // Convert the image
            string dtaArguments =
                string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName(jpgFilePath));
            Process dtaProcess = new Process();
            dtaProcess.StartInfo.FileName = dtaPath;
            dtaProcess.StartInfo.Arguments = dtaArguments;
            dtaProcess.Start();
            dtaProcess.WaitForExit();

            // Upload the image to blob storage
            imageBlob.UploadFile(jpgFilePath);

            // Add a download job.
            downloadQueue.Add(jpgFile);

            // Delete the render job message
            jobQueue.Delete(jobMsg);

            Frames++;
        }
        else
        {
            Thread.Sleep(1000);
        }

        // Log the worker role activity.
        roleLifecycleDataSource.Alive
            ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);
    }
}

Monitoring Worker Role Instance Lifecycle

In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation, data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.

public class RoleLifecycle : TableServiceEntity
{
    public string ServerName { get; set; }
    public string Status { get; set; }
    public DateTime StartTime { get; set; }
    public DateTime EndTime { get; set; }
    public long SecondsRunning { get; set; }
    public DateTime LastActiveTime { get; set; }
    public int Frames { get; set; }
    public string Comment { get; set; }

    public RoleLifecycle()
    {
    }

    public RoleLifecycle(string roleName)
    {
        PartitionKey = roleName;
        RowKey = Utils.GetAscendingRowKey();
        Status = "Started";
        StartTime = DateTime.UtcNow;
        LastActiveTime = StartTime;
        EndTime = StartTime;
        SecondsRunning = 0;
        Frames = 0;
    }
}

A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame, to record the total number of frames rendered and the total processing time. These statistics are used by the monitoring application to determine how effectively the resources in the render farm are being used.

Rendering the Animation

The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows the application to be tested in the cloud environment, and its performance to be determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below.
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="16" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

About six minutes after deploying the application the first worker roles became active and started to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.

Five minutes after the first worker role became active, the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation. With 16 worker roles up and running, it can be seen that one hour and 45 minutes of CPU time has been used to render 32 frames, with a render time of just under 10 minutes. At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour, more processing power will be required.

Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="CloudRayWorkerRole">
    <Instances count="256" />
    <ConfigurationSettings>
      <Setting name="DataConnectionString"
        value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Six minutes after the new configuration has been applied, 75 new worker roles have activated and are processing their first frames. Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.

We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes. The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes, with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute.
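The statistics above come from the role instance lifecycle table described earlier. The Process Monitor application itself is not listed in the article, so the following is only a rough sketch of how the totals might be aggregated, assuming the table storage client from the 1.x SDK; the table name and the query code are assumptions, and only the RoleLifecycle entity comes from the article.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Sketch: aggregate frames rendered and CPU seconds from the lifecycle table.
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudTableClient tableClient = account.CreateCloudTableClient();
TableServiceContext context = tableClient.GetDataServiceContext();

List<RoleLifecycle> instances = context
    .CreateQuery<RoleLifecycle>("RoleLifecycle")       // table name is an assumption
    .Where(r => r.PartitionKey == "CloudRayWorker")
    .ToList();

int totalFrames = instances.Sum(r => r.Frames);
long totalCpuSeconds = instances.Sum(r => r.SecondsRunning);
Console.WriteLine("Active instances: {0}", instances.Count);
Console.WriteLine("Frames rendered:  {0}", totalFrames);
Console.WriteLine("CPU time used:    {0}", TimeSpan.FromSeconds(totalCpuSeconds));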
    The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The 16 instances that were deployed first have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each.

Completed Animation

I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles. The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc

Effective Use of Resources

According to the CloudRay monitor statistics, the animation took 6 days, 7 hours and 22 minutes of CPU time to render; this works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles, and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively.

Grid Computing Scenarios

Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.

·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         The absence of charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)

Tips for using Windows Azure for Grid Computing Scenarios

I found a render farm a fairly simple scenario to implement using Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.

·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started (a simple worked example is sketched after this list).
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks. Bear in mind that adding a startup task and a possible re-boot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
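As a worked example of the billing point above, the following sketch estimates wall-clock time and billed instance hours for a render job. The per-frame render time is taken from the test frame above, the 2,000 frame count is from the article, and the $0.12 hourly rate is derived from the cost table ($30.72 for 256 instance hours); the choice of 200 instances is only an illustration of filling the billed hour.

using System;

class BillingEstimate
{
    static void Main()
    {
        int frames = 2000;
        double minutesPerFrame = 4.45;       // roughly 4 min 27 s per frame on one instance
        int instances = 200;
        double ratePerInstanceHour = 0.12;   // derived from $30.72 / 256 instance hours

        double totalComputeHours = frames * minutesPerFrame / 60.0;      // about 148 hours
        double wallClockHours = totalComputeHours / instances;           // about 0.74 hours
        int billedHoursPerInstance = (int)Math.Ceiling(wallClockHours);  // billed per full clock hour
        double estimatedCost = instances * billedHoursPerInstance * ratePerInstanceHour;

        Console.WriteLine("Wall clock: {0:F2} h, billed instance hours: {1}, cost: ${2:F2}",
            wallClockHours, instances * billedHoursPerInstance, estimatedCost);
    }
}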

    Read the article

  • EJB3 Transaction Propagation

    - by Matt S.
    I have a stateless bean something like:

@Stateless
public class MyStatelessBean implements MyStatelessLocal, MyStatelessRemote {

    @PersistenceContext(unitName="myPC")
    private EntityManager mgr;

    @TransactionAttribute(TransactionAttributeType.SUPPORTED)
    public void processObjects(List<Object> objs) {
        // this method just processes the data; no need for a transaction
        for(Object obj : objs) {
            this.process(obj);
        }
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(Object obj) {
        // do some work with obj that must be in the scope of a transaction
        this.mgr.merge(obj);
        // ...
        this.mgr.merge(obj);
        // ...
        this.mgr.flush();
    }
}

The typical usage, then, is that the client would call processObjects(...), which doesn't actually interact with the entity manager. It does what it needs to do and calls process(...) individually for each object to process. The duration of process(...) is relatively short, but processObjects(...) could take a very long time to run through everything. Therefore I don't want it to maintain an open transaction. I do need the individual process(...) operations to operate within their own transaction. This should be a new transaction for every call. Lastly I'd like to keep the option open for the client to call process(...) directly.

I've tried a number of different transaction types: never, not supported, supported (on processObjects) and required, requires new (on process), but I get TransactionRequiredException every time merge() is called. I've been able to make it work by splitting up the methods into two different beans:

@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class MyStatelessBean1 implements MyStatelessLocal1, MyStatelessRemote1 {

    @EJB
    private MyStatelessBean2 myBean2;

    public void processObjects(List<Object> objs) {
        // this method just processes the data; no need for a transaction
        for(Object obj : objs) {
            this.myBean2.process(obj);
        }
    }
}

@Stateless
public class MyStatelessBean2 implements MyStatelessLocal2, MyStatelessRemote2 {

    @PersistenceContext(unitName="myPC")
    private EntityManager mgr;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(Object obj) {
        // do some work with obj that must be in the scope of a transaction
        this.mgr.merge(obj);
        // ...
        this.mgr.merge(obj);
        // ...
        this.mgr.flush();
    }
}

but I'm still curious if it's possible to accomplish this in one class. It looks to me like the transaction manager only operates at the bean level, even when individual methods are given more specific annotations. So if I mark one method in a way that prevents the transaction from starting, will calling other methods within that same instance also not create a transaction, no matter how they're marked? I'm using JBoss Application Server 4.2.1.GA, but non-specific answers are welcome / preferred.
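One way to keep both methods in a single bean, not part of the original post and offered here only as a possible approach, is to route the internal call through the container by obtaining the bean's own business interface from the SessionContext; transaction interceptors are only applied when the call goes through the container proxy, not on a plain "this" invocation. A minimal sketch:

import java.util.List;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class MyStatelessBean implements MyStatelessLocal, MyStatelessRemote {

    @PersistenceContext(unitName = "myPC")
    private EntityManager mgr;

    @Resource
    private SessionContext ctx;

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void processObjects(List<Object> objs) {
        // Calling through the container-provided business object (instead of "this")
        // lets the REQUIRES_NEW attribute on process() take effect for each call.
        MyStatelessLocal self = ctx.getBusinessObject(MyStatelessLocal.class);
        for (Object obj : objs) {
            self.process(obj);
        }
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(Object obj) {
        mgr.merge(obj);
        mgr.flush();
    }
}

Whether this behaves well under JBoss 4.2.1 specifically would need to be verified; the two-bean split described in the question remains the more conventional solution.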

    Read the article

  • How to Package Resources in a Maven Project?

    - by Rehman
    I am looking for suggestions on where to put an image file in a Maven web project. One of the classes under src/main/java needs to use an image file. My problem is that if I put the image file under src/main/webapp/images, the application server cannot find that path at runtime (where MyEclipse can), because the WAR archive does not contain the path "src/main/webapp/images". My question is: where should I put the image file so that my project can find it without any error or exception? I am using a Java web application with Maven support in MyEclipse 10, and the application server is JBoss 6. Currently I have the following directory structure:

Project Directory Structure
- src/main/java
  -- classes
- src/main/resources
  -- applicationContext.xml
- src/test/java
  -- test classes
- src/test/resources (nothing)
- src
  -- main
    --- webapp
      -- WEB-INF
      -- Images
      -- Css
      -- Meta-INF
- target
- pom.xml

And the pom.xml build snippet is as follows:

<build>
  <sourceDirectory>${basedir}/src/main/java</sourceDirectory>
  <outputDirectory>${basedir}/target/${project.artifactId}/classes</outputDirectory>
  <resources>
    <resource>
      <directory>${basedir}/src/main/resources</directory>
      <excludes>
        <exclude>**/*.java</exclude>
      </excludes>
    </resource>
  </resources>
  <testResources>
    <testResource>
      <directory>${basedir}/src/test/resources</directory>
    </testResource>
  </testResources>
  <finalName>${project.artifactId}</finalName>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>${jdk.version}</source>
        <target>${jdk.version}</target>
        <optimize>true</optimize>
        <useProjectReferences>true</useProjectReferences>
      </configuration>
    </plugin>
    <plugin>
      <artifactId>maven-war-plugin</artifactId>
      <version>2.1.1</version>
      <configuration>
        <warSourceDirectory>${basedir}/src/main/webapp</warSourceDirectory>
        <webResources>
          <resource>
            <directory>${basedir}/src/main/resources</directory>
          </resource>
        </webResources>
      </configuration>
    </plugin>
  </plugins>
</build>

Thanks in advance
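A common approach, offered here as a general suggestion rather than anything from the original post, is to place the image under src/main/resources (for example src/main/resources/images/logo.png, a placeholder name), which Maven copies onto the classpath inside the WAR, and then load it from the classpath rather than from a source-tree path. A minimal sketch:

import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.InputStream;
import javax.imageio.ImageIO;

public BufferedImage loadLogo() throws IOException {
    // The image is packaged from src/main/resources/images/logo.png onto the classpath.
    InputStream in = getClass().getResourceAsStream("/images/logo.png");
    if (in == null) {
        throw new IllegalStateException("Image not found on the classpath");
    }
    try {
        return ImageIO.read(in);
    } finally {
        in.close();
    }
}

Files under src/main/webapp/images are packaged in the WAR as web resources, not classpath resources, which is why the server cannot resolve them with a source-tree path at runtime; if the file must stay there, ServletContext.getResourceAsStream("/images/logo.png") would be the way to read it from web code.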

    Read the article

  • How to Configure SSL over Database in Spring?

    - by Sameer Malhotra
    Hi, I want to add SSL security to the database layer. I am using Struts 2.1.6, Spring 2.5, JBoss 5.0 and Informix 11.5. Any idea how to do this? I have done a lot of research on the internet but could not find a solution. Please suggest! Here are my datasource and entity manager beans, which work perfectly without SSL:

<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="dataSource" ref="dataSource" />
  <property name="jpaVendorAdapter">
    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
      <property name="database" value="INFORMIX" />
      <property name="showSql" value="true" />
    </bean>
  </property>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="com.informix.jdbc.IfxDriver" />
  <property name="url" value="jdbc:informix-sqli://SERVER_NAME:9088/DB_NAME:INFORMIXSERVER=SERVER_NAME;DELIMIDENT=y;" />
  <property name="username" value="username" />
  <property name="password" value="password" />
  <property name="minIdle" value="2" />
</bean>

<bean class="org.springframework.beans.factory.config.MethodInvokingFactoryBean" lazy-init="false">
  <property name="targetObject" ref="dataSource" />
  <property name="targetMethod" value="addConnectionProperty" />
  <property name="arguments">
    <list>
      <value>characterEncoding</value>
      <value>UTF-8</value>
    </list>
  </property>
</bean>

<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate" scope="prototype">
  <property name="dataSource" ref="dataSource" />
</bean>

<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
  <property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>

<tx:annotation-driven transaction-manager="transactionManager" />
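One possible direction, not from the original post and with the Informix-specific details being assumptions that should be verified against IBM's documentation, is to enable an SSL-capable listener on the Informix server (as I understand it, an sqlhosts entry using an SSL protocol) and point the JDBC URL at that listener's port, while giving the JVM a trust store that contains the server certificate. The trust store settings below are the standard JSSE system properties; the port number and file paths are placeholders.

<!-- Datasource pointing at the server's SSL-enabled listener port (9089 is a placeholder). -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="com.informix.jdbc.IfxDriver" />
  <property name="url"
      value="jdbc:informix-sqli://SERVER_NAME:9089/DB_NAME:INFORMIXSERVER=SERVER_NAME_SSL;DELIMIDENT=y;" />
  <property name="username" value="username" />
  <property name="password" value="password" />
  <property name="minIdle" value="2" />
</bean>

<!-- The JVM also needs a trust store holding the server certificate, for example in JBoss's run.conf:
     JAVA_OPTS="$JAVA_OPTS -Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
-->

The rest of the Spring configuration (entity manager factory, transaction manager, and so on) should not need to change; only the connection itself is affected.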

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >