Search Results

Search found 615 results on 25 pages for 'sean farley'.

Page 23/25 | < Previous Page | 19 20 21 22 23 24 25  | Next Page >

  • php / mysql pagination

    - by arrgggg
    Hi, I have a table with 58 records in a MySQL database. I was able to connect to my database, retrieve all the records, and build 5 pages with links to view each page using a PHP script. The webpage looks like this: name number john 1232343456 tony 9878768544 jack 3454562345 joe 1232343456 jane 2343454567 andy 2344560987 marcy 9873459876 sean 8374623534 mark 9898787675 nancy 8374650493 1 2 3 4 5 That's the first page of the 58 records, and those 5 numbers at the bottom are links to each page, each displaying the next 10 records. I got all that working. What I want to do now is display the links this way: 1-10 11-20 21-30 31-40 41-50 51-58. Note: since I have 58 records, the last link goes up to 58 instead of 60. Since I use a loop to create these links, they should adapt to however many records are in my table. How can I do this? Thanks.
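
    The range labels themselves are just arithmetic on the total row count and the page size. As a minimal sketch (not from the original thread - the file name, variables and link target are placeholders), the loop below prints "1-10 ... 51-58" style links, capping the last range with min():

        <?php
        // Sketch only: $total would normally come from a COUNT(*) query;
        // $perPage and the link target are illustrative.
        $total   = 58;
        $perPage = 10;
        $pages   = (int) ceil($total / $perPage);

        for ($i = 0; $i < $pages; $i++) {
            $start = $i * $perPage + 1;                 // 1, 11, 21, ...
            $end   = min(($i + 1) * $perPage, $total);  // last page stops at 58
            echo '<a href="list.php?page=' . ($i + 1) . '">' . $start . '-' . $end . '</a> ';
        }
        ?>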

    Read the article

  • jQuery: How do I fire/play a sound file when I want?

    - by Sotkra
    I have some code that basically inflates a 'balloon' through 15 or so stages and then makes it pop at the 16th stage (yes, the images are swapped). What I'm wondering is whether it's possible to use jQuery to play a sound file whenever I reach that 16th stage (or when whatever variable reaches whatever value) - in other words, when I want. I've found several jQuery sound plugins, but they all create a player which I must then click for it to play the file. How do I skip that 'click' part so that the sound is played directly/automatically? http://www.sean-o.com/jquery/jmp3/ http://www.happyworm.com/jquery/jplayer/ All help is appreciated. G.Campos
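
    One plugin-free possibility (an illustration, not from the original question - the file name, selector and stage counter are placeholders standing in for the poster's own code) is to create an HTML5 Audio object up front and call play() on it from whatever handler or condition you like:

        // Sketch only: 'pop.mp3', #balloon and the stage counter are placeholders.
        var stage = 0;
        var popSound = new Audio('pop.mp3');   // no visible player is created

        $('#balloon').click(function () {
            stage++;
            if (stage === 16) {
                popSound.play();               // fires the sound directly
            }
        });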

    Read the article

  • Running sites on "localhost" is extremely slow.

    - by seanxe
    Hello all, I'm having real trouble using my localhost to test sites. It runs extremely slowly - sometimes up to a minute to load a page. I'm using Firefox, and the sites I'm testing run fine on the other developers' local machines in my office and on the production server. I've gone through the normal things: disabled IPv6, made sure I'm not running in debug mode, put the site in the highest app pool (High Isolated) on IIS 6, turned off firewalls, etc. The problem only seems to occur when hitting pages which contain some form of .NET code in the code-behind. I appreciate that this is a bit of a vague topic / stab in the dark, but I would appreciate any sort of advice - it's horrible waiting a minute on each refresh to try out a change! Cheers, Sean.

    Read the article

  • OpenGL "out of memory" on glReadPixels()

    - by spurserh
    Hello, I am running into an "out of memory" error from OpenGL on glReadPixels() under low-memory conditions. I am writing a plug-in to a program that has a robust heap mechanism for such situations, but I have no idea whether or how OpenGL could be made to use it for application memory management. The notion that this is even possible came to my attention through this [albeit dated] thread on a similar issue under Mac OS X: http://lists.apple.com/archives/Mac-opengl/2001/Sep/msg00042.html I am using Windows XP, and have seen it on multiple NVidia cards. I am also interested in any work-arounds I might be able to relay to users (the thread mentions "increasing virtual memory"). Thanks, Sean
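
    One thing sometimes worth trying under low-memory conditions (an illustration, not from the original post, and no guarantee it addresses this particular driver error) is to read the framebuffer back in horizontal strips, so that only a small client-side buffer has to be allocated at any one time; the strip height and the per-strip handling below are placeholders:

        // Hedged sketch: read the framebuffer in strips to keep the read-back
        // buffer small. width/height and stripRows are caller-supplied.
        #ifdef _WIN32
        #include <windows.h>   // must precede gl.h on Windows
        #endif
        #include <GL/gl.h>
        #include <vector>

        void readFramebufferInStrips(int width, int height, int stripRows)
        {
            std::vector<unsigned char> strip(static_cast<size_t>(width) * stripRows * 4);

            for (int y = 0; y < height; y += stripRows)
            {
                int rows = (y + stripRows <= height) ? stripRows : (height - y);
                glReadPixels(0, y, width, rows, GL_RGBA, GL_UNSIGNED_BYTE, &strip[0]);
                // ...hand the strip off to the plug-in's own heap/processing here...
            }
        }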

    Read the article

  • Automatically registering "commands" for a command line program in python

    - by seandavi
    I would like to develop a command-line program that can process and give "help" for subcommands. To be concrete, say I have a single script called "cgent" and I would like to have subcommands "abc", "def", and "xyz" execute and accept the rest of sys.argv for processing by optparse. cgent abc [options] cgent help abc .... All of this is straightforward if I hard-code the subcommand names. However, I would like to be able to continue to add subcommands by adding a class or module (?). This is similar to the idea that is used by web frameworks for adding controllers, for example. I have tried digging through Pylons to see if I can recreate what is done there, but I have not unravelled the logic. Any suggestions on how to do this? Thanks, Sean
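
    One lightweight way to get this behaviour (a sketch under assumptions, not the Pylons mechanism - the decorator and command names are illustrative) is to keep a module-level registry that functions add themselves to, and dispatch on sys.argv[1]:

        # Sketch only: a minimal self-registering subcommand dispatcher.
        import sys

        COMMANDS = {}

        def command(func):
            """Register a function as a subcommand under its own name."""
            COMMANDS[func.__name__] = func
            return func

        @command
        def abc(args):
            """Do the abc thing; 'args' is the rest of sys.argv."""
            print("abc called with %s" % args)

        @command
        def help(args):
            name = args[0] if args else None
            if name in COMMANDS and COMMANDS[name].__doc__:
                print(COMMANDS[name].__doc__)
            else:
                print("available commands: " + ", ".join(sorted(COMMANDS)))

        def main():
            if len(sys.argv) < 2 or sys.argv[1] not in COMMANDS:
                help([])
            else:
                COMMANDS[sys.argv[1]](sys.argv[2:])

        if __name__ == "__main__":
            main()

    New subcommands can then live in separate modules; importing a module (for example from a package scanned with pkgutil.iter_modules) is enough to register whatever it decorates.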

    Read the article

  • Java Space on Parleys

    - by Yolande Poirier
    Now available! A great selection of JavaOne 2010 and JVM Language Summit 2010 sessions, as well as Oracle Technology Network TechCasts, on the new Java Space on the Parleys website. Oracle partnered with Stephan Janssen, founder of Parleys, to make this happen. The Parleys website offers a user-friendly experience for viewing online content. You can download some of the talks to your desktop or watch them on the go on mobile devices. The current selection is a wealth of expertise from top Java luminaries and Oracle experts. JavaOne 2010 sessions: Best practices for signing code by Sean Mullan · Building software using rich client platforms by Rickard Thulin · Developing beyond the component libraries by Ryan Cuprak · Java API for keyhole markup language by Florian Bachmann · Avoiding common user experience anti-patterns by Burk Hufnagel · Accelerating Java workloads via GPUs by Gary Frost. JVM Languages Summit 2010 sessions: Mixed language project compilation in Eclipse by Andy Clement · Gathering the threads by John Rose · LINQ: language features for concurrency by Neal Gafter · Improvements in OpenJDK useful for JVM languages by Eric Caspole · Symmetric Multilanguage - VM Architecture by Oleg Pliss. Special interviews with Oracle experts on product innovations: Ludovic Champenois, Java EE architect, on GlassFish 3.1 and Java EE · John Jullion-Ceccarelli and Martin Ryzl on NetBeans IDE 6.9. You can choose to listen to a selection of talks using the agenda view and search for related content while watching a presentation. Enjoy the Java content and vote on it!

    Read the article

  • Technology engines for economic growth

    - by Fabian Gradolph
    Oracle took part today in the 4th AMETIC-UPM Summer Course. The presentation was given by José Manuel Peláez, director of Technology Presales at Oracle Ibérica, who offered a thorough review of the current drivers of innovation and development in the technology arena, living up to the title of the course: ICT in the new socio-economic environment: engines of economic recovery and employment. In his presentation, "From technology to innovation: drivers of economic activity", Peláez began by highlighting the strategic importance that technology holds today for any business model. It is no longer only about coping with the economic crisis, although that too, but above all about facing companies' present and future challenges. In that respect, one challenge is essential to address: the high cost that the complexity of their technology systems imposes on organizations. There is an example Oracle uses frequently. If a car were bought the same way companies acquire their information systems, we would buy the body on one side, then the engine, the windows, the gearbox, and so on... and then spend a very interesting stretch of time assembling it all at home. The key question is: why not acquire information systems that are ready to run, just as we buy a car? The IT sector, with Oracle at the forefront, is giving an adequate answer with engineered systems: hardware and software designed and conceived in an integrated way, which considerably reduces the time needed for implementation, the integration costs, and the energy and maintenance costs. The key to this way of acquiring technology, as Peláez explained, is that by reducing complexity and its associated costs, resources are freed up that can be devoted to other needs. According to Gartner figures, of the resources companies invest in IT, 63% goes to pure operations, that is, to keeping the business running. The share of the budget devoted to growing the business is 21%, while only 16% goes to transformation, that is, to innovation. Only by using more efficient technologies - such as engineered systems - with lower associated costs is it feasible to reduce that 63% and devote more resources to growth and innovation. Now, once economic resources have been freed up, where should technology investments be directed? José Manuel Peláez highlighted some essential areas such as Big Data, Cloud Computing, the challenges of mobility, and the need to improve the customer experience, among others. Each of these areas brings new business challenges, but only the companies able to integrate them into their DNA and incorporate them into the heart of their business strategy will be able to differentiate themselves in the competitive landscape of the 21st century. We will be unpacking them on these pages little by little.

    Read the article

  • Visual Studio 2010 Feature Pack and Power Tools released

    - by carlone
    Dear friends, This past Monday the first Visual Studio Feature Pack was released to the web (Release to Web, RTW). It includes the Microsoft Visual Studio 2010 Visualization and Modeling Feature Pack and the Visual Studio 2010 Productivity Power Tools. These are two very powerful tools you can benefit from, above all in terms of efficiency and productivity while programming. Visual Studio 2010 Productivity Power Tools: In simple terms, this is a set of extensions for Visual Studio (Professional edition and above) that improves programmer productivity. Among its many features we can highlight: improvements to tab visualization within Visual Studio, including scrollable tabs, vertical tabs and pinned tabs; a more efficient, searchable dialog for adding references to a project; highlighting of the current line (very useful on high-resolution monitors); HTML Copy, which lets you copy code without losing its formatting; Ctrl + Click Go To Definition, a feature that shows hyperlinks when you press the Control key so you can navigate to a symbol's definition; and code alignment when assigning values. As you can see, there are many features you can take advantage of to improve your programming experience inside Visual Studio 2010. To download the installer and get more information, go to: http://visualstudiogallery.msdn.microsoft.com/en-us/d0d33361-18e2-46c0-8ff2-4adea1e34fef Microsoft Visualization and Modeling Feature Pack: This tool gives you a lot of value for developing UML-based models. Note: this pack is limited to MSDN subscribers and can only be downloaded from the MSDN subscriber downloads site. Visual Studio 2010 Ultimate is a prerequisite for using it. Among its many features we can highlight: generating code from UML diagrams; creating UML diagrams from code (e.g. generating a sequence diagram based on your application's code); generating dependency graphs for web projects; and importing elements from class, sequence and use case models from XMI 2.1 files. Although it is limited to the Ultimate edition, this tool will let software architects and development leads better control and design their applications. To see the documentation and videos and to download the pack, go to: http://msdn.microsoft.com/en-us/vstudio/ff655021.aspx I hope these tools are very useful to you! Regards, Carlos A. Lone. Follow me on Twitter @carloslonegt

    Read the article

  • Oracle 5th Annual Maintenance Summit - Orlando March 22-23, 2011

    - by stephen.slade(at)oracle.com
    It's not too late to register today or tomorrow for this exclusive 'Maintenance Professionals Only" event.  In 4 tracks, 27 customer and partner speakers will present case studies and success stories in these 'no-sell zone' sessions. The take-aways will be worth attending!This "2 in 1" event combines a Customer Showcase featuring Orlando Utilities Commission (OUC) and Maintenance Summit.  OUC - the local municipal utility providing residential, commercial, and industrial customers with clean, reliable, and affordable electric and water services - will open the event with their CIO as keynote speaker, and host tours of their fleet, facility, and power generation operations. Recognized as a green leader, OUC has been the most reliable power provider in Florida the past 9 years due, in large part, to the operational efficiencies of its plant and asset maintenance systems. This Summit will feature breakout session tracks for EBS, JD Edwards, PeopleSoft and Sustainability. Highlights include over 12 Oracle solution demo stations, over 25 interactive breakout sessions, pool-side networking reception with live band, partner exhibit pavilion and special appearance by Sean D. Tucker, Team Oracle Stunt-Pilot!  Dates:                   March 22-23, 2011 Location:             Orlando World Center Marriott, Orlando, Florida Evite:                     http://www.oracle.com/us/dm/h2fy11/65971-nafm10019768mpp191c003-oem-304204.html Highlights:          Keynotes, Oracle Expert Demo Stations, Interactive Breakout Sessions, Networking Reception, Partner Pavilion, Speakers Tracks:                 EBS, JDE, PSFT, Sustainability Tours:                  Orlando Utility Operations, Fleet and Facility Oracle Demo Stations:  Agile, AutoVue, Primavera, MOC/SSDM, Utilities, PIM, PDQ, UCM, On Demand, Business Accelerators, Facilities Work Management, EBS Enterprise Asset Management, PeopleSoft Maintenance Management, Technology, Hardware/Sun. Partner-Sponsors:   Viziya, Global PTM, MiPro, Asset Management Solutions, Venutureforth, Impac Services, EAM Master, LLC, Meridium

    Read the article

  • Displaying images in a ListView using an ImageList in WinForms

    - by Jason Ulloa
    Today we will see how to work with the Windows Forms ListView and ImageList controls to read and display a series of images. Before that, I should say that there are other ways to display images that need only one control, for example a GridView, but that will be another post; for now we will focus on doing it with the controls mentioned above. The first thing we will do is create a new Windows Forms project, in my case using C#, and then add an ImageList control. This control is the one we will use to store all the images once we have read them. If we look at the control, we will see that we have the option of adding the images through the designer, that is, selecting them manually, or adding them through code, which is what we will do. The second step is to add a ListView control to the form; this will be in charge of displaying the images - for now just displaying them, with no other functionality. Now we go to the code-behind and start coding in the form's Load event: First we create a new DirectoryInfo variable, to which we pass the path of our images folder. In this example we use Application.StartupPath to indicate that we are looking inside our own project (in the debug folder for now). DirectoryInfo dir = new DirectoryInfo(Application.StartupPath + @"\images");   Once we have created the reference to the path, we use a loop to get all the images inside the folder we indicated, and then add them to the ImageList to start building our new collection. foreach (FileInfo file in dir.GetFiles()) { try { this.imageList1.Images.Add(Image.FromFile(file.FullName)); }   catch { Console.WriteLine("No es un archivo de imagen"); } }   Once we have filled our ImageList, we assign the ListView its properties to define how the images will be displayed. One thing to keep in mind here is the ImageSize property, since it defines the size the images will have in the ListView when they are shown. this.listView1.View = View.LargeIcon;   this.imageList1.ImageSize = new Size(120, 100);   this.listView1.LargeImageList = this.imageList1;   Finally, with the help of another loop, we go through each of the items our ImageList now holds and add them to the ListView to display them. for (int j = 0; j < this.imageList1.Images.Count; j++) { ListViewItem item = new ListViewItem();   item.ImageIndex = j;   this.listView1.Items.Add(item); } As you can see, even though we use two different controls, it is really simple to display the images in the ListView; in the end the ImageList control just works as a "bridge" that lets us read the images and then show them in another control. To finish, the sample projects: C# VB

    Read the article

  • "Well, Swing took a bit of a beating this week..."

    - by Geertjan
    One unique aspect of the NetBeans community presence at JavaOne 2012 was its usage of large panels to highlight and discuss various aspects (e.g., Java EE, JavaFX, etc) of NetBeans IDE usage and tools. For example, here's a pic of one of the panels, taken by Markus Eisele: Above you see me, Sean Comerford from ESPN.com, Gerrick Bivins from Halliburton, Angelo D'Agnano and Ioannis Kostaras from the NATO Programming Center, and Çagatay Çivici from PrimeFaces. (And Tinu Awopetu was also on the panel but not in the picture!) On one of those panels a remark was made which has kind of stuck with me. Henry Arousell, a member of the "NetBeans Platform Discussion Panel", who works on accounting software in Sweden, together with Thomas Boqvist, who was also at JavaOne, said, a bit despondently, I thought, the following words at the start of the demo of his very professional looking accounting software: "Well, Swing took a bit of a beating this week..." That remark comes in the light of several JavaFX sessions held at JavaOne, together with many sessions from the web and mobile worlds making the argument that the browser, tablet, and mobile platforms are the future of all applications everywhere. However, then I had another look at the list of Duke's Choice Award winners: http://www.oracle.com/us/corporate/press/1854931 OK, there are 10 winners of the Duke's Choice Award this year. Three of them (JDuchess, London Java Community, Student Nokia Developer Group) are not awards for software, but for people or groups. So, that leaves seven awards. Three of them (Hadoop, Jelastic, and Parleys) are, in one way or another, some kind of web-oriented solution, though both Hadoop and Jelastic are broader than that, but are service-oriented solutions, relating to cloud technologies. That leaves four others: NATO air defense software, Liquid Robotics software, AgroSense software, and UNHCR Refugee Registration software. All these are, on the software level, Java desktop solutions that, on the UI layer, make use of Java Swing, together with LuciadMaps (NATO), GeoToolkit (AgroSense), and WorldWind (Liquid Robotics). (And, it went even further than that, i.e., this is not passive usage of Swing but active and motivated: Timon Veenstra, during his AgroSense demo, said "There are far more Swing applications out there than we seem to think. Web developers just make more noise." And, during his Liquid Robotics demo, James Gosling said: "Not everything can be done in HTML.") Seems to me that Java Swing was the enabler of more Duke's Choice Award winners this year than any other UI-oriented Java technology. Now, I'm not going to interpret that one way or another, since I've noticed that interpretations of facts tend to validate some underlying agenda. Take any fact anywhere and you can interpret it to prove whatever opinion you're already holding to be true. Therefore, no interpretation from me. Simply stating the fact that Swing, far from taking a beating during JavaOne 2012, was a more significant user interface enabler of Duke's Choice Award winners than any other Java user interface technology. That's not an interpretation, but a fact.

    Read the article

  • PASS: International Travels

    - by Bill Graziano
    Nihao! One of the largest changes PASS is going through is the expansion outside the US and Canada. We've had international chapters and events in Europe since the early 2000s, but nothing on the scale we're seeing now. Since January 1st there have been 18 SQL Saturday events outside North America and 19 events in North America. We hope to have three international SQLRally events outside the US in FY13 (budget willing). I don't know the exact percentage of chapters outside the US, but it's got to be 50% or higher. We recently started an effort to remake the Board to better reflect the growing global face of PASS. This involves assigning some Board seats to geographic regions. You can ask questions about this in our feedback forum, participate in a Twitter chat or ask questions directly of Board members. You can also email me if you'd like to ask a question directly. We're doing this very slowly and deliberately in hopes that a long communication cycle gives us a chance to address all the issues that our members will raise. After the Summit we passed a budget exception allocating an extra $20,000 for Board members to travel to local events. I think it's important for Board members to visit new areas and talk to more of our members. I sent out an email asking where people had attended events outside their home city. Here's the list I got back: Albuquerque, Amsterdam, Boston, Brisbane, Chicago, Colorado Springs, Columbus, Dallas, Houston, Jacksonville, Las Vegas, London, Louisville, Minneapolis, New York City, Orange County, Orlando, Pensacola, Perth, Philadelphia, Phoenix, Redmond, Seattle, Silicon Valley, Sydney, Tampa Bay, Vancouver, Washington DC and Wellington. (Disclaimer: some of this travel was paid for by employers or Board members themselves, and some of it may have been completed before the Summit. That's still one heck of a list!) The last SQL Saturday event this fiscal year is SQL Saturday Shanghai, and that's one I'm attending. This is our first event in China and is being put on in cooperation with the local Microsoft office. Hopefully this event will be the start of a growing community in China that includes chapters, SQL Saturdays and maybe a SQLRally or two in the future. I'm excited to speak with people that are just starting down this path and to watch this community grow. I encourage you to visit the PASS Global Growth site and read through the material there. This is the biggest change we've made to our governance since I've been on the Board, and you need to understand how it affects you and how it affects the organization. And wish me luck on the 15-hour flight to Shanghai on Friday afternoon. Rob Farley flies from Australia to the US for PASS events multiple times per year and I don't know how he does it so often. I think one of these is going to wipe me out. (And Nihao (knee-how) is Chinese for Hello.)

    Read the article

  • PASS summit 2013. We do not remember days. We remember moments.

    - by Maria Zakourdaev
      "Business or pleasure?" barked the security officer in the Charlotte International Airport. "I’m not sure, sir," I whimpered, immediately losing all courage. "I'm here for the database technologies summit called PASS”. "Sounds boring. Definitely a business trip." Boring?! He couldn’t have been more wrong. If he only knew about the countless meetings throughout the year where I waved my hands at my great boss and explained again and again how fantastic this summit is and how much I learned last year. One by one, the drops of water began eating away at the stone. He finally approved of my trip just to stop me from torturing him. Time moves as slow as a turtle when you are waiting for something. Time runs as fast as a cheetah when you are there. PASS has come...and passed. It’s been an amazing week. Enormous sqlenergy has filled the city, filled the convention center and the surrounding pubs and restaurants. There were awesome speakers, great content, and the chance to meet most inspiring database professionals from all over the world. Some sessions were unforgettable. Imagine a fully packed room with more than 500 people in awed silence, catching each and every one of Paul Randall's words. His tremendous energy and deep knowledge were truly thrilling. No words can describe Rob Farley's unique presentation style, captivating and engaging the audience. When the precious session minutes were over, I could tell that the many random puzzle pieces of information that his listeners knew had been suddenly combined into a clear, cohesive picture. I was amazed as always by Paul White's great sense of humor and his phenomenal ability to explain complicated concepts in a simple way. The keynote by the brilliant Dr. DeWitt from Microsoft in front of the full summit audience of 5000 deeply listening people was genuinely breathtaking. The entire conference throughout offered excellent speakers who inspired me to absorb the knowledge and use it when I got home. To my great surprise, I found that there are other people in this world who like replication as much I do. During the Birds of a Feather Luncheon, SQL Server MVP Ted Krueger was writing a script for replicating the food to other tables. I learned many things at PASS, and not all of them were about SQL. After three summits, this time I finally got the knack of networking. I actually went up and spoke to people, and believe me, that was not easy for an introvert. But this is what the summit is all about. Sqlpeople. They are the ones who make it such an exciting experience. I will be looking forward to the next year. Till then I have my notes and new ideas. How long was the summit? Thousands of unforgettable moments.

    Read the article

  • Bullet Physics implementing custom MotionState class

    - by Arosboro
    I'm trying to make my engine's camera a kinematic rigid body that can collide into other rigid bodies. I've overridden the btMotionState class and implemented setKinematicPos which updates the motion state's tranform. I use the overridden class when creating my kinematic body, but the collision detection fails. I'm doing this for fun trying to add collision detection and physics to Sean O' Neil's Procedural Universe I referred to the bullet wiki on MotionStates for my CPhysicsMotionState class. If it helps I can add the code for the Planetary rigid bodies, but I didn't want to clutter the post. Here is my motion state class: class CPhysicsMotionState: public btMotionState { protected: // This is the transform with position and rotation of the camera CSRTTransform* m_srtTransform; btTransform m_btPos1; public: CPhysicsMotionState(const btTransform &initialpos, CSRTTransform* srtTransform) { m_srtTransform = srtTransform; m_btPos1 = initialpos; } virtual ~CPhysicsMotionState() { // TODO Auto-generated destructor stub } virtual void getWorldTransform(btTransform &worldTrans) const { worldTrans = m_btPos1; } void setKinematicPos(btQuaternion &rot, btVector3 &pos) { m_btPos1.setRotation(rot); m_btPos1.setOrigin(pos); } virtual void setWorldTransform(const btTransform &worldTrans) { btQuaternion rot = worldTrans.getRotation(); btVector3 pos = worldTrans.getOrigin(); m_srtTransform->m_qRotate = CQuaternion(rot.x(), rot.y(), rot.z(), rot.w()); m_srtTransform->SetPosition(CVector(pos.x(), pos.y(), pos.z())); m_btPos1 = worldTrans; } }; I add a rigid body for the camera: // Create rigid body for camera btCollisionShape* cameraShape = new btSphereShape(btScalar(5.0f)); btTransform startTransform; startTransform.setIdentity(); // forgot to add this line CVector vCamera = m_srtCamera.GetPosition(); startTransform.setOrigin(btVector3(vCamera.x, vCamera.y, vCamera.z)); m_msCamera = new CPhysicsMotionState(startTransform, &m_srtCamera); btScalar tMass(80.7f); bool isDynamic = (tMass != 0.f); btVector3 localInertia(0,0,0); if (isDynamic) cameraShape->calculateLocalInertia(tMass,localInertia); btRigidBody::btRigidBodyConstructionInfo rbInfo(tMass, m_msCamera, cameraShape, localInertia); m_rigidBody = new btRigidBody(rbInfo); m_rigidBody->setCollisionFlags(m_rigidBody->getCollisionFlags() | btCollisionObject::CF_KINEMATIC_OBJECT); m_rigidBody->setActivationState(DISABLE_DEACTIVATION); This is the code in Update() that runs each frame: CSRTTransform srtCamera = CCameraTask::GetPtr()->GetCamera(); Quaternion qRotate = srtCamera.m_qRotate; btQuaternion rot = btQuaternion(qRotate.x, qRotate.y, qRotate.z, qRotate.w); CVector vCamera = CCameraTask::GetPtr()->GetPosition(); btVector3 pos = btVector3(vCamera.x, vCamera.y, vCamera.z); CPhysicsMotionState* cameraMotionState = CCameraTask::GetPtr()->GetMotionState(); cameraMotionState->setKinematicPos(rot, pos);

    Read the article

  • Recent ImageMagick on CentOS 6.3

    - by organicveggie
    I'm having a terrible time trying to get a recent version of ImageMagick installed on a CentOS 6.3 x86_64 server. First, I downloaded the RPM from the ImageMagick site and tried to install it. That failed due to missing dependencies: error: Failed dependencies: libHalf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64 libIex.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64 libIlmImf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64 libImath.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64 libltdl.so.3()(64bit) is needed by ImageMagick-6.8.0-4.x86_64 I have libtool-ltdl installed, but that includes libltdl.so.7, not libltdl.so.4. I have a similar problem with libHalf, libIex, libIlmImf and libImath. Typically, you can install OpenEXR to get those dependencies. Unfortunately, CentOS 6.3 includes OpenEXR 1.6.1, which includes ilmbase-devel 1.0.1. And that release of ilmbase-devel includes newer versions of those dependencies: libHalf.so.6 libIex.so.6 libIlmImf.so.6 libImath.so.6 I next tried following the instructions for installing ImageMagick from source. No luck there either. I get a build error: RPM build errors: File not found by glob: /home/sean/rpmbuild/BUILDROOT/ImageMagick-6.8.0-4.x86_64/usr/lib64/ImageMagick-6.8.0/modules-Q16/coders/djvu.* I even re-ran configure to explicitly exclude djvu and I still get the same error. At this point, I'm pulling my hair out. What's the easiest way to get a relatively recent version of ImageMagick (> 6.7) installed on CentOS 6.3? Does someone offer RPMs with dependencies somewhere?

    Read the article

  • Error when starting .Net-Application from ThinApp-Application

    - by user50209
    One of our customers uses SAP through VMware ThinApp. In SAP there is a button that launches a .NET application from a server. When starting the .NET application directly, there is no error. If the user tries to start the application by clicking the button in the ThinApp application, it displays the following errors: Microsoft Visual C++ Runtime Library R6034 An application has made an attempt to load the C runtime library incorrectly. Please contact the application's support team for more information. After clicking "OK" it displays: Microsoft Visual C++ Runtime Library Runtime Error! R6030 - CRT not initialized So, does the customer have to install some components into his ThinApp (if yes, which?) to get things working? Regards, inno ----- [EDIT] ----- @Sean: It's installed the following way: The .exe of the .NET application is on a mapped drive on a server. All clients have the requirements installed (the .NET Framework, for example) and start the .exe from the mapped drive. The ThinApp application tries to start this application and throws the mentioned exceptions. AFAIK there are no entry points configured for this application. What I should also mention is: the .NET application crashes during execution. That means we have a debug mode implemented that shows what the application is doing. The application shows what it's doing and after some steps it crashes. The interesting point is: it's a .NET application, not a C++ application.

    Read the article

  • How can I see which processes are making my server slow?

    - by Steven
    All the websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load sometimes. There have been no changes to the sites for the last couple of months. How can I see which processes are making my server slow? My environment looks like this: Server: VPS running Linux 2.8.x; OS: CentOS 5; Management interface: Plesk 9.x; Memory: 1024MB; CPU: 2.2GHz. My websites run on PHP and MySQL. I finally managed to connect (PuTTY + SSH) to my server. Running top did not show any processes using more than 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what could be wrong. Here are some answers to Sean Kimball: I don't run mail services on my server yet. There are no specific bandwidth peaks. Prefork looks like this: <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule>. Not sure what you mean by the DNS question, but I think it's up and running. There are no processes running wild. Where can I find the average load? Telnet is disabled and I have to log in using SSH :)
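
    For the question about load averages, a few quick checks from the SSH session (standard Linux commands, not from the original thread):

        uptime              # the three trailing numbers are the 1/5/15-minute load averages
        cat /proc/loadavg   # the same figures straight from the kernel
        vmstat 5            # CPU, memory and swap activity sampled every 5 seconds
        iostat -x 5         # per-device disk utilisation (needs the sysstat package)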

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted. 2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react. So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_Virtual FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY   Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Whoosh: PASS Board Year 1, Q4

    - by Denise McInerney
    "Whoosh". That's the sound the last quarter of 2012 made as it rushed by. My first year on the PASS Board is complete, and the last three months of it were probably the busiest. PASS Summit 2012 Much of October was devoted to preparing for Summit. Every Board  member, HQ staffer and dozens of volunteers were busy in the run-up to our flagship event. It takes a lot of work to put on the Summit. The community meetings,  first-timers program, keynotes, sessions and that fabulous Community Appreciation party are the result of many hours of preparation. Virtual Chapters at the Summit With a lot of help from Karla Landrum, Michelle Nalliah, Lana Montgomery and others at HQ the VCs had a good presence at Summit. We started the week with a VC leaders meeting. I shared some information about the activities and growth during the first part of the year.   From January - September 2012: The number of VCs increased from 14 to 20 VC membership  grew from 55,200 to 80,100 Total attendance at VC meetings increased from 1,480 to 2,198 Been part of PASS Global Growth with language-based VC- including Chinese, Spanish and Portuguese. We also heard from some VC leaders and volunteers. Ryan Adams (Performance VC) shared his tips for successful marketing of VC events. Amy Lewis (Business Intelligence VC) described how the BI chapter has expanded to support PASS' global growth by finding volunteers to organize events at times that are convenient for people in Europe and Australia. Felipe Ferreira (Portuguese language VC) described the experience of building a user group first in Brazil, then expanding to work with Portuguese-speaking data professionals around the world. Virtual Chapter leaders and volunteers were in evidence throughout Summit, beginning with the Welcome Reception. For the past several years VCs have had an organized presence at this event, signing up new members and advertising their meetings. Many VC leaders also spent time at the Community Zone. This new addition to the Summit proved to be a vibrant spot were new members and volunteers could network with others and find out how to start a chapter or host a SQL Saturday. Women In Technology 2012 was the 10th WIT Luncheon to be held at Summit. I was honored to be asked to be on the panel to discuss the topic "Where Have We Been and Where are We Going?" The PASS community has come a long way in our understanding of issues facing women in tech and our support of women in the organization. It was great to hear from panelists Stefanie Higgins and Kevin Kline who were there at the beginning as well as Kendra Little and Jen Stirrup who are part of the progress being made by women in our community today. Bylaw Changes The Board spent a good deal of time in 2012 discussing how to move our global growth initiatives forward. An important component of this is a proposed change to how the Board is elected with some seats representing geographic regions. At the end of December we voted on these proposed bylaw changes which have been published for review. The member review and feedback is open until February 8. I encourage all members to review these changes and send any feedback to [email protected]  In addition to reading the bylaws, I recommend reading Bill Graziano's blog post on the subject. Business Analytics Conference At Summit we announced a new event: the PASS Business Analytics Conference. The inaugural event will be April 10-12, 2013 in Chicago. The world of data is changing rapidly. 
More and more businesses want to extract value and insight from their data. Data professionals who provide these insights or enable others to do so are in demand. The BA Conference offers expert content on predictive analytics, data exploration and visualization, content delivery strategies and more. By holding this new event PASS is participating in important discussions happening in our industry, offering our members more educational value and reaching out to data professionals who are not currently part of our organization. New Year, New Portfolio In addition to my work with the Virtual Chapters I am also now responsible for the 24 Hours of PASS portfolio. Since the first 24HOP of 2013 is scheduled for January 30 we started the transition of the portfolio work from Rob Farley to me right after Summit. Work immediately started to secure speakers for the January event. We have also been evaluating webinar platforms that can be used for 24HOP as well as the Virtual Chapters. Next Up 24 Hours of PASS: Business Analytics Edition will be held on January 30. I'll be there and will moderate one or two sessions. The 24HOP topics are a sneak peek into the type of content that will be offered at the Business Analytics Conference. I hope to see some of you there. The Virtual Chapters have hit the ground running in 2013; many of them have events scheduled. The Application Development VC is getting restarted  and a new Business Analytics VC will be starting soon. Check out the lineup and join the VCs that interest you. And watch the Events page and Connector for announcements of upcoming meetings. At the end of January I will be attending a Board meeting in Seattle, and February 23 I will be at SQL Saturday #177 in Silicon Valley.

    Read the article

  • Custom ASP.Net MVC 2 ModelMetadataProvider for using custom view model attributes

    - by SeanMcAlinden
    There are a number of ways of implementing a pattern for using custom view model attributes; the following is similar to something I’m using at work, and it works pretty well. The classes I’m going to create are really simple:

    1. An abstract base attribute.
    2. A custom ModelMetadata provider which will derive from the DataAnnotationsModelMetadataProvider.

    Base Attribute: MetadataAttribute

        using System;
        using System.Web.Mvc;

        namespace Mvc2Templates.Attributes
        {
            /// <summary>
            /// Base class for custom MetadataAttributes.
            /// </summary>
            public abstract class MetadataAttribute : Attribute
            {
                /// <summary>
                /// Method for processing custom attribute data.
                /// </summary>
                /// <param name="modelMetaData">A ModelMetaData instance.</param>
                public abstract void Process(ModelMetadata modelMetaData);
            }
        }

    As you can see, the class simply has one method – Process. Process accepts the ModelMetadata, which allows any derived custom attribute to set properties on the model metadata and add items to its AdditionalValues collection.

    Custom Model Metadata Provider

    For a quick explanation of the ModelMetadata and how it fits into the MVC 2 framework: it is basically a set of properties that are usually set via attributes placed above properties on a view model, for example the ReadOnly and HiddenInput attributes. When EditorForModel, DisplayForModel or any of the other EditorFor/DisplayFor methods are called, the ModelMetadata information is used to determine how to display the properties. All of the information available within the model metadata is also available through ViewData.ModelMetadata.

    The following class derives from the DataAnnotationsModelMetadataProvider built into the MVC 2 framework. I’ve overridden the CreateMetadata method in order to process any custom attributes that may have been placed above a property in a view model.

    CustomModelMetadataProvider

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;
        using Mvc2Templates.Attributes;

        namespace Mvc2Templates.Providers
        {
            public class CustomModelMetadataProvider : DataAnnotationsModelMetadataProvider
            {
                protected override ModelMetadata CreateMetadata(
                    IEnumerable<Attribute> attributes,
                    Type containerType,
                    Func<object> modelAccessor,
                    Type modelType,
                    string propertyName)
                {
                    var modelMetadata = base.CreateMetadata(attributes, containerType, modelAccessor, modelType, propertyName);

                    attributes.OfType<MetadataAttribute>().ToList().ForEach(x => x.Process(modelMetadata));

                    return modelMetadata;
                }
            }
        }

    As you can see, once the model metadata is created through the base method, a check is made for any attributes deriving from our new abstract base attribute MetadataAttribute; the Process method is then called on each one found, with the model metadata for the property passed in.

    Hooking it up

    The last thing you need to do to hook it up is set the new CustomModelMetadataProvider as the current ModelMetadataProvider. This is done within the Global.asax Application_Start method:

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();

            RegisterRoutes(RouteTable.Routes);

            ModelMetadataProviders.Current = new CustomModelMetadataProvider();
        }

    In my next post, I’m going to demonstrate a cool custom attribute that turns a textbox into an ajax-driven AutoComplete text box. Hope this is useful. Kind Regards, Sean McAlinden.
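    As a quick illustration of the pattern above, a derived attribute could look something like the following sketch. This is a hypothetical example – the WatermarkAttribute name and the "Watermark" key are invented here for illustration (it is not the AutoComplete attribute mentioned in the post).

        using System.Web.Mvc;

        namespace Mvc2Templates.Attributes
        {
            // Hypothetical derived attribute: stores a watermark hint for a property.
            public class WatermarkAttribute : MetadataAttribute
            {
                private readonly string text;

                public WatermarkAttribute(string text)
                {
                    this.text = text;
                }

                // Adds the watermark text to the metadata so a custom editor template
                // can read it from ViewData.ModelMetadata.AdditionalValues["Watermark"].
                public override void Process(ModelMetadata modelMetaData)
                {
                    modelMetaData.AdditionalValues["Watermark"] = text;
                }
            }
        }

    A view model property could then be decorated with [Watermark("e.g. name@example.com")], and a custom editor template could check ViewData.ModelMetadata.AdditionalValues for the "Watermark" key before rendering the input.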

    Read the article

  • Oracle Fusion Procurement Designed for User Productivity

    - by Applications User Experience
    Sean Rice, Manager, Applications User Experience

    Oracle Fusion Procurement Design Goals

    In Oracle Fusion Procurement, we set out to create a streamlined user experience based on the way users do their jobs. Oracle has spent hundreds of hours with customers to get to the heart of what users need to do their jobs. By designing a procurement application around user needs, Oracle has crafted a user experience that puts the tools that people need at their fingertips.

    In Oracle Fusion Procurement, the user experience is designed to provide the user with information that will drive navigation rather than requiring the user to find information. One of our design goals for Oracle Fusion Procurement was to reduce the number of screens and clicks that a user must go through to complete frequently performed tasks. The requisition process in Oracle Fusion Procurement (Figure 1) illustrates how we have streamlined workflows. Oracle Fusion Self-Service Procurement brings together billing metrics, descriptions of the order, justification for the order, a breakdown of the components of the order, and the amount—all in one place. Previous generations of procurement software required the user to navigate to several different pages to gather all of this information. With Oracle Fusion, everything is presented on one page. The result is that users can complete their tasks in less time. The focus is on completing the work, not finding the work.

    Figure 1. Creating a requisition in Oracle Fusion Self-Service Procurement is a consumer-like shopping experience.

    Will Oracle Fusion Procurement Increase Productivity?

    To answer this question, Oracle sought to model how two experts working head to head—one in an existing enterprise application and another in Oracle Fusion Procurement—would perform the same task. We compared Oracle Fusion designs to corresponding existing applications using the keystroke-level modeling (KLM) method. This method is based on years of research at universities such as Carnegie Mellon and research labs like Xerox Palo Alto Research Center. The KLM method breaks tasks into a sequence of operations and uses standardized models to evaluate all of the physical and cognitive actions that a person must take to complete a task: what a user would have to click, how long each click would take (not only the physical action of the click or typing of a letter, but also how long someone would have to think about the page when taking the action), and user interface changes that result from the click. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time is calculated. Task times from the model enable researchers to predict end-user productivity.

    For the study, we focused on modeling procurement business process task flows that were considered business or mission critical: high-frequency tasks and high-value tasks. The designs evaluated encompassed tasks that are currently performed by employees, professional buyers, suppliers, and sourcing professionals in advanced procurement applications. For each of these flows, we created detailed task scenarios that provided the context for each task, conducted task walk-throughs in both the Oracle Fusion design and the existing application, analyzed and documented the steps and actions required to complete each task, and applied standard time estimates to the operators in each task to estimate overall task completion times.

    The Results

    The KLM method predicted that the Oracle Fusion Procurement designs would result in productivity gains in each task, ranging from 13 percent to 38 percent, with an overall productivity gain of 22.5 percent. These performance gains can be attributed to a reduction in the number of clicks and screens needed to complete the tasks. For example, creating a requisition in Oracle Fusion Procurement takes a user through only two screens, while ordering the same item in a previous version requires six screens to complete the task.

    Modeling user productivity has resulted not only in advances in Oracle Fusion applications, but also in advances in other areas. We leveraged lessons learned from the KLM studies to establish products like Oracle E-Business Suite (EBS). New user experience features in EBS 12.1.3, such as navigational improvements to the main menu, a Google-type search using auto-suggest, embedded analytics, and an in-context list of values tool help to reduce clicks and improve efficiency. For more information about KLM, refer to the Measuring User Productivity blog.
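    To make the arithmetic behind KLM concrete, the sketch below sums standard operator times for a single step. The operator values are commonly cited approximations from the HCI literature, not figures from the Oracle study, and the task sequence is invented purely for illustration.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class KlmSketch
        {
            // Approximate KLM operator times in seconds (commonly cited values).
            static readonly Dictionary<char, double> OperatorTimes = new Dictionary<char, double>
            {
                { 'K', 0.28 }, // keystroke or button press
                { 'P', 1.10 }, // point at a target with the mouse
                { 'H', 0.40 }, // move hands between keyboard and mouse
                { 'M', 1.35 }  // mental preparation
            };

            // Sums the standard time estimate for each operator in the sequence.
            static double EstimateSeconds(string operators)
            {
                return operators.Sum(op => OperatorTimes[op]);
            }

            static void Main()
            {
                // Hypothetical step: think, point at a field, click it, then type five characters.
                string clickAndType = "MPK" + new string('K', 5);
                Console.WriteLine("Estimated time: {0:F2} seconds", EstimateSeconds(clickAndType));
            }
        }

    Comparing the summed estimate for the same task in an existing application and in the new design is what produces the predicted productivity gain.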

    Read the article

  • Building a Mafia…TechFest Style

    - by David Hoerster
    It’s been a few months since I last blogged (not that I blog much to begin with), but things have been busy. We all have a lot going on in our lives, but I’ve had one item that has taken up a surprising amount of time – Pittsburgh TechFest 2012. After the event, I went through some minutes of the first meetings for TechFest, and I started to think about how it all came together. I think what inspired me the most about TechFest was how people from various technical communities were able to come together and build and promote a common event. As a result, I wanted to blog about this to show that people from different communities can work together to build something that benefits all communities. (Hopefully I've got all my facts straight.)

    TechFest started as an idea Eric Kepes and I had when we were planning our next Pittsburgh Code Camp, probably in the summer of 2011. Our Spring 2011 Code Camp was a little different because we had a great infusion of some folks from the Pittsburgh Agile group (especially with a few speakers from LeanDog). The line-up was great, but we felt our audience wasn’t as broad as it should have been. We thought it would be great to somehow attract other user groups around town and have a big, polyglot conference.

    We started contacting leaders from Pittsburgh’s various user groups. Eric and I split up the ones that we knew about, and we just started making contacts. Most of the people we started contacting had never heard of us, nor we of them. But we all had one thing in common – we ran user groups whose primary goal is educating our members to make them better at what they do.

    Amazingly, and I say this because I wasn’t sure what to expect, we started getting some interest from the various leaders. One leader, Greg Akins, is, in my opinion, Pittsburgh’s poster boy for the polyglot programmer. He’s helped us in the past with .NET Code Camps, is a Java developer (and leader in Pittsburgh’s Java User Group), works with Ruby, and I’m sure a handful of other languages. He helped make some e-introductions to other user group leaders, and the whole thing just started to snowball. Once we realized we had enough interest from the user group leaders, we decided not to have a Fall Code Camp and instead focus on this new entity.

    Flash-forward to October of 2011. With the help of Jeremy Jarrell (Pittsburgh Agile leader), I set up a meeting with the leaders of many of Pittsburgh’s technical user groups. We had representatives from 12 technical user groups (Python, JavaScript, Clojure, Ruby, PittAgile, jQuery, PHP, Perl, SQL, .NET, Java and PowerShell) – 14 people. We likened it to a scene from a Godfather movie where the heads of all the families come together to make some deal. As a result, the name “TechFest Mafia” was born and kind of stuck.

    Over the next 7 months or so, we had our starts and stops. There were moments where I thought this event would not happen, either because we wouldn’t have the right mix of topics (was I off there!), or enough people register (OK, I was wrong there, too!), or find an appropriate venue (hmm…wrong there, too), or find enough sponsors to help support the event (wow…not doing so well). Overall, everything fell into place with a lot of hard work from Eric, Jen, Greg, Jeremy, Sean, Nicholas, Gina and probably a few others that I’m forgetting. We also had a bit of luck, too.

    But in the end, the passion we had for putting together an event that was really about making ourselves better at what we do paid off. I’ve never been more excited about a project coming together than I have been with Pittsburgh TechFest 2012. From the moment the first person arrived at the event to the final minutes of my closing remarks (where I almost lost my voice – I ended up being diagnosed with bronchitis the next day!), it was an awesome event. I’m glad to have been part of bringing something like this to Pittsburgh…and I’m looking forward to Pittsburgh TechFest 2013. See you there!

    Read the article

  • Parse.com REST API in Java (NOT Android)

    - by Orange Peel
    I am trying to use the Parse.com REST API in Java. I have gone through the 4 solutions given here https://parse.com/docs/api_libraries and have selected Parse4J. I imported the source into Netbeans, along with the following libraries:

        org.slf4j:slf4j-api:jar:1.6.1
        org.apache.httpcomponents:httpclient:jar:4.3.2
        org.apache.httpcomponents:httpcore:jar:4.3.1
        org.json:json:jar:20131018
        commons-codec:commons-codec:jar:1.9
        junit:junit:jar:4.11
        ch.qos.logback:logback-classic:jar:0.9.28
        ch.qos.logback:logback-core:jar:0.9.28

    Then I ran the example code from https://github.com/thiagolocatelli/parse4j:

        Parse.initialize(APP_ID, APP_REST_API_ID); // I replaced these with mine
        ParseObject gameScore = new ParseObject("GameScore");
        gameScore.put("score", 1337);
        gameScore.put("playerName", "Sean Plott");
        gameScore.put("cheatMode", false);
        gameScore.save();

    That complained that org.apache.commons.logging was missing, so I downloaded that and imported it. When I ran the code again, I got:

        Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;Ljava/lang/Throwable;)V
            at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:120)
            at org.apache.http.client.protocol.RequestAddCookies.process(RequestAddCookies.java:122)
            at org.apache.http.protocol.ImmutableHttpProcessor.process(ImmutableHttpProcessor.java:131)
            at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:193)
            at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:85)
            at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
            at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
            at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
            at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
            at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
            at org.parse4j.command.ParseCommand.perform(ParseCommand.java:44)
            at org.parse4j.ParseObject.save(ParseObject.java:450)

    I could probably fix this with another import, but I suspect something else would then pop up. I tried the other libraries with similar results, missing a bunch of libraries. Has anyone actually used the REST API successfully in Java? If so, I would be grateful if you shared which library/libraries you used and anything else required to get it going successfully. I am using Netbeans. Thanks.

    Read the article

< Previous Page | 19 20 21 22 23 24 25  | Next Page >