Search Results

Search found 28476 results on 1140 pages for 'information architecture'.


  • The Healthy Tension That Mobility Creates

    - by Kathryn Perry
    A guest post by Hernan Capdevila, Vice President, Oracle Fusion Apps

    In my previous post, I talked about the value of the mobile revolution for businesses and workers. Now let me put on a different hat and view the world from the IT department and the IT leader's viewpoint. The IT leader has different concerns – around privacy, potential liability from information leakage, and intellectual property protection. These concerns and the leader's goals create a healthy tension with the users. For example, effective device management becomes a must-have for the IT leader, especially if you look at the Android ecosystem as an example. There are benefits to the Android strategy, but there are also drawbacks, such as the lack of uniformity – in device management, in operating systems, and in the application taxonomy and capabilities. If you compare Android to iOS, Apple's operating system, iOS is more unified, more streamlined, and easier to manage. In either case, this is where mobile device management in the cloud makes good sense. I don't think IT departments should be hosting device management and managing that complexity. It should be a cloud service, and I predict it's going to be key for our customers.

    A New Focus for IT Departments

    So where does that leave the IT departments? I think their future is in governance, which is a more strategic play than a tactical one. Device management is tactical and it's the "now" topic. But the mobile phenomenon, if you will, is going to drive significant change in how IT plans, hosts, and deploys enterprise applications. For example, opening up enterprise applications to mobile users presents some challenges unless you deploy more complicated network topologies, such as virtual private networks and threat protection technology. If you really want employees to be mobile, you need to remove those kinds of barriers. But I don't think IT departments want to wrestle with exposing their private enterprise data centers and being responsible for hosted business applications – applications that they are, in a sense, making vulnerable to the public world. This opens up a significant need and a significant driver for cloud applications. However, it's not just about taking away the complexity – it's also about taking away the responsibility. Why should every business have to carry the responsibility and figure out all the nuts and bolts of how to protect itself in this public, mobile world? When you use apps in the cloud, either your vendor or your hosting partner should have figured all that out. They need to assure the business that they are adhering to all sorts of security and compliance regulations, so users can be connected and have access to information anywhere, anytime.

    More Ideas and Better Service

    What's more interesting is the world of possibilities that the connected, cloud-based world enables. I believe that the one-size-fits-all, uber-best-practices, lowest-common-denominator capabilities will go away. IT will now be able to solve very specific business challenges for the different corporate functions it serves. In this new world, IT will play a key role in enabling different organizations within a company to be best in class, delivering greater value to the line-of-business managers. IT will actually help to differentiate. The net result is a more agile workforce and business, because each department is getting work done its own way.

    Read the article

  • On The Road with the HR Community

    - by Kathryn Perry
    A guest post by Steve Boese, Director, Talent Strategy, Oracle

    One of the best ways to connect with Human Resources leaders, and to get a feel for what is on their minds, is to get out of the office and hit the road. I've had the great honor to attend and/or present at a number of events recently, including the massive SHRM Annual Conference, the HR Florida Conference, and Taleo World in Chicago. These events, and many others, offer solution providers, talent management professionals, business leaders, and even more casual observers of the Human Resources field tremendous opportunities to connect, to share information, and to learn from each other. Attending the conferences also gives people a sense of how they can improve and enhance their skills and knowledge, learn about the latest workforce technologies, and bring new and innovative ideas back to their organizations. And sure, the parties and conference swag can be pretty nice as well!

    If you attend a few of these industry events, one of the most beneficial by-products you can emerge with – whether you are on the front lines in HR at your organization or, as we are at Oracle, in the business of developing and delivering innovative and impactful technology solutions to our customers – is a larger sense of the big ideas and major trends, concerns, and challenges facing organizations all across the landscape, and a better understanding of how your strategies and solutions can be improved with this greater perspective. So what are HR folks discussing and debating? What questions and problems keep them up at night? What are the bloggers and the large community of HR social media enthusiasts buzzing about? From my perspective, the common themes you see over and again across the HR community break down, broadly, into three main areas:

    Talent Attraction - How can we locate, attract, recruit, and hire the best talent possible? What new strategies, approaches, and technologies can help us in this critically important area? What role do external social networks like LinkedIn, Facebook, and Twitter play in the increasingly competitive search for talent?

    Talent Retention - How can we make sure to keep that talent on our team? What engagement, development, recognition, and compensation tools can help us in this regard? How can we continue to be (or become) an employer of choice? What is our unique and compelling employer value proposition?

    Talent Empowerment - How can we put our employees in the best position to succeed? What can we do to better align our talent with the organization's mission and goals, while simultaneously giving the best and most driven individuals a clear path to achieve their career goals and aspirations? How can new technologies, particularly social and collaboration tools, help in this area?

    While these are the 'big themes' I have seen this year, they are not really new, nor are they likely to fundamentally change in the next year or two. I think the reason is that at the core of any successful enterprise is a collection of smart, interested, engaged, challenged, and empowered people. That was likely the case 10 or 20 years ago, and it will probably be the case 10 or 20 years into the future.

    But what has changed – as you can see by simply following the Twitter backchannel for an event and by reading some of the many fantastic HR blogs out there – is that the ability of HR professionals, along with technology solution providers like Oracle, to connect, to openly share information with each other, and to make each other better in the process (and to create new, improved, and more innovative solutions) has never been greater. And I think it is with this unprecedented opportunity to connect with other members of the community that HR professionals will be better equipped to help their organizations attract, retain, and empower their teams. We at Oracle HCM look forward to continuing to meet, engage, and connect with the HR community in the coming months. Until then – follow us on Twitter and Facebook.

    Read the article

  • Upgrade issues due to broken "dependency problems prevent configuration of linux-image-generic" error

    - by tsukune1791
    Okay, I've recently upgraded from 11.10 to 12.04 and I've been having some issues. I don't know if it's a bug or not, but I thought I would submit it here. Here's a little background: I ran the distro upgrade from the update manager and got a couple of errors that I didn't catch. The computer restarted, and when I logged in, the Launcher and the top bar of the Ubuntu desktop didn't load. While it was trying to load, a couple of error messages came up – I think they were from "apport" – saying they couldn't send the bug information for some reason. I believe it said something's wrong with my internet connection, but nothing's wrong with it. Anyway, I tried running some things in the terminal, namely:

    ```
    sudo apt-get -f install
    sudo apt-get upgrade
    sudo apt-get dist-upgrade
    ```

    and keep getting the following errors:

    ```
    dustin@marceau-laptop:~$ sudo apt-get dist-upgrade
    [sudo] password for dustin:
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    4 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? Y
    Setting up initramfs-tools (0.99ubuntu13) ...
    update-initramfs: deferring update (trigger activated)
    Setting up linux-image-3.2.0-24-generic (3.2.0-24.37) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/dkms 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/pm-utils 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/update-notifier 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    run-parts: executing /etc/kernel/postinst.d/zz-runlilo 3.2.0-24-generic /boot/vmlinuz-3.2.0-24-generic
    Fatal: No images have been defined.
    run-parts: /etc/kernel/postinst.d/zz-runlilo exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-24-generic.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-24-generic (--configure):
     subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of linux-image-generic:
     linux-image-generic depends on linux-image-3.2.0-24-generic; however:
      Package linux-image-3.2.0-24-generic is not configured yet.
    dpkg: error processing linux-image-generic (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of linux-generic:
     linux-generic depends on linux-image-generic (= 3.2.0.24.26); however:
      Package linux-image-generic is not configured yet.
    dpkg: error processing linux-generic (--configure):
     dependency problems - leaving unconfigured
    Processing triggers for initramfs-tools ...
    No apport report written because the error message indicates its a followup error from a previous failure.
    No apport report written because the error message indicates its a followup error from a previous failure.
    update-initramfs: Generating /boot/initrd.img-3.2.0-24-generic
    Fatal: No images have been defined.
    run-parts: /etc/initramfs/post-update.d//runlilo exited with return code 1
    dpkg: error processing initramfs-tools (--configure):
     subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     linux-image-3.2.0-24-generic
     linux-image-generic
     linux-generic
     initramfs-tools
    localepurge: Disk space freed in /usr/share/locale: 0 KiB
    localepurge: Disk space freed in /usr/share/man: 0 KiB
    localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB
    localepurge: Disk space freed in /usr/share/omf: 0 KiB
    localepurge: Disk space freed in /usr/share/doc/kde/HTML: 0 KiB
    Total disk space freed by localepurge: 0 KiB
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    ```

    And my Ubuntu desktop is still not working. I can log into GNOME and Ubuntu 2D, but the Launcher, I think it's called, doesn't load. Can someone help me fix these errors, or point me in the right direction to get them fixed? It is much appreciated.
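    The "Fatal: No images have been defined." message comes from lilo, whose postinst hook (zz-runlilo) aborts the kernel configuration when /etc/lilo.conf contains no image= stanza. A sketch of one common way out, assuming the machine actually boots with GRUB and lilo was pulled in by accident – verify your bootloader before trying this:

    ```
    # Confirm GRUB is actually present before removing lilo (assumption, not a given)
    ls /boot/grub

    # Remove the lilo package so its failing hook no longer runs
    sudo apt-get purge lilo

    # Let dpkg finish configuring the interrupted kernel packages
    sudo apt-get -f install
    sudo dpkg --configure -a
    ```

    If you really do boot with lilo, the alternative is to add an image= entry for /boot/vmlinuz-3.2.0-24-generic to /etc/lilo.conf and run sudo lilo instead.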

    Read the article

  • Finally! Entity Framework working in fully disconnected N-tier web app

    - by oazabir
    Entity Framework was supposed to solve the problems of LINQ to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, it made things even more difficult to use and to hack for n-tier scenarios. It sits somewhere halfway between a fully disconnected ORM and a fully connected ORM like LINQ to SQL. Some useful features of LINQ to SQL are gone – like automatic deferred loading. If you try to do a simple select with a join, or an insert, update, or delete in a disconnected architecture, you will realize that you not only need to make fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. In this article I will show you how I added custom CRUD functions on top of EF's ObjectContext to make it finally work well in a fully disconnected n-tier web application (my open source Web 2.0 AJAX portal – Dropthings), and how I produced a 100% unit-testable, fully n-tier-compliant data access layer following the repository pattern. http://www.codeproject.com/KB/linq/ef.aspx In .NET 4.0, most of the problems are solved, but not all. So you should read this article even if you are coding in .NET 4.0. Moreover, there's enough insight here to help you troubleshoot EF-related problems. You might think, "Why bother using EF when LINQ to SQL is doing well enough for me?" LINQ to SQL is not going to get any more innovation from Microsoft. Entity Framework is the future of the persistence layer in the .NET Framework. All the innovation is happening in the EF world only, which is frustrating. There's a big jump in EF 4.0. So you should plan to migrate your L2S projects to EF soon.
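    To give a flavor of the plumbing involved, here is a minimal sketch of a disconnected update on top of ObjectContext in .NET 4.0: the entity arrives detached from a higher tier and has to be attached and marked dirty by hand. DropthingsEntities and Customer are hypothetical stand-ins for a generated context and entity; the article's actual repository layer wraps considerably more than this:

    ```csharp
    using System.Data;          // EntityState
    using System.Data.Objects;  // ObjectContext

    public class CustomerRepository
    {
        // Persist changes to an entity that was fetched, shipped to the
        // client tier, edited there, and sent back - i.e. fully disconnected.
        public void Update(Customer detachedCustomer)
        {
            using (var context = new DropthingsEntities()) // hypothetical generated context
            {
                // Attach the detached entity to the new context...
                context.AttachTo("Customers", detachedCustomer);

                // ...and tell the state manager every property is dirty,
                // since no change tracking survived the round trip.
                context.ObjectStateManager.ChangeObjectState(
                    detachedCustomer, EntityState.Modified);

                context.SaveChanges();
            }
        }
    }
    ```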

    Read the article

  • Visual Studio Talk Show #119 is now online - When and in what context is "adequate" adequate?

    - by guybarrette
    http://www.visualstudiotalkshow.com Joel Quimper: When and in what context is "adequate" adequate? We talk with Joel Quimper about development practices and the bad habit of trying to abstract and generalize everything. An application delivers real value only when it is actually used by users. So where do you draw the line between over-design, extensibility, and reusability? Joel Quimper is an architecture advisor at Microsoft Canada. He works mainly with architects at large enterprises in Eastern Canada, helping their organizations reach their full potential. Joel has extensive experience designing service-oriented solutions using web services, and he is passionate about interoperability with the .NET platform. Before joining Microsoft, he spent 10 years at IBM Canada in several roles, most recently as a WebSphere integration architect, and worked with many clients on successful SOA implementations.

    Read the article

  • disks not ready in array cause mdadm to force initramfs shell

    - by RaidPinata
    Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue, but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot, only three of the 10 devices in the RAID are available; the others become available later in the boot process. Currently, unless I go through initramfs, I can't get the system to boot – it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs, the system boots fine and my array is mounted exactly where I intend it to be. Here are the pertinent files, as near as I can tell. Ask me if you want to see anything else.

    ```
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # auto-create devices with Debian standard permissions
    # CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays

    # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700
    # by mkconf $Id$
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
    ```

    Here is fstab:

    ```
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    # / was on /dev/sdc2 during installation
    UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sdc3 during installation
    UUID=c4988662-67f3-4069-a16e-db740e054727 none swap sw 0 0
    # mount large raid device on /data
    /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0
    ```

    Output of cat /proc/mdstat:

    ```
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1]
          23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

    unused devices: <none>
    ```

    Here is the output of mdadm --detail --scan --verbose:

    ```
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
       devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd
    ```

    Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online. edit: changed title to properly reflect situation
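    Since the array is not the root filesystem, one avenue is simply to give the member disks time to appear before md assembly runs. A sketch of two such workarounds, assuming GRUB 2 and initramfs-tools as on stock 12.04 – the 60-second value, loop counts, and script name are arbitrary, not a verified fix:

    ```
    # Option 1 (the blunt instrument often suggested): ask the kernel to wait.
    # Edit /etc/default/grub by hand so it reads
    #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=60"
    # then regenerate the boot config:
    sudo update-grub

    # Option 2: a hypothetical initramfs script that polls until the disks exist
    sudo tee /etc/initramfs-tools/scripts/local-top/wait-for-raid <<'EOF'
    #!/bin/sh
    case "$1" in prereqs) echo ""; exit 0;; esac
    # crude: loop until the 11 array members plus the OS disk (12 total) appear
    for i in $(seq 1 30); do
        [ "$(ls /dev/sd[a-l] 2>/dev/null | wc -l)" -ge 12 ] && break
        sleep 2
    done
    EOF
    sudo chmod +x /etc/initramfs-tools/scripts/local-top/wait-for-raid
    sudo update-initramfs -u
    ```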

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80 – or, if secure connections are enabled (SSL/TLS), TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL-encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed", and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are multiplexed onto a single well-known TCP port: port 443, the HTTPS port. This is also known as single-port firewall traversal. The technique takes advantage of the fact that, as a well-known service, port 443 is usually open, allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ: it accepts, authenticates, and terminates incoming client connections, then re-encrypts and proxies them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in the network; it connects only to the gateway, which takes care of those internal network details.

    The Secure Gateway supports the same single-port firewall capability as Firewall Forwarding, but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over Firewall Forwarding for single-port firewall traversal, due to its architectural advantages, greater flexibility, and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • The Developers Conference 2012: Presentation about CEP & BAM

    - by Ricardo Ferreira
    This year I again had the pleasure of being one of the speakers at the TDC ("The Developers Conference") event; I have spoken at this event for three years now. This year, the main theme of the SOA track was EDA ("Event-Driven Architecture"), and I decided to deliver a comprehensive presentation about one of my favorite personal subjects: real-time processing using Complex Event Processing. The theme of the presentation was "Business Intelligence in Real-time using CEP & BAM", and I would like to share here the presentation that I gave. The material is in Portuguese, since it was a Brazilian event that took place in São Paulo. Since my presentation includes a lot of videos, I decided to share the material as a YouTube video, so you can pause, rewind, and play it again as many times as you want. I strongly recommend that before you start watching, you change the video quality settings to 1080p High Definition.

    Read the article

  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but common experience that sometimes you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works, and ramps up the complexity a whole lot.

    For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally got everything up and running in a nice, fairly simple, easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis (see the sketch below).

    The details of that scenario are not too important – indeed it's not a great example, as it was quite foreseeable, and people who have written a lot of apps of this type for this type of environment might have anticipated it – except that it illustrates how one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project.

    What strategies do you have for dealing with these types of functional changes whose need arises – often as a result of environmental factors rather than specification change – later on in the development process or as a result of testing? How do you balance between the premature optimisation/YAGNI/overengineering risks of designing a solution that mitigates possible but not necessarily probable issues, as opposed to developing a simpler and easier solution that is likely to be as effective but doesn't incorporate preparedness for every possible eventuality?
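    For illustration, a minimal sketch of the pattern that change introduced – serve whatever is cached immediately and refresh it on a background thread. All names are hypothetical, and a real version would need expiry and error handling:

    ```csharp
    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class SoapResultCache
    {
        private readonly ConcurrentDictionary<string, decimal[]> _cache =
            new ConcurrentDictionary<string, decimal[]>();
        private readonly Func<string, decimal[]> _slowRemoteCall; // the SOAP calculation

        public SoapResultCache(Func<string, decimal[]> slowRemoteCall)
        {
            _slowRemoteCall = slowRemoteCall;
        }

        public decimal[] Get(string machine)
        {
            // Kick off a background refresh so later requests see fresh data...
            Task.Run(() => _cache[machine] = _slowRemoteCall(machine));

            // ...but answer this request right away with whatever we have,
            // instead of blocking until the remote calculation finishes.
            return _cache.TryGetValue(machine, out var cached)
                ? cached
                : Array.Empty<decimal>();
        }
    }
    ```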

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009, Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc. Since that time, Microsoft has released version 2.0 of the SDK and PSC has done significant development with it. Below are some of the milestones we have reached since the original case study.

    At the time of the original case study, two report types had been automated to output as PowerPoint presentations. Now that all the main products have been delivered, we have added three reports with Word document outputs and five more reports with PowerPoint outputs.

    One improvement we made over the original application was to create a PowerPoint add-in which allows the users to tag a slide. These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files. This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the template.

    The new library we created also enabled us to create two new Word-based reports in two weeks. The library abstracts the generation of the documents from the business logic and the data retrieval. The key to this is the markup. Content Controls are a good method for identifying sections of a template to be modified or replaced. Combine this with the concept of all data being generically either scalar or two-dimensional, and the code becomes more generic.

    In the end, we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.
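    As a rough illustration of that markup-driven approach, here is a minimal sketch using the SDK 2.0 strongly typed classes to find a Content Control by its tag and swap in generated text. The tag name and document path are hypothetical, and the real library layers generic scalar/tabular handling on top of this:

    ```csharp
    using System.Linq;
    using DocumentFormat.OpenXml.Packaging;
    using DocumentFormat.OpenXml.Wordprocessing;

    public static class TemplateWriter
    {
        // Replace the text of the content control whose <w:tag> matches "tag".
        public static void ReplaceTaggedText(string path, string tag, string newText)
        {
            using (var doc = WordprocessingDocument.Open(path, true))
            {
                // LINQ over the strongly typed tree finds the tagged control.
                var sdt = doc.MainDocumentPart.Document.Body
                    .Descendants<SdtElement>()
                    .First(e => e.SdtProperties.GetFirstChild<Tag>() != null
                             && e.SdtProperties.GetFirstChild<Tag>().Val == tag);

                // Write the generated value into the first text run and
                // drop any leftover runs from the template placeholder.
                var texts = sdt.Descendants<Text>().ToList();
                texts.First().Text = newText;
                texts.Skip(1).ToList().ForEach(t => t.Remove());
            }
        }
    }

    // Usage (hypothetical): TemplateWriter.ReplaceTaggedText("report.docx", "ReportTitle", "Q3 Results");
    ```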

    Read the article

  • How to provide value?

    - by Francisco Garcia
    Before I became a consultant, all I cared about was becoming a highly skilled programmer. Now I believe that what my clients need is not a great hacker, coder, architect... or whatever. I am more and more convinced every day that there is something of greater value. Everywhere I go I discover practices that used to make me roll my eyes in despair. I saw the software industry through pink glasses and laughed or cried at it depending on my mood. I was so convinced everything could be done better. Now I believe that what my clients desperately need is to find a balance between good engineering practices and desperate project execution. Although a great design can make a project cheap to maintain through many years, usually it is more important to produce something quick and cheap, just to see if the project can succeed. Before that point, it does not really matter that much whether the design is cheap to maintain; after it, it might be too late to improve things. Clients need people who get involved, who make some clandestine improvements to the project without their manager's approval/consent/knowledge... because they are never given time for some tasks we all know are important. Not all good things can be done; some of them must come out of free will, and some of them must be discussed in order to educate colleagues, managers, clients, and ourselves. Now my big question is: what exactly are the skills and practices, aside from great coding, that can provide real value to the economic success of software projects? (and not the software architecture alone)

    Read the article

  • Do you know about the Visual Studio ALM Rangers Guidance?

    - by Martin Hinshelwood
    I have been tasked with investigating the guidance available around Visual Studio 2010 for one of our customers, and it makes sense to make this available to everyone. The official guidance around Visual Studio 2010 has been created by the Visual Studio ALM Rangers and is a brew of a bunch of really clever guys' experiences working with the tools and customers. I will be creating a series of posts on the different guidance options, as many people still do not know about them, even though Willy-Peter Schaub has done a fantastic job of making sure they get the recognition they deserve. There is a full list of all of the Rangers Solutions and Projects on MSDN, but I wanted to add my own point of view on the usefulness of each one. If you don't know who the Rangers are, you should have a look at the Visual Studio ALM Rangers Index to see the full breadth of where the Rangers are. All of the Rangers Solutions are available on CodePlex, where you can download them and add reviews… Rangers Solutions and Projects: Do you know about the Visual Studio 2010 Architecture Guidance? More coming soon… These solutions took a very long time to put together, and I wanted to make sure that we all understand the value of the free time that members of the Product Team, Visual Studio ALM MVPs, and partners put in to make them happen.

    Read the article

  • Today's Links (6/22/2011)

    - by Bob Rhubart
    Presentations from the 4th International SOA Symposium + 3rd International Cloud Symposium - Presentations from Thomas Erl, Anne Thomas Manes, Glauco Castro, Dr. Manas Deb, Juergen Kress, Paulo Mota, and many others.

    Experiencing the New Social Enterprise | Kellsey Ruppell - Ruppell shares "some key points and takeaways from some of the keynotes yesterday at the Enterprise 2.0 Conference."

    Search-and-Rescue Technology Inspired by the Titanic | CIO.gov - A look at the technology behind the US Coast Guard's Automated Mutual Assistance Vessel Rescue system.

    "He who does not understand history…" | The Open Group Blog - "It's down to us (IT folks and Enterprise Architects) to learn from history, to use methodologies intelligently, find ways to minimize the risk and get business buy-in".

    Observations in Migrating from JavaFX Script to JavaFX 2.0 | Jim Connors - Connors' article "reflects on some of the observations encountered while porting source code over from JavaFX Script to the new JavaFX API paradigm."

    FY12 Partner Kickoff – Are you Ready? | Judson Althoff Blog - What does Oracle have up its sleeve for FY12? Oracle executives reveal all in a live interactive event, June 28/29.

    Webcast: Walking the Talk: Oracle's Use of Oracle VM for IaaS - Event Date: 06/28/2011 9:00am PT / Noon ET. Speakers: Don Nalezyty (Dir. Enterprise Architecture, Oracle Global IT) and Adam Hawley (Senior Director, Virtualization, Product Management, Oracle).

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-09

    - by Bob Rhubart
    SOA Suite create partition in Enterprise Manager | Peter Paul van de Beek - "In Oracle SOA Suite 10g, or more specific BPEL 10g, one could group functionality in domains," says Peter Paul van de Beek. "This feature has been away in the early versions of SOA Suite 11g. They have returned in more recent version and can be used for all SCA composites (instead of BPEL only). Nowadays these 10g domains are called partitions."

    OOW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis - The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting.

    My presentations at Oracle Open World 2012 | Guido Schmutz - The list of #OOW participants sharing their presentations grows with this post from Oracle ACE Director Guido Schmutz. You'll find Slideshare links to his presentations "Oracle Fusion Middleware Live Application Development (UGF10464)" and "Effective Fault handling in SOA Suite 11g (CON4832)."

    HTML Manifest for Content Folios | Kyle Hatlestad - Kyle Hatlestad, solutions architect with the Oracle Fusion Middleware A-Team, shares the details on "a project to create a custom content folio renderer in WebCenter Content."

    Adaptive ADF/WebCenter template for the iPad | Maiko Rocha - Oracle Fusion Middleware A-Team member Maiko Rocha responds to a customer request for information about how to create an adaptive iPad template for their WebCenter Portal application, "a specific template to streamline their workflow on the iPad."

    Thought for the Day: "I loved logic, math, computer programming. I loved systems and logic approaches. And so I just figured architecture is this perfect combination." — Maya Lin Source: Brainy Quote

    Read the article

  • Oracle Java Embedded Client 1.1 Released

    - by Roger Brinkley
    Yesterday an update release of Oracle Java Embedded Client (OJEC) 1.1 quietly slipped out the door for general availability. Until last year it was pretty difficult to get your hands on either a Connected Limited Device Configuration (CLDC) Java implementation for small devices or a Connected Device Configuration (CDC) implementation for medium devices without a substantial initial commitment. But with the release of OJWC (CLDC) and OJEC (CDC) last year, that has changed. OJEC 1.1 is a binary distribution designed for installation on medium configurations: a mid-range processor requiring low startup time and seamless upgrades, in a cost-sensitive hardware environment with anywhere from 3.5 MB to 8 MB. There are headless as well as headed versions available. It is intended for devices such as Blu-ray Disc players, set-top boxes, residential gateways, VOIP phones, and similar. From a software point of view, OJEC is the Java runtime platform implementation of Connected Device Configuration (CDC v1.1, JSR-218), Foundation Profile (FP v1.1, JSR-219), and Personal Basis Profile (PBP v1.1, JSR-217), and includes the optional packages RMI (JSR 66), JDBC (JSR 169), XML API for Java ME (JSR 280), and Java TV (JSR-927). New to this release is support for the XML API (JSR 280) and a number of bug fixes and performance enhancements, including improved Just-in-Time (JIT) compilation for the x86 chipset architecture. The platforms supported include ARMv5, ARMv6/ARMv7, MIPS 32 74K, and x86 in headless mode. For embedded developers there are a number of advantages to using Java, and if you have shied away from the Java ME edition in the past, I would encourage you to look into the updated version of OJEC 1.1.

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (This is no new information – just some bits and pieces compiled in one single place – plus a bit of a reality check.)

    When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint-derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here. The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed.

    Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role), the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange, which in turn is a base class for two other classes that detail the two types of possible configuration changes mentioned above: RoleEnvironmentConfigurationSettingChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore, the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure out whether your application can handle the configuration changes at runtime, or whether you would rather have a clean restart. Prior to the SDK 1.3 Visual Studio templates, the following code was generated to reboot if any configuration settings had changed:

    ```csharp
    private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // If a configuration setting is changing
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            // Set e.Cancel to true to restart this role instance
            e.Cancel = true;
        }
    }
    ```

    This is a little drastic as a default, since most applications will work just fine with changed configuration – maybe that's the reason this code has gone away in the 1.3 SDK templates (more).

    The Changed event gets fired after the configuration changes have been applied. Again, the changes get passed in just like in the Changing event, but from this point on RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart; you can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: when you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved), you typically don't need to recycle.

    In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice!

    Gotcha: A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint-derived class. But with the move to the full IIS model in 1.3, the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here. You might not be able to call into your application code to e.g. clear a cache. Keep that in mind! In this case you need to handle these events from e.g. global.asax.
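    For completeness, a minimal sketch of the cache-clearing approach described above, wired up from application code (global.asax territory under the full IIS model). ConfigurationCache is a hypothetical stand-in for StarterSTS's internal configuration object:

    ```csharp
    using System.Linq;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class ConfigurationChangeHandler
    {
        public static void WireUp()
        {
            RoleEnvironment.Changed += (sender, e) =>
            {
                if (e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any())
                {
                    // The new values are already visible via
                    // GetConfigurationSettingValue at this point, so dropping
                    // the in-memory cache is enough - no recycle needed.
                    ConfigurationCache.Clear(); // hypothetical cache
                }
            };
        }
    }
    ```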

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown its hacked-together backend for managing inventory, pricing, and feeds to various shopping engines (Yahoo, 3d cart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions, and feed generation for the shopping engines?

    The current solution looks like this:

    1) Inventory, pricing, and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week, data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3d cart, Yahoo, Google, and Merchant Advantage
       a. Each feed is a variant of the product data with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold

    The new solution should:

    1) Import data from ODBC, CSV, and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (< $80 add $10, $200 add $25) – see the sketch below
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk, but it may take too long to develop and deploy. Any suggestions?
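    As a trivial illustration of moving the spreadsheet's pricing step into testable code, here is a sketch of the stated rule. The question leaves the $80-$200 band unspecified, so the middle tier below is an assumption:

    ```csharp
    public static class PricingRules
    {
        // Rule from the question: "< $80 add $10, $200 add $25".
        // Assumption: "$200 add $25" means at or above $200, and the
        // unspecified band in between gets the $10 markup.
        public static decimal AdjustPrice(decimal basePrice)
        {
            return basePrice >= 200m ? basePrice + 25m : basePrice + 10m;
        }
    }
    ```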

    Read the article

  • ArchBeat Link-o-Rama for October 16, 2013

    - by OTN ArchBeat
    Coherence Special Interest Group (SIG) – Sydney, October 24th - If you're in the neighborhood... The Coherence Special Interest Group (SIG) in Sydney, Australia will be held on Thursday, October 24th at the Park Hyatt Sydney, in The Rocks, between 9am and 5pm. The event will include presentations from customers, partners, and Coherence engineering team members and product managers. Click the link for more info.

    OOW 2013 Summary for Fusion Middleware Architects & Administrators | Simon Haslam - Oracle ACE Director Simon Haslam shares a very thorough and detailed summary of the most interesting news coming out of Oracle OpenWorld 2013 for Fusion Middleware architects and administrators.

    Webgate Reverse Proxy Farm | Vinay Kalra - Vinay Kalra's blog post discusses architecture and recommendations for centralizing Webgate deployments onto a server farm.

    RDA 8.01 - Now A Better Experience for WebLogic Administrators | Daniel Mortimer - Daniel Mortimer's post offers some background on RDA (Remote Diagnostic Agent) and a lot of tech tips on setting it up.

    Coherence Virtual Developer Day: November 5th - This free online event includes sessions and hands-on labs focused on tooling updates and best practices for creating applications with WebLogic and Coherence as target platforms. November 5, 2013, 9am PT / Noon ET.

    Thought for the Day: "Experience is simply the name we give our mistakes." — Oscar Wilde, Irish writer and poet (October 16, 1854 – November 30, 1900) Source: brainyquote.com

    Read the article

  • Museum of Modern Art Starts Video Game Collection; Acquires Myst, Pac-Man, and More

    - by Jason Fitzpatrick
    The Museum of Modern Art is weighing in on the video-games-as-art debate by starting a collection of iconic video games and putting them up for public display. Read on to see what games are included in the initial batch and the MoMA's reasons for starting a video game collection.

    Although the MoMA collection is slated to grow to over 40 titles, the seed batch is 14 titles, including Pac-Man, Tetris, SimCity 2000, Myst, Portal, and Dwarf Fortress. In the announcement, they explain the motivation for building a video game collection:

    Are video games art? They sure are, but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design—a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity. Our criteria, therefore, emphasize not only the visual quality and aesthetic experience of each game, but also the many other aspects—from the elegance of the code to the design of the player's behavior—that pertain to interaction design. In order to develop an even stronger curatorial stance, over the past year and a half we have sought the advice of scholars, digital conservation and legal experts, historians, and critics, all of whom helped us refine not only the criteria and the wish list, but also the issues of acquisition, display, and conservation of digital artifacts that are made even more complex by the games' interactive nature. This acquisition allows the Museum to study, preserve, and exhibit video games as part of its Architecture and Design collection.

    The above quote is only a small snippet of a much lengthier look at the benefits of examining and preserving video games; hit up the link below to check out the full post, including future titles the MoMA would like to include in its archive. Video Games: 14 in the Collection, for Starters [Inside/Out]

    Read the article

  • Partner OBI 11g 5-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    14 - 18 January 2013, Oracle Reading (UK). REGISTER HERE NOW. This 5-day hands-on workshop gives attendees hands-on practice with the OBI 11g environment. Participants will gain an in-depth understanding of the new architecture of OBIEE 11g, the security model, and installation/configuration, as well as reporting features like the new ROLAP/MOLAP-style hierarchical browsing, new chart types, the Action Framework, and visualization. Please note that attendees are required to bring a laptop. This training is only for OPN member Partners. View laptop requirements and the detailed agenda here.

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility: assist with disaster recovery preparation and identify configuration changes. I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.

    Disaster Planning

    ScriptSqlConfig generates scripts for logins, jobs, and linked servers. It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster; the properties and configuration will need to be compared manually. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked server scripts don't include the password if you use a SQL Server account to connect to the linked server – you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins, and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login, it will automatically match up to the users in the database and have the correct password. This is the only script that I generate programmatically rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be reviewed manually in the event of a disaster, or you could DIFF them with the configuration on the new server.

    Configuration Changes

    These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones. This solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files.

    Database Scripting

    Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types, and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file, these really aren't appropriate for recreating an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts.

    Conclusion

    I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
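    To give a feel for how little SMO code this kind of scripting takes, here is a minimal sketch of the one-job-per-file approach, assuming the Microsoft.SqlServer.Smo assemblies are referenced; error handling and file-name sanitizing are omitted:

    ```csharp
    using System.IO;
    using System.Linq;
    using Microsoft.SqlServer.Management.Smo;
    using Microsoft.SqlServer.Management.Smo.Agent;

    public static class JobScripter
    {
        // Write each SQL Agent job on the instance to its own .sql file.
        public static void ScriptJobs(string serverName, string outputDir)
        {
            var server = new Server(serverName); // integrated security by default
            foreach (Job job in server.JobServer.Jobs)
            {
                // Script() returns a StringCollection of T-SQL batches.
                var batches = job.Script().Cast<string>().ToArray();
                File.WriteAllLines(
                    Path.Combine(outputDir, job.Name + ".sql"), batches);
            }
        }
    }
    ```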

    Read the article

  • Advancing my Embedded knowledge.....with a CS degree.

    - by Mercfh
    So I graduated last December with a B.S. in Computer Science from a pretty good, well-known engineering college. Towards the end, however, I realized that I actually like assembly/lower-level C programming more than I enjoy the higher-level, abstracted OO stuff. (For instance, I programmed my own device drivers for USB devices in Linux, things like that.) But we really didn't concentrate much on that in college; perhaps an EE/CE degree would've been better, but I knew the classes, and things weren't THAT much different. I've messed around with Atmel AVRs/Arduino stuff (mostly robotics) and Linux kernels/device drivers, but I really want to enhance my skills and maybe one day get a job doing embedded work. (I have a job now – an entry-level software dev/tester job. It's a good job, but not exactly where my passion lies.) I'm pretty good with C and certain assembly languages for specific microcontrollers. Is this even possible with a CS degree, or am I screwed? (Since technically my degree usually doesn't involve much embedded stuff.) If I'm NOT screwed, then what should I be studying/learning? How would I even go about it? I guess I could eventually say "Experienced with XXXX Microcontrollers/ASM/etc...", but still, it wouldn't be the same as having a CE/EE degree. Also, going back to college isn't an option, just FYI. edit: Any book recommendations for getting used to this stuff? I have ARM System-on-Chip Architecture (2nd edition); it's good... for ARM stuff lol

    Read the article

  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago, initially for link building/SEO stuff, but I'm actually a web developer and took the job out of desperation to have one (I'm still quite young and just finished my 2nd year of university). From the 3rd day, my boss realised that I'm not into that stuff at all, and since he had an idea for a web-based app, we started to plan it. I estimated that it shouldn't take me longer than two months to build, but as I was making it, we soon realised that we wanted to add more and more stuff to make it even better. So the development on my own lasted about 4 months, but then it became an enterprise-size app and we hired another programmer to work alongside me. The guy was awesome at what he did, but because I was assigned to be programmer/project manager, I had to set up milestones with deadlines, and we missed most of them, because most of the time it was too much work, and my lack of experience kept me setting really optimistic deadlines. We still kept adding features and changed the architecture of the application twice. My boss is a great guy and he gets that when we add features it expands the time frame in which things should be done, so he wasn't angry at me or the other guy. But I was feeling bad (I still am) that I suck at planning. I gained loads of experience on the programming side, but I still lack the management/planning skills, which makes me go nuts. So over the last year I have dedicated probably about 8 months of work to this app (obviously my studies affected it), and we're launching as a closed beta this month. So my question is: how do I get better at planning/managing a project, and how do you estimate the time things take? What do you take into consideration when setting goals? I'm working alone again because the other guy moved from the city, but I'm sure we'll be hiring someone to help me maintain it, so I need to get better at it. Any hints, pointers or anything on the topic are appreciated.

    Read the article

  • Introducing the New Boot Framework in CE 7

    - by Kate Moss' Open Space
    CE 7 introduces a new boot loader framework, BLDR (platform\common\src\common\bldr\). Some people like its power and flexibility; others may feel it's too complicated for a boot loader framework. Whatever your preference, it is already there, so let's take a look at its features. Unlike the previous BL framework (which CE 7 still provides in platform\common\src\common\boot\), a monolithic library, the new framework has a more articulated architecture. It not only defines the main body but also provides rich components: filesystems (BinFS/FAT), download transports, display, logging, and block devices such as BIOS INT13, FAL, IDE, and Flash. Note that in the block device category, FAL is for the legacy FMD/FAL and Flash is for the latest MSFlash. Some of you may have found that an MSFlash MDD/PDD-compatible partition is hard to create in a bootloader, and now there is a clean solution! (Since this is a big topic, I will introduce it in a future post.) Today, I am going to show you some basic helper components – the image loading functions. An OS image stored on a block device can be in file format, say your NK.BIN in a FAT volume, or in RAW format, say an image programmed to a BINFS partition. For the first, you can use BootFileSystemReadBinFile (platform\common\src\common\bldr\fileSystem\utils\fileSystemReadBinFile.c); for the second, BootBlockLoadBinFsImage (platform\common\src\common\bldr\block\utils\loadBinFs.c) loads the image from a partition. Need sample code? No problem: BootLoaderLoadOs in platform\cepc\src\boot\bldr\loados.c provides a perfect example.

    Read the article

  • How do I configure WakeOnUSB properly?

    - by wishi
    How do I configure Wake-on-USB properly on a 10.04 or 10.10 Ubuntu system (kernel 2.6.36 and higher if needed)? (Wake-on-USB is when the computer is asleep and, for example, a USB keyboard event wakes up the machine.) The notebook is an Acer Aspire Timeline X 1830T. I don't know in which way the Linux kernel supports the controllers. There are different ways to approach this, for example /proc/acpi/wakeup... or udev... or something with HAL? /proc/acpi/wakeup shows every device in S4, but I need S3:

    ```
    Device  S-state   Status   Sysfs node
    P0P2      S4    *disabled
    PEGP      S4    *disabled
    P0P1      S0    *disabled  pci:0000:00:1e.0
    EHC1      S4    *disabled  pci:0000:00:1d.0
    USB1      S4    *enabled
    USB2      S4    *disabled
    USB3      S4    *disabled
    USB4      S4    *disabled
    EHC2      S4    *disabled  pci:0000:00:1a.0
    USB5      S4    *disabled
    USB6      S4    *disabled
    USB7      S4    *disabled
    HDEF      S0    *disabled  pci:0000:00:1b.0
    RP01      S5    *disabled  pci:0000:00:1c.0
    PXSX      S5    *disabled  pci:0000:01:00.0
    RP02      S0    *disabled  pci:0000:00:1c.1
    PXSX      S5    *disabled  pci:0000:02:00.0
    RP03      S0    *disabled
    PXSX      S5    *disabled
    RP04      S0    *disabled
    PXSX      S5    *disabled
    RP05      S0    *disabled
    PXSX      S5    *disabled
    RP07      S0    *disabled
    PXSX      S5    *disabled
    RP08      S0    *disabled
    PXSX      S5    *disabled
    GLAN      S0    *disabled
    PEG3      S4    *disabled
    PEG5      S4    *disabled
    PEG6      S4    *disabled
    SLPB      S3    *enabled
    ```

    S4 is Suspend-to-Disk, afaik... and it doesn't seem to work either if I echo USB1 into the wakeup table; that just sets an S4 flag. Can I get the USB ports into S3? I want to make the machine wake up from Suspend-to-RAM (S3, ACPI standard) when a key on my external keyboard is pressed. It only wakes up if a key on the internal laptop keyboard is pressed. It seems that if I plug in a USB mouse, the USB port isn't even powered. I have no BIOS option to change this. Further device-specific information:

    ```
    $ usb-devices
    T:  Bus=01 Lev=02 Prnt=02 Port=01 Cnt=01 Dev#= 13 Spd=1.5 MxCh= 0
    D:  Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1
    P:  Vendor=04d9 ProdID=1603 Rev=03.10
    S:  Manufacturer=
    S:  Product=USB Keyboard
    C:  #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=100mA
    I:  If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=01 Prot=01 Driver=usbhid
    I:  If#= 1 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=00 Prot=00 Driver=usbhid

    root@underwater-laptop:/# lsusb
    [...]
    Bus 001 Device 013: ID 04d9:1603 Holtek Semiconductor, Inc.
    Bus 001 Device 004: ID 0bda:0138 Realtek Semiconductor Corp.
    Bus 001 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching Hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    [...]
    ```

    If this doesn't work, I'd like to understand why, though I think it is very hard to research these kernel internals. Any hints for good information here? I hope it's possible... I'm just looking for any solution. edit: this, waking up on USB, works on Windows! Thanks a lot, Marius
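    One avenue worth sketching (an assumption based on this machine's lsusb output, not a verified fix): wakeup usually has to be enabled at every hop of the chain, from the EHCI controller down through the hubs to the keyboard itself. The sysfs port paths below are examples and may differ on your hardware:

    ```
    # ACPI side: toggle the EHCI controller's wakeup flag
    # (writes toggle the entry; this does not persist across reboots)
    echo EHC1 | sudo tee /proc/acpi/wakeup

    # sysfs side: the PCI controller, the root hub, and the keyboard.
    # "1-1" and "1-1.2" are placeholder port paths - find yours via
    # "lsusb -t" or by listing /sys/bus/usb/devices/.
    echo enabled | sudo tee /sys/bus/pci/devices/0000:00:1d.0/power/wakeup
    for dev in usb1 1-1 1-1.2; do
        echo enabled | sudo tee /sys/bus/usb/devices/$dev/power/wakeup
    done
    ```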

    Read the article
