Search Results

Search found 10961 results on 439 pages for 'internal dns'.


  • Convert old AVI files to a modern format

    - by iWerner
    Hi, we have a collection of old home videos that were saved in AVI format a long time ago. I want to convert these files to a more modern format because the Totem Movie Player that comes with Ubuntu 10.4 seems to be the only program capable of playing them. The files seem to be encoded with an MJPEG codec, and playing them in VLC or Windows Media Player plays only the sound but there is no video. Avidemux was able to open the files, but the quality of the video is severely degraded: the video skips frames and is interlaced (it's not interlaced when playing it in Totem). Neither ffmpeg nor mencoder seems to be able to read the video stream. mencoder reports that it is using ffmpeg's codec. Here's a section from its output:

        Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
        [mjpeg @ 0x92a7260]mjpeg: using external huffman table
        [mjpeg @ 0x92a7260]mjpeg: error using external huffman table, switching back to internal
        Unsupported PixelFormat -1
        Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    while running ffmpeg produces the following:

        $ ffmpeg -i input.avi output.avi
        FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
        configuration: --extra-version=4:0.5.1-1ubuntu1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
        libavutil   49.15. 0 / 49.15. 0
        libavcodec  52.20. 1 / 52.20. 1
        libavformat 52.31. 0 / 52.31. 0
        libavdevice 52. 1. 0 / 52. 1. 0
        libavfilter  0. 4. 0 /  0. 4. 0
        libswscale   0. 7. 1 /  0. 7. 1
        libpostproc 51. 2. 0 / 51. 2. 0
        built on Mar 4 2010 12:35:30, gcc: 4.4.3
        [avi @ 0x87952c0]non-interleaved AVI
        Input #0, avi, from 'input.avi':
          Duration: 00:00:15.24, start: 0.000000, bitrate: 22447 kb/s
            Stream #0.0: Video: mjpeg, yuvj422p, 720x544, 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 720x544, q=2-31, 200 kb/s, 90k tbn, 25 tbc
            Stream #0.1: Audio: mp2, 44100 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 0 fps= 0 q=0.0 Lsize= 143kB time=15.23 bitrate= 76.9kbits/s
        video:0kB audio:119kB global headers:0kB muxing overhead 20.101777%

    So the problem is that the output does not contain any video, as evidenced by the video:0kB at the end. In all of the above cases the audio comes out fine. So my question is: what can I do to convert these files to a more modern format with more modern codecs?

    Read the article

  • Hardware wireless switch has no effect after suspend and 13.10 upgrade

    - by blaineh
    This seems to be a fairly chronic problem, as shown by the following questions:

        How do I fix a "Wireless is disabled by hardware switch" error?
        Wireless disabled by hardware switch
        "Wireless disabled by hardware switch" after suspend and other hardware buttons ineffective - how can I solve this?

    but no good solutions have been found! Wireless works fine after a reboot, but after a suspend the hardware switch (for my laptop this is f12) has no effect on the wireless; it is just permanently off, and a red LED shows that it is. My rfkill list all reads:

        0: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
        1: hp-wifi: Wireless LAN
            Soft blocked: no
            Hard blocked: yes

    Any combination of rfkill <un>block wifi doesn't work, although one time first blocking then unblocking actually turned it on again. sudo lshw -C network reads:

        *-network DISABLED
            description: Wireless interface
            product: AR9285 Wireless Network Adapter (PCI-Express)
            vendor: Qualcomm Atheros
            physical id: 0
            bus info: pci@0000:02:00.0
            logical name: wlan0
            version: 01
            serial: 78:e4:00:65:2e:3f
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=ath9k driverversion=3.11.0-12-generic firmware=N/A ip=155.99.215.79 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
            resources: irq:17 memory:90100000-9010ffff
        *-network DISABLED
            description: Ethernet interface
            product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: eth0
            version: 02
            serial: c8:0a:a9:89:b4:30
            size: 10Mbit/s
            capacity: 100Mbit/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
            resources: irq:42 ioport:2000(size=256) memory:90010000-90010fff memory:90000000-9000ffff memory:90020000-9002ffff

    Also, adding a /etc/pm/sleep.d/brcm.sh file as recommended here simply prevents the laptop from suspending at all, which of course is no good. This question has an answer urging me to install the original driver, but it wasn't an "accepted answer" so I'd rather not take a chance on it. Also, I'll admit I'm a bit lost on that and would like help doing so with the specific information I've given. xev shows that no internal event is triggered for my wireless switch (f12), but other function keys also acting as hardware switches work fine. I would be happy to provide more information, so long as you're willing to help me find it for you! This is a very annoying bug. I have a Compaq Presario CQ62.

    Edit: I just tried to reload BIOS defaults (or something) as shown by this video. Didn't work.

    Edit: I tried the contents of this answer, and it didn't work.

    Edit: I made a pastebin of dmesg. I couldn't even begin to understand the contents.

    Edit: Output of lspci | grep Network:

        02:00.0 Network controller: Qualcomm Atheros AR9285 Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • C++ property system interface for game editors (reflection system)

    - by Cristopher Ismael Sosa Abarca
    I have designed a reusable game engine for a project, and it works like this: it is a completely scripted game engine, but instead of the usual scripting languages such as Lua or Python it uses Runtime-Compiled C++, together with a modified version of Cistron (a component-based programming framework) adapted to be compatible with Runtime-Compiled C++. It uses the typical GameObject and Component classes of the component-based design pattern and is serializable via JSON, BSON or binary, which is useful for selecting which objects will be loaded the next time. The main problem: we want to expose the properties of our custom GameObjects and their Components in our level editor. Until now we have used hardcoded functions that access GameObject base-class virtual functions from the derived ones, so if you want to modify a property specific to a derived class you have to go into the code; the same happens with the classes derived from Component. In small projects this is not a problem, but for larger projects it becomes tedious, lengthy and error-prone. I've researched a lot to find a solution, without luck. I tried Ogitor's property system (since our engine is Ogre-based), but we found it inappropriate for a component-based design: it is limited to the Ogre classes and can lead to performance overhead. We also tried some code we found on the Internet; we tested it and it worked a little, but we considered the macro and lambda abuse too horrible. Take a look (some code omitted):

        IWE_IMPLEMENT_PROP_BEGIN(CBaseEntity)
        IWE_PROP_LEVEL_BEGIN("Editor");
        IWE_PROP_INT_S("Id", "Internal id", m_nEntID, [](int n) {}, true);
        IWE_PROP_LEVEL_END();
        IWE_PROP_LEVEL_BEGIN("Entity");
        IWE_PROP_STRING_S("Mesh", "Mesh used for this entity", m_pModelName, [pInst](const std::string& sModelName) {
            pInst->m_stackMemUndoType.push(ENT_MEM_MESH);
            pInst->m_stackMemUndoStr.push(pInst->getModelName());
            pInst->setModel(sModelName, false);
            pInst->saveState();
        }, false);
        IWE_PROP_VECTOR3_S("Position", m_vecPosition, [pInst](float fX, float fY, float fZ) {
            pInst->m_stackMemUndoType.push(ENT_MEM_POSITION);
            pInst->m_stackMemUndoVec3.push(pInst->getPosition());
            pInst->saveState();
            pInst->m_vecPosition.Get()[0] = fX;
            pInst->m_vecPosition.Get()[1] = fY;
            pInst->m_vecPosition.Get()[2] = fZ;
            pInst->setPosition(pInst->m_vecPosition);
        }, false);
        IWE_PROP_QUATERNION_S("Orientation (Quat)", m_quatOrientation, [pInst](float fW, float fX, float fY, float fZ) {
            pInst->m_stackMemUndoType.push(ENT_MEM_ROTATE);
            pInst->m_stackMemUndoQuat.push(pInst->getOrientation());
            pInst->saveState();
            pInst->m_quatOrientation.Get()[0] = fW;
            pInst->m_quatOrientation.Get()[1] = fX;
            pInst->m_quatOrientation.Get()[2] = fY;
            pInst->m_quatOrientation.Get()[3] = fZ;
            pInst->setOrientation(pInst->m_quatOrientation);
        }, false);
        IWE_PROP_LEVEL_END();
        IWE_IMPLEMENT_PROP_END()

    We are looking for a simpler way to do this without confusing the programmers (the engine will be released to the public). I have found ways to achieve this, but they are only available for the common scripting languages such as Lua, or for editors written in C#. It should also be portable, so that we can write "wrappers" for different GUI toolkits such as Qt or GTK; I am also thinking of using Boost.Wave to get additional macro functionality without creating my own compiler. The properties designed for use in the editor are removed in the game, since the save file contains their data and loads it using a simple 'load' function to reduce unnecessary code bloat; this may also be useful if some GameObject property needs to be hidden.
    In summary: is there a way to implement a reflection (property) system for a level editor, based on properties from derived classes? We can use C++11 and Boost (restricted only to Wave and PropertyTree).
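
    One direction that avoids the macro-and-lambda boilerplate shown above is to register type-erased getter/setter pairs per class and let the editor simply iterate over them. The following is only a minimal sketch of that idea under C++11 (the Property, Editable and TransformComponent names are invented for the example and are not part of Cistron, Ogitor or the engine described above); undo stacks, property grouping and Boost.PropertyTree serialization could be layered on top of the same interface:

        #include <functional>
        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        // A type-erased property: the editor only needs metadata plus a way to
        // read and write the value as text (a variant type would also work).
        struct Property {
            std::string name;
            std::string description;
            std::function<std::string()> get;            // read current value as text
            std::function<void(const std::string&)> set; // write value parsed from text
        };

        // Anything the level editor can inspect exposes a flat list of properties.
        class Editable {
        public:
            virtual ~Editable() {}
            const std::vector<Property>& properties() const { return m_props; }
        protected:
            void addProperty(Property p) { m_props.push_back(std::move(p)); }
        private:
            std::vector<Property> m_props;
        };

        // Example component: registers its members once, in plain code, no macros.
        class TransformComponent : public Editable {
        public:
            TransformComponent() {
                addProperty({"PosX", "X position",
                    [this] { return std::to_string(m_x); },
                    [this](const std::string& v) { m_x = std::stof(v); }});
                addProperty({"PosY", "Y position",
                    [this] { return std::to_string(m_y); },
                    [this](const std::string& v) { m_y = std::stof(v); }});
            }
        private:
            float m_x = 0.f, m_y = 0.f;
        };

        int main() {
            TransformComponent t;
            // The editor builds its UI by walking the property list of any Editable.
            for (const Property& p : t.properties())
                std::cout << p.name << " = " << p.get() << "\n";
            // Simulate the user typing a new value into the editor.
            t.properties()[0].set("42.5");
            std::cout << "PosX is now " << t.properties()[0].get() << "\n";
            return 0;
        }

    With something along these lines, per-class registration stays ordinary C++ that programmers can read, and a GUI wrapper for Qt or GTK only has to consume the Property list.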

    Read the article

  • Managing Social Relationships for the Enterprise – Part 2

    - by Michael Snow
    Reggie Bradford, Senior Vice President, Oracle

    On September 13, 2012, I sat down with Altimeter Analyst Jeremiah Owyang to talk about how enterprise businesses are approaching the management of both their social media strategies and internal structures. There's no longer any question as to whether companies are adopting social full throttle. That's exactly the way it should be, because it's a top online behavior across all age groups. For your consumers, it's an ingrained, normal form of communication. And beyond connecting with friends, social users are reaching out for information and service from brands. Jeremiah tells us 29% of Twitter followers follow a brand and 58% of Facebook users have "Liked" a brand. Even on the B2B side, people act on reviews and recommendations. Just as in the early 90's we saw companies move from static to dynamic web sites, businesses of all sizes are moving from just establishing a social presence to determining effective and efficient ways to use it. I like to say we're in the 2nd or 3rd inning of a 9-inning game. Corporate social started out as a Facebook page; now it's multiple channels servicing customers wherever they are. Social is also moving from merely moderating to analyzing, so that the signal can be separated from the noise and impactful influencers can be separated from other users. Organizationally, social started with the marketers. Now we're getting into social selling, commerce, service, HR, recruiting, and collaboration. That's Oracle's concept of enterprise social relationship management, a framework to extend social across the entire organization, in real time, in as holistic a way as possible. Social requires more corporate coordination than ever before. One of my favorite statistics is that the average enterprise corporation has 178 social accounts, according to Altimeter. Not all of them active, not all of them necessary, but 178 of them. That kind of fragmentation creates risk, so the smarter companies will look for solutions (as opposed to tools) that can organize, scale and defragment, as well as quickly integrate other networks and technologies that will come along. Our conversation goes deep into the various corporate social structures we're seeing, as well as the advantages and disadvantages of each. There are also a couple of great examples of how known brands used an integrated, holistic approach to achieve stated social goals. What's especially exciting to me is that the Oracle SRM framework for the enterprise provides companywide integration into one seamless system. This is not a dream. This is going to have substantial business impact in the next several years.

    Read the article

  • World Record Siebel PSPP Benchmark on SPARC T4 Servers

    - by Brian
    Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers. The SPARC T4 servers running the Siebel PSPP 8.1.1.4 workload which includes Siebel Call Center and Order Management System demonstrates impressive throughput performance of the SPARC T4 processor by achieving 29,000 users. This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users with a rate of 239,748 Business Transactions/hour. The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers. Performance Landscape Systems Txn/hr Users Call Center Order Management Response Times (sec) 1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) – Web 3 x SPARC T4-2 (2 x SPARC T4 2.85 GHz) – App/Gateway 1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) – DB 239,748 29,000 0.165 0.925 Oracle: Call Center + Order Management Transactions: 197,128 + 42,620 Users: 20300 + 8700 Configuration Summary Web Server Configuration: 1 x SPARC T4-1 server 1 x SPARC T4 processor, 2.85 GHz 128 GB memory Oracle Solaris 10 8/11 iPlanet Web Server 7 Application Server Configuration: 3 x SPARC T4-2 servers, each with 2 x SPARC T4 processor, 2.85 GHz 256 GB memory 3 x 300 GB SAS internal disks Oracle Solaris 10 8/11 Siebel CRM 8.1.1.5 SIA Database Server Configuration: 1 x SPARC T4-1 server 1 x SPARC T4 processor, 2.85 GHz 128 GB memory Oracle Solaris 11 11/11 Oracle Database 11g Release 2 (11.2.0.2) Storage Configuration: 1 x Sun Storage F5100 Flash Array 80 x 24 GB flash modules Benchmark Description Siebel 8.1 PSPP benchmark includes Call Center and Order Management: Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling. High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order and Incoming Call Creates Service Request . Three complex business transactions are executed simultaneously for specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40%, 30% respectively, which together were totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next one, the think time averaged approximately 10, 13, and 35 seconds respectively. Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process. High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for specific number of concurrent users concurrently with aforementioned three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, which together were totaling 30% of all transactions simulated in this benchmark. 
Between each user operation and the next one, the think time averaged approximately 20 and 67 seconds respectively. Key Points and Best Practices No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects. See Also Siebel White Papers SPARC T4-1 Server oracle.com OTN SPARC T4-2 Server oracle.com OTN Siebel CRM oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two parter this week, with this post focusing on desktop virtualization and the next one on server virtualization. Question: Why would I use the Oracle Secure Global Desktop Secure Gateway? Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses the TCP port 3104, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed"; and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.   To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both http and AIP traffic are "multiplexed" onto a single "well-known" TCP port, that is port 443, the https port.  This is also known as single-port firewall traversal.  This technique takes advantage of the fact that, as a "well-known service", port 443 is usually "open",   allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately. The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZ's, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse-proxy in the first-tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections, and proxying them, routing them on to SGD servers, deeper in the network. The client no longer needs to know the name/IP address of the SGD servers in their network, they connect to the gateway, only. The gateway takes care of those internal network details.     The Secure Gateway supports the same "single-port firewall" capability as does "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application, and Apache mod_proxy_balancer directives.   Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, its greater flexibility and enhanced features.  Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost.   For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Impressions of Pivotal Tracker

    Pivotal Tracker is a free, online agile project management system. I've been using it recently to better communicate to customers about the current state of our project. In Pivotal Tracker, the unit of work is a story, and stories are arranged into iterations or delivery cycles. Stories can be any level of granularity you want, but the idea is to use stories to communicate clearly to customers, so you don't want to write a novel. You especially don't want to write a list of detailed programming tasks. A good story for a point of sale system might be: Allow managers to override the price of an item while ringing up a customer. A less useful story: Script out the process of adding a manager flag to the user table and stage that script into the deploy directory. Stories are estimated using a point scale, by default 1, 2 or 3. Iterations are then automatically laid out by combining enough tasks to fill the point total for that period of time. You have to start with a guess on how many points your team can do in an iteration, then adjust with real data as you complete iterations. This is basic agile methodology, but where Pivotal Tracker adds value is that it automatically and graphically lays out iterations for you on your project site. This makes communication and planning easy. Compiling release notes is no longer painful, as it has been clear from the outset what work is going on. While I much prefer Pivotal Tracker's customer-facing interface over what we used previously (TFS), I see a couple of gaps. First, I have not been able to make much headway with the reporting tools. Despite my complaints about TFS, it can produce some nice reports. Second, it's not clear where, if at all, I'd keep track of purely internal tasks. I'm talking about server maintenance, cleaning up source control, checking back on some code which you never quite felt right about. There's no purpose in cluttering up an iteration backlog with these items, but if you don't track them, you lose them. I'm not sure what a good answer for that is. One gap I thought I'd see, which I don't, is more granular dev tasks. If I'm implementing a story, I'll write out the steps and track my progress, but really, those steps aren't useful to anybody but me. The only time I've found that level of detail really useful is when my tasks are defined at too high a level anyway, or when I'm working with someone who needs more coaching and might not be able to finish a story in time without some scaffolding to get them going. You can learn more about Pivotal Tracker at: http://www.pivotaltracker.com/learnmore.

    --- Relevant Links ---

    A good intro to stories: http://www.agilemodeling.com/artifacts/userStory.htm

    Read the article

  • Social Search: Looking for Love

    - by Mike Stiles
    For marketers and enterprise executives who have placed a higher priority on and allocated bigger budgets to search over social, it might be time to notice yet another shift that’s well underway. Social is search. Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it’s a good thing when people are able to find you. Google was the new Yellow Pages. Only with Google, you could get your listing first without naming yourself “AAAA Plumbing.” There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities. But what if the consumer isn’t using a search engine to find what they’re looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise’s numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search. You’ll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are including Facebook and Google+ into their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms. So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, “Anybody know any good catering companies?” will usually yield a link (and an endorsement) from a friend such as “Yeah, check out Just-Cheese-Balls Catering.” There’s no such human-driven force/influence behind the big search engines. Facebook’s Mark Zuckerberg and others call it “Friend Mining.” It is, in essence, searching for answers from friends’ experiences as opposed to faceless code. And Facebook has all of those friends’ experiences already stored as data. eMarketer says search in an $18 billion business, and investors are really into it. So no shock Facebook’s ready to leverage their social graph into relevant search. What do you do about all this as a brand? For one thing, it’s going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services…and you’ll wind up with the visibility you deserve in social search results.

    Read the article

  • Handling Configuration Changes in Windows Azure Applications

    - by Your DisplayName here!
    While finalizing StarterSTS 1.5, I had a closer look at lifetime and configuration management in Windows Azure. (this is no new information – just some bits and pieces compiled at one single place – plus a bit of reality check) When dealing with lifetime management (and especially configuration changes), there are two mechanisms in Windows Azure – a RoleEntryPoint derived class and a couple of events on the RoleEnvironment class. You can find good documentation about RoleEntryPoint here. The RoleEnvironment class features two events that deal with configuration changes – Changing and Changed. Whenever a configuration change gets pushed out by the fabric controller (either changes in the settings section or the instance count of a role) the Changing event gets fired. The event handler receives an instance of the RoleEnvironmentChangingEventArgs type. This contains a collection of type RoleEnvironmentChange. This in turn is a base class for two other classes that detail the two types of possible configuration changes I mentioned above: RoleEnvironmentConfigurationSettingsChange (configuration settings) and RoleEnvironmentTopologyChange (instance count). The two respective classes contain information about which configuration setting and which role has been changed. Furthermore the Changing event can trigger a role recycle (aka reboot) by setting EventArgs.Cancel to true. So your typical job in the Changing event handler is to figure if your application can handle these configuration changes at runtime, or if you rather want a clean restart. Prior to the SDK 1.3 VS Templates – the following code was generated to reboot if any configuration settings have changed: private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e) {     // If a configuration setting is changing     if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))     {         // Set e.Cancel to true to restart this role instance         e.Cancel = true;     } } This is a little drastic as a default since most applications will work just fine with changed configuration – maybe that’s the reason this code has gone away in the 1.3 SDK templates (more). The Changed event gets fired after the configuration changes have been applied. Again the changes will get passed in just like in the Changing event. But from this point on RoleEnvironment.GetConfigurationSettingValue() will return the new values. You can still decide to recycle if some change was so drastic that you need a restart. You can use RoleEnvironment.RequestRecycle() for that (more). As a rule of thumb: When you always use GetConfigurationSettingValue to read from configuration (and there is no bigger state involved) – you typically don’t need to recycle. In the case of StarterSTS, I had to abstract away the physical configuration system and read the actual configuration (either from web.config or the Azure service configuration) at startup. I then cache the configuration settings in memory. This means I indeed need to take action when configuration changes – so in my case I simply clear the cache, and the new config values get read on the next access to my internal configuration object. No downtime – nice! Gotcha A very natural place to hook up the RoleEnvironment lifetime events is the RoleEntryPoint derived class. But with the move to the full IIS model in 1.3 – the RoleEntryPoint methods get executed in a different AppDomain (even in a different process) – see here.. 
    You might not be able to call into your application code to e.g. clear a cache. Keep that in mind! In this case you need to handle these events from e.g. global.asax.

    Read the article

  • Oracle GoldenGate 11g Release 2 Launch Webcast Replay Available

    - by Irem Radzik
    For those of you who missed the Oracle GoldenGate 11g Release 2 launch webcasts last week, the replay is now available from the following url: Harnessing the Power of the New Release of Oracle GoldenGate 11g. I would highly recommend watching the webcast to learn about the many new features of the new release and to hear the product management team respond to the questions from the audience in a nice long Q&A section. In my blog last week I listed the media coverage for this new release. There is a new article published by ITJungle talking about Oracle GoldenGate's heterogeneity and support for DB2 for iSeries: Oracle Completes DB2/400 Support in Data Replication Tool. As mentioned in last week's blog, we received over 150 questions from the audience, and in this blog I'd like to continue to post some of the frequently asked questions and their answers:

    Question: What are the fundamental differences between classic data capture and integrated data capture? Do both use the redo logs in the source database?
    Answer: Yes, they both use redo logs. Classic capture parses the redo log data directly, whereas Integrated Capture lets the Oracle database parse the redo log record using an internal API.

    Question: Does the GoldenGate version need to match the Oracle Database version?
    Answer: No, they are not directly linked. Oracle GoldenGate 11g Release 2 supports Oracle Database version 10gR2 as well. For Oracle Database version 10gR1 and Oracle Database version 9i you will need GoldenGate 11g Release 1 or lower. And for Oracle Database 8i you need Oracle GoldenGate 10 or earlier versions.

    Question: If I already use Data Guard, do I need GoldenGate?
    Answer: Data Guard is designed as the best disaster recovery solution for Oracle Database. If you would like to implement a bidirectional Active-Active replication solution or need to move data between heterogeneous systems, you will need GoldenGate.

    Question: On compression and GoldenGate, if the source uses compression, is it required that the target also use compression?
    Answer: No, the source and target do not need to have the same compression settings.

    Question: Does GoldenGate support Advanced Security Option on the source database?
    Answer: Yes, it does.

    Question: Can I use GoldenGate to upgrade the Oracle Database to 11g and do an OS migration at the same time?
    Answer: Yes, this is a very common project where GoldenGate can eliminate downtime, give flexibility to test the target as needed, and minimize risks with a fail-back option to the old environment. For more information on database upgrades please check out the following white papers: Best Practices for Migrating/Upgrading Oracle Database Using Oracle GoldenGate 11g and Zero-Downtime Database Upgrades Using Oracle GoldenGate.

    Question: Does GoldenGate create any triggers in the source database, at the table level or row level, for real-time data integration?
    Answer: No, GoldenGate does not create triggers.

    Question: Can transformation be done after insert to the destination table, or does it need to be done before?
    Answer: It can happen in the Capture (Extract) process, in the Delivery (Replicat) process, or in the target database. For more resources on Oracle GoldenGate 11gR2 please check out our Oracle GoldenGate 11gR2 resource kit as well.

    Read the article

  • Webcast Q&A: ResCare Solves Content Lifecycle Challenges with Oracle WebCenter

    - by Kellsey Ruppel
    Last week we had the fourth webcast in our WebCenter in Action webcast series, "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter", where customer Joe Lichtefeld from ResCare and Wayne Boerger & Doug Thompson from Oracle Partner TEAM Informatics shared how Oracle WebCenter is powering allowing ResCare to solve content lifecycle challenges, reduce compliance and business risks, and increase adoption of intranet as primary business communication tool In case you missed it, here's a recap of the Q&A.   Joe Lichtefeld, ResCare  Q: Did you run into any issues in the deployment of the platform?A: We experienced very few issues when implementing the content management and search functionalities. There were some challenges in determining the metadata structure. We tried to find a fine balance between having enough fields to provide the functionality needed, but trying to limit the impact to the contributing members.  Q: What has been the biggest benefit your end users have seen?A: The biggest benefit to date is two-fold. Content on the intranet can be maintained by the individual contributors more timely than in our old process of all requests being updated by IT. The other big benefit is the ability to find the most current version of a document instead of relying on emails and phone calls to track down the "current" version. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?A: We experienced very little resistance. Most of our community groups were eager to be able to contribute and maintain their information. We had the normal hurdles of training and follow-up training with implementing a new system and process. As our second phase rolled out access to all employees, we have received more positive feedback on the accessibility of information. Wayne Boerger & Doug Thompson, TEAM Informatics Q: Can you integrate multiple repositories with the Google Search Appliance? Yes, the Google Search Appliance is designed to index lots of different repositories, from both public and internal sources. There are included connectors to many repositories, such as SharePoint, databases, file systems, LDAP, and with the TEAM GSA Connector and the Oracle Content Server. And the index for these repositories can be configured into different collections depending on the use cases that each customer has, and really, for each need within a customer environment. Q: How many different filters can you add when the search results are returned? A: Presuming this question is about the filtering on the search results. You can add as many filters as you like and it can be done by collection or any number of other criteria. Most importantly, customers now have the ability to limit the returned content by a set metadata value. Q: With the TEAM Sites Connector, what types of content can you sync? A: There’s really no limit; if it can be checked into the content server, then it is eligible for sync into Sites.  So basically, any digital file that has relevance to a Sites implementation can be checked into the WC Content central repository and then the connector can/will manage it. Q: Using the Connector, are there any limitations around where in Sites that synced content can be used? A: There are no limitations about where it can be used. 
When setting up your environment to use it, you just need to think through the different destinations on the Sites side that might use the content; that way you’ve got the right information to create the rules needed for the connector. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  ResCare Solves Content Lifecycle Challenges with Oracle WebCenter from Oracle WebCenter

    Read the article

  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an Open Source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's BootCamp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space. While VirtualBox is documented to allow access to physical drives, I could not actually get it to work. Also VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, they are only emulated. A full build of the Haiku OS takes 4.5 under VB. I had the hope of reducing build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems. The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen. Just imagine that the mouse was connected to the window title bar with a really stretchy spring. Also Haiku's mouse lags quite far behind where I have moved it. I found lots of Personal Package Archives that enable Spice from QEMU / KVM at the Ubuntu Personal Package Arhives. I tried a few of the PPAs but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a Wiki page about Spice, but it says that it only works on 64-bit. My Early 2006 MacBook Pro is 32-bit. Its Apple Model Identifier is MacBookPro1,1; these use Core Duos NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 Instruction Set Architecture that x86 does not have? When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I Googled the error message, the usual solution was to make ~/.Xauthority readible. However, .Xauthorty does not exist in my home directory. Instead I have a $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority. Neither does it work to copy the value of $XAUTHORITY into the file. I am ready to scream, because I've been five fscking days trying to make KVM work for Haiku development. There is a whole lot more that is broken than the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be. Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest. Thank You For Any Advice You Can Give Me. -

    Read the article

  • How do I cap rendering of tiles in a 2D game with SDL?

    - by farmdve
    I have some boilerplate code working: I basically have a tile-based map composed of just 3 colors, plus some walls, and I render it with SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number for the type of tile (color, or wall). I have pretty basic collision detection and it works; I can also detect continuous presses, which allows me to move pretty much anywhere I want. I also have a moving camera, which follows the object. The problem is that the tile-based map is bigger than the resolution, thus not all of the map can be displayed on the screen, but it's still rendered. I would like to cap it, but since this is new to me, I pretty much have no idea how. Although I cannot post all the code (even though I am a newbie and the code is pretty basic, it's already quite a few lines), I can post what I tried to do:

        void set_camera()
        {
            //Center the camera over the dot
            camera.x = ( player.box.x + DOT_WIDTH / 2 ) - SCREEN_WIDTH / 2;
            camera.y = ( player.box.y + DOT_HEIGHT / 2 ) - SCREEN_HEIGHT / 2;

            //Keep the camera in bounds.
            if(camera.x < 0 )
            {
                camera.x = 0;
            }
            if(camera.y < 0 )
            {
                camera.y = 0;
            }
            if(camera.x > LEVEL_WIDTH - camera.w )
            {
                camera.x = LEVEL_WIDTH - camera.w;
            }
            if(camera.y > LEVEL_HEIGHT - camera.h )
            {
                camera.y = LEVEL_HEIGHT - camera.h;
            }
        }

    set_camera() is the function which calculates the camera position based on the player's position. I won't pretend I know much about it.

        Rectangle box = {0,0,0,0};
        for(int t = 0; t < TOTAL_TILES; t++)
        {
            if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT))
                apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]);

            box.x += TILE_WIDTH;

            //If we've gone too far
            if(box.x >= LEVEL_WIDTH)
            {
                //Move back
                box.x = 0;

                //Move to the next row
                box.y += TILE_HEIGHT;
            }
        }

    This is basically my render code. The for loop loops over 192 tiles stored in an int array, each with their own unique value describing the tile type (color, or wall). box is an SDL_Rect containing the current position of the tile, which is calculated during rendering. TILE_HEIGHT and TILE_WIDTH are of value 80. So the cap is determined by

        if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT))

    However, this is just me playing with the values to see what doesn't break it. I pretty much have no idea how to calculate it. My screen resolution is 1024x768, and the tile map is of size 1280x960.
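
    For reference, the usual way to cap this is to convert the camera rectangle into a range of tile rows and columns and only loop over that range, rather than testing every tile. The sketch below illustrates the idea under the same assumptions as the snippets above (80x80 tiles, a 1280x960 map, a flat int tile array, and the apply_surface helper); the TILES_WIDE/TILES_HIGH constants and the render_visible_tiles name are invented for the example:

        #include <SDL/SDL.h>

        const int TILE_WIDTH  = 80;
        const int TILE_HEIGHT = 80;
        const int TILES_WIDE  = 16;   // 1280 / 80
        const int TILES_HIGH  = 12;   // 960 / 80 (16 * 12 = 192 tiles)

        // Helper from the question above (LazyFoo-style blit).
        void apply_surface(int x, int y, SDL_Surface* source, SDL_Surface* destination, SDL_Rect* clip);

        void render_visible_tiles(const SDL_Rect& camera, const int tiles[],
                                  SDL_Surface* surface, SDL_Surface* screen, SDL_Rect clips[])
        {
            // First and last tile column/row that can overlap the camera.
            int firstCol = camera.x / TILE_WIDTH;
            int firstRow = camera.y / TILE_HEIGHT;
            int lastCol  = (camera.x + camera.w) / TILE_WIDTH;
            int lastRow  = (camera.y + camera.h) / TILE_HEIGHT;

            // Clamp so we never index outside the tile array.
            if (firstCol < 0) firstCol = 0;
            if (firstRow < 0) firstRow = 0;
            if (lastCol >= TILES_WIDE) lastCol = TILES_WIDE - 1;
            if (lastRow >= TILES_HIGH) lastRow = TILES_HIGH - 1;

            for (int row = firstRow; row <= lastRow; ++row)
            {
                for (int col = firstCol; col <= lastCol; ++col)
                {
                    int t = row * TILES_WIDE + col;         // index into the flat tile array
                    int x = col * TILE_WIDTH  - camera.x;   // screen-space position
                    int y = row * TILE_HEIGHT - camera.y;
                    apply_surface(x, y, surface, screen, &clips[tiles[t]]);
                }
            }
        }

    With a 1024x768 camera this draws only the tile rows and columns that overlap the camera (at most about 14x11 with these sizes) instead of all 192, and nothing is blitted off-screen.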

    Read the article

  • My 2D collision code does not work as expected. How do I fix it?

    - by farmdve
    I have a simple 2D game with a tile-based map. I am new to game development; I followed the LazyFoo tutorials on SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number for the type of tile (color, or wall). The game is simple, but the code is a lot, so I can only post snippets.

        // Player moved out of the map
        if((player.box.x < 0))
            player.box.x += GetVelocity(player, 0);
        if((player.box.y < 0))
            player.box.y += GetVelocity(player, 1);
        if((player.box.x > (LEVEL_WIDTH - DOT_WIDTH)))
            player.box.x -= GetVelocity(player, 0);
        if((player.box.y > (LEVEL_HEIGHT - DOT_HEIGHT)))
            player.box.y -= GetVelocity(player, 1);

        // Now that we are here, we check for collisions
        if(touches_wall(player.box))
        {
            if(player.box.x < player.prev_x)
            {
                player.box.x += GetVelocity(player, 0);
            }
            if(player.box.x > player.prev_x)
            {
                player.box.x -= GetVelocity(player, 0);
            }
            if(player.box.y < player.prev_y)
            {
                player.box.y += GetVelocity(player, 1);
            }
            if(player.box.y > player.prev_y)
            {
                player.box.y -= GetVelocity(player, 1);
            }
        }
        player.prev_x = player.box.x;
        player.prev_y = player.box.y;

    Let me explain: player is a structure with the following contents:

        typedef struct
        {
            Rectangle box;       // Player position on a map (tile or whatever).
            int prev_x, prev_y;  // Previous positions
            int key_press[3];    // Stores which key was pressed/released. Limited to three keys, e.g. left, right and perhaps jump if possible in 2D
            int velX, velY;      // Velocity for X and Y coordinate.
            // Health
            int health;
            bool main_character;
            uint32_t jump_ticks;
        } Player;

    And Rectangle is just a typedef of SDL_Rect. GetVelocity is a function that, according to the second argument, returns the velocity for the X or Y axis. This code basically works; however, inside the if(touches_wall(player.box)) statement I have 4 more if statements. These 4 if statements are responsible for detecting collision on all 4 sides (up, down, left, right). However, they also act as a block for any other movement. Example: I move the object down and collide with the wall; while I continue to move down and still collide with the wall, I want to move left or right, which should be possible (not to mention in 3D games), but remember the 4 if statements? They are preventing me from moving anywhere. The original code on the LazyFoo Productions website has no problems, but it was written in C++, so I had to rewrite most of it to work, which is probably where the problem comes from. I also use a different method of moving than the one in the examples. Of course, that was just an example. I want to be able to keep moving no matter which wall I collide with. Before this bit of code, I had another one that had more logic in it, but it was flawed.
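
    For reference, a common way to get the sliding behaviour described above is to move and resolve one axis at a time, instead of testing all four directions after both axes have moved: if only the X step ends in a wall, only the X step is undone, so movement along Y still happens (and vice versa). A minimal sketch of that idea, reusing the names from the snippets above (player, GetVelocity, touches_wall, LEVEL_WIDTH, DOT_WIDTH and so on); it is an illustration, not a drop-in fix:

        // --- X axis: step, then undo only the X step if we left the map or hit a wall ---
        player.box.x += GetVelocity(player, 0);
        if (player.box.x < 0 || player.box.x > LEVEL_WIDTH - DOT_WIDTH || touches_wall(player.box))
        {
            player.box.x -= GetVelocity(player, 0);
        }

        // --- Y axis: same check, carried out independently of what happened on X ---
        player.box.y += GetVelocity(player, 1);
        if (player.box.y < 0 || player.box.y > LEVEL_HEIGHT - DOT_HEIGHT || touches_wall(player.box))
        {
            player.box.y -= GetVelocity(player, 1);
        }

    With per-axis resolution, the prev_x/prev_y bookkeeping is no longer needed for the collision check itself, because each axis is validated immediately after it moves.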

    Read the article

  • Microsoft Certifications &ndash; how to prep? and why?

    - by Kelly Jones
    I often get asked by my colleagues, “how do you prepare for Microsoft exams?” Well, the answer for me is a little complicated, so I thought I’d write up here what I do. The first thing I do is go to Microsoft’s website to find the exam that I need to take.  If you’re looking to get a particular certification, then their site lists the exam or exams that you’ll need to pass.  If you’ve already taken an exam, you can log onto the MCP website and use their certification planner.  This little tool tells you what tests you need, based on the exams you’ve already passed.  It is very helpful with the certifications that are multiple tests and especially ones that have electives. Once you’ve identified the test, you can use Microsoft’s website to see the topics that it covers.  This is a good outline to follow when you study.  I’ll keep this handy to reference back throughout my studying to make sure that I’m covering all the topics I need to know. The next step is probably where I am a little different from others.  IF the exam outline covers material that I’ve already been working with, then I’ll skip a lot of the studying and go directly to the practice tests.  However, if I’m looking at the outline and wondering how in the world do you do that? – then it’s time to hit the books. So, where to find study materials?  Try typing in the exam number into any search engine.  You’ll typically find a ton of resources.  If you’re lucky, you’ll find books that others recommend based on their studying and exam experience.  As a Sogeti employee, I have access to three really good resources: an internal company list of all of the consultants who have passed particular tests (on our Connex website), Books 24x7, and Transcender practice exams. Once my studying is done (either through books or experience), I’ll go through the practice exams.  I find them really helpful in getting my knowledge lined up to the thinking process that the exam writers use.  If I’m relying on my experience, then this really helps me to identify gaps in my knowledge that I’ll need to fill. That’s about it.  If I’m doing ok on the practice exams, then I’ll take the real thing.  I’ve found that the practice exams are usually more difficult than then real thing. Oh – one other thing I do related to Microsoft exams – I try to take any beta exams that Microsoft makes available that fall into my skill set.  Microsoft has started a blog to announce these and the seats usually fill up really quick.  The blog is at http://blogs.technet.com/betaexams/ . You don’t get your results instantly, like a normal exam, instead you have to wait for everyone to finish taking the beta exams and for Microsoft to determine which questions they are using and which they are dropping.  So, be prepared to wait six to eight weeks for your results.

    Read the article

  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have a only few locations for constants: One class for GUI and internal contstants (Tab Page titles, Group Box titles, calculation factors, enumerations) One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned) One class for application messages (logging, message boxes etc) The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only defined in the .h file and the values are assigned in the .cpp file. One of the advantages is that all strings etc are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar Group Boxes / Tab Pages etc at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc. However, I see certain disadvantages: Every single class is tightly coupled to the constants classes Adding/Removing/Renaming/Moving a constant requires recompilation of at least 90% of the application (Note: Changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed optimized release through the Visual Studio Compiler takes up to 3 hours. I don't know if the huge amount of class relations is the source but it might as well be. You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later"-thoughts. Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier.) Where would you store constants like that? Also what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer. PS: I know this question is kind of subjective but I honestly don't know of any better place than this site for this kind of question. Update on this project I have news on the compile time thing: Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries which was now possible much easier. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table, column names and more - more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of your Project (several applications) in release mode from up to 8 hours to less than one hour! 
We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact - and other benefits, such as less tight coupling, better reuseability etc - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)
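
    To make the header/source split mentioned above concrete: the point is that consumers of the constants only see declarations in the header, while the values live in a single .cpp file, so changing a value recompiles one translation unit instead of 90% of the application (adding, removing or renaming a constant still touches the header, which is the costly case described above). A minimal sketch, with invented file and constant names:

        // GuiConstants.h -- included by many translation units; no values here.
        #pragma once
        #include <string>

        namespace GuiConstants {
            extern const std::string kSettingsTabTitle;   // declaration only
            extern const double      kTaxFactor;
        }

        // GuiConstants.cpp -- the only file that must recompile when a value changes.
        #include "GuiConstants.h"

        namespace GuiConstants {
            const std::string kSettingsTabTitle = "Settings";
            const double      kTaxFactor        = 1.19;
        }

        // SomeDialog.cpp -- a consumer: depends only on the declarations above.
        #include "GuiConstants.h"
        #include <iostream>

        void showSettingsTitle() {
            std::cout << GuiConstants::kSettingsTabTitle << "\n";
        }

    Splitting the constants into several such headers grouped by subsystem (GUI, database, messages) then limits how much of the code base even sees a given declaration, which is the effect the update above reports.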

    Read the article

  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon? The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future. Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America. Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.” Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.). Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners. 
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there’s a better way, according to a recent Wall Street Journal article: “Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility.”

Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you’ve been trapped in some sort of covert psychological-stress test? Another WSJ article, called “The Self-Service Airport,” says there’s reason for hope there as well: “Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, ‘your first interaction could be with a flight attendant,’ said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc.”

And in the topsy-turvy world of air travel, it’s not just the passengers who’ve been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges—some beyond their control, some not—that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year.

In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to be able to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: “Airlines say the advanced technology will quicken the airport experience for seasoned travelers—shaving a minute or two from the checked-baggage process alone—while freeing airline employees to focus on fliers with questions. ‘It's more about throughput with the resources you have than getting rid of humans,’ said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA.”

Oracle’s attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:
• To retain and grow their customer base, airlines need to focus on the customer experience.
• To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
• The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
• Oracle’s Airline Data Model brings together multiple types of data that can jumpstart your data-warehousing project with rich out-of-the-box functionality.
• Oracle’s Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.

The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real-time. I’ll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell: first, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline—so, it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it’s back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: “In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers.” (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Ubiquitous BIP

    - by Tim Dexter
    The last number I heard from Mike and the PM team was that BIP is now embedded in more than 40 Oracle products. That's a lot of products to keep track of and to help out with new releases, etc. It's interesting to see how internal Oracle product groups have integrated BIP into their products. Just as you might integrate BIP, they have had to make a choice about how to integrate.

1. Library level - BIP is a pure Java app, and at the bottom of the architecture is a group of Java libraries that expose APIs you can use. They fall into three main areas: data extraction, template processing and formatting, and delivery. There are post-processing capabilities, but those APIs are embedded within the template processing libraries. Taking this integration route you are going to need to manage templates, data extraction and processing. You'll have your own UI to allow users to control all of this for themselves. Ultimate control, but some effort to build and maintain. I have been trawling some of the products during a coffee break. I found a great post on the reporting capabilities provided by BIP in the records management product within WebCenter Content 11g. This integration falls into the first category: content manager looks after the report artifacts itself and provides you the UI to manage and run the reports.

2. Web Service level - further up in the stack is the web service layer. This sits on the BI Publisher server as a set of services; runReport and scheduleReport are the main protagonists. However, you can also manage the reports, the users (locally managed) on the server and the catalog itself via the services layer. Taking this route, you still need to provide the user interface to choose reports and run them, but the creation and management of the reports is all handled by the Publisher server. I have worked with a few customers on this approach. The web services provide the ability to retrieve a list of reports the user can access; then the parameters and LOVs for the selected report; and finally a service to submit the report on the server (see the sketch below for a rough idea of what such a call can look like).

3. Embedded BIP server UI - the final level is not so well supported yet. You can currently embed a report and its various levels of surrounding 'chrome' inside another HTML-based application using a URL. Check the docs here. The look and feel can be customized but again, not easy, nor documented. I have messed with running the server pages inside an IFRAME; not bad, but not great. Taking this path should present the least amount of effort on your part to get BIP integrated, but there are a few gotchas you need to get around.

So a reasonable number of choices with varying amounts of effort involved. There is another option coming soon for all you ADF developers out there: the ability to drop a BIP report into your application pages. But that's for another post.
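To make option 2 a little more concrete, here is a minimal, illustrative sketch of submitting a report request to a Publisher server from Java over plain HTTP. The endpoint path, operation and element names, report path and parameters below are assumptions made for illustration only - check the WSDL published by your BI Publisher server for the real runReport contract.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hedged sketch of a web-service-level integration (option 2 above).
public class RunReportSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical service endpoint on the BIP server
        URL endpoint = new URL("http://bip-server:9704/xmlpserver/services/PublicReportService");

        // Illustrative SOAP payload: report path, output format and a single parameter.
        // Element names are placeholders, not the documented schema.
        String soapBody =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Body>"
          + "<runReport>"
          + "<reportAbsolutePath>/Samples/SalesReport.xdo</reportAbsolutePath>"
          + "<attributeFormat>pdf</attributeFormat>"
          + "<parameterName>REGION</parameterName><parameterValue>EMEA</parameterValue>"
          + "</runReport>"
          + "</soapenv:Body>"
          + "</soapenv:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(soapBody.getBytes(StandardCharsets.UTF_8));
        }

        // A successful reply would carry the rendered report (e.g. PDF bytes) in the SOAP response.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}

In a real integration you would typically generate a client from the server's WSDL (JAX-WS or similar) rather than hand-building the envelope; the point here is simply that a single service call hands the rendering work to the Publisher server while your application keeps its own UI.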

    Read the article

  • Package management system corrupted. Cannot install or remove packages. U12.04LTS

    - by user271490
    Having read other posts, I believe that this may be less about samba than about the update system. Below is the log of the failed installation of Samba. I have been trying without success to install/uninstall samba so that I could install anything else ... I cannot either install or remove samba using either update-manager or apt-get (nor indeed Software Centre). One of the errors that I have had to correct is the presence after "removal" (failed) of the /usr/share/system-config-samba directory, which finally allowed itself to be deleted. That, however, was then ... I have U12.04LTS, running on release 63 because I allowed the upgrade to 64 this morning which fell over - no output to monitor - obviously even less support for my graphic chip than I am suffering already (see other posts in this forum). According to my interpretation of the dpkg returned errors there may be some problem with the package files, but if this is the case then it is on servers 'main', 'nantes uni fr' and 'best fr' at the very least if not everywhere. The suggestions offered at Package operation failed and elsewhere have not worked for me. This linked post suggests that a similar error is present in other packages, or that the error is in the 'update system'. I have tried ... sudo apt-get remove samba ... autoremove ... install samba ... clean ... update -f all of the above. In update-manager I have tried the "reload packages list" which fails to terminate because of the error. I have tried to install and remove samba from the software centre ... :( I am at a loss ... I need help, please! Firstly to recover my apt-get/update-manager/Software Centre so that I can at least carry on with my continuing installation - up to communicating with the home network, hence the need for samba - which brings me to my second requirement ... samba. PS: is the "MaxReports" issue related, or separate?

UPDATE! Being heartily sick of restarting FF every 5 seconds I thought I'd try again with Chromium ... and got the same errors from dpkg about a corrupt compressed package - coincidence? Of course this was no longer in the clipboard when I got here because apport has just errored ... AAARRRGGGH!!! Why does every error clear the clipboard? Thanks for any and all help!!

installArchives() failed: Preconfiguring packages ...
... snip
(Reading database ...
... snip
(Reading database ... 184858 files and directories currently installed.)
Unpacking samba (from .../samba_2%3a3.6.3-2ubuntu2.10_i386.deb) ...
dpkg-deb (subprocess): data: internal gzip read error: ': data error'
dpkg-deb: error: subprocess returned error exit status 2
dpkg: error processing /var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb (--unpack):
subprocess dpkg-deb --fsys-tarfile returned error exit status 2
No apport report written because MaxReports is reached already
Selecting previously unselected package system-config-samba.
Unpacking system-config-samba (from .../system-config-samba_1.2.63-0ubuntu5_all.deb) ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Processing triggers for ufw ...
Processing triggers for man-db ...
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for hicolor-icon-theme ...
Errors were encountered while processing:
/var/cache/apt/archives/samba_2%3a3.6.3-2ubuntu2.10_i386.deb
Error in function:
dpkg: dependency problems prevent configuration of system-config-samba:
system-config-samba depends on samba; however:
Package samba is not installed.
dpkg: error processing system-config-samba (--configure):
dependency problems - leaving unconfigured

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions.

Understanding and Improving Your Java Process
Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts if you want to affect the performance of your Java process - from the beginning, the JVM specification, to interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory - you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (to a size of -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and watch the allocation rate and the frequency of pause times (a tiny example of sampling heap usage from inside the process follows at the end of this post). My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and very appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging. My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc). He also let us in on where to download early versions of 8, to report problems early on.

Software Development in teams
Being a scout leader, I'm drilled in different team building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania and gave concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes).

Memory Master Keynote
The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by forcing the audience to learn a memory mastering technique of remembering ten ordered things by heart, asking us to shout out the order backwards - and we made it! I desperately need this - bought the book, will get back on the subject.

Continuous Delivery
The most impressive presenter was Axel Fontaine on Continuous Delivery: very well prepared slides with key images of his message, and he moved about the stage like a rock star. The topic is of course highly interesting - how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for how to rewrite them.

To conclude, we had a great time and hope to see you at jDays next year :)
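As a small, self-contained illustration of the heap terminology above (not part of the talk itself): the heap is bounded by -Xms (initial size) and -Xmx (max size), and the standard java.lang.management API lets a process sample its own heap usage. The class name and output format below are just an example.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Run with, for example:  java -Xms256m -Xmx1g -Xss512k HeapSnapshot
public class HeapSnapshot {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();

        // used <= committed <= max; a steadily climbing "used" between GCs reflects the allocation rate
        System.out.printf("heap used:       %d MB%n", heap.getUsed() / (1024 * 1024));
        System.out.printf("heap committed:  %d MB%n", heap.getCommitted() / (1024 * 1024));
        System.out.printf("heap max (-Xmx): %d MB%n", heap.getMax() / (1024 * 1024));
    }
}

Sampling these numbers over time (or, better, letting a tool like Java Mission Control do it for you) is what turns the raw -Xms/-Xmx/-Xss settings into an actual picture of how the process uses its memory.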

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I had described the factors that lead to compliance complexity of financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational risk reduction objectives of regulatory compliance.

First, let’s look at the business implications of the increasing complexity of process compliance for financial institutions:
· Increased time and cost of compliance, due to duplication of effort in conforming to regulatory requirements as processes change in response to evolving regulatory mandates, shifting business priorities or internal/external audit requirements
· Delays in audit reporting, due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process

Next, let’s consider some approaches to managing the operational risk of business processes. Financial institutions considering reducing the operational risk of their processes, generally speaking, have two choices:
· Rip-and-replace existing applications with new off-the-shelf applications.
· Extend the capabilities of existing applications by modeling their data and process interactions, with other applications or user-channels, outside of the application boundary using BPM.

The benefit of the first approach is that compliance with new regulatory requirements would be embedded within the boundaries of these applications. However, pre-built compliance of any packaged application or custom-built application should not be mistaken as a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, where vulnerabilities arise to unknown and accidental faults that potentially result in errors and lead to partial or total failure. The gist of the above argument is that processes which reside outside application boundaries - in other words, span multiple applications - constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with “KYC” policies void even when the “KYC” checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process “gaps” for Financial Services processes including “Customer On-boarding”, “Quote-to-Contract”, “Deposit/Loan Origination”, “Trade Exceptions”, “Interest Claim Tracking”, etc.
If you are faced with a similar challenge and need any guidance, feel free to drop me a note.

    Read the article

  • Field Report - Notes from IHRIM Atlanta Event

    - by Natalia Rachelson
    A guest post by Steve Boese, Director, Talent Strategy, Oracle

Recently I had the pleasure to serve as a guest speaker at the IHRIM Atlanta/SE Chapter meeting in Atlanta, Georgia. The focus of my talk was Mobile Technology in Human Resources, and while this is still a new and developing area, the enormous growth and ubiquitous presence of mobile devices and the increasing importance of and demand for constant connectivity in both our personal and professional lives have put planning and developing a mobile HR technology strategy high on many organizations’ lists of priorities in 2012. Numerous studies have shown that the confluence of ever-rising sales of smartphones and tablets, and the increasing tendency for workers of all kinds to be more mobile and less tied down to traditional, fixed-location workplaces and what now seem like old-fashioned, PC-centric computing environments, is driving Human Resources leaders to think about how, where, when, and for whom the deployment of mobile HR solutions will help them address their business needs, and put information in the hands of those who need it, when they need it, and on their preferred devices.

In the session we talked about some of the potential opportunities for mobile HR technologies, from simple workflow-based approval capability, to employee directories and robust employee profiles, to more advanced use cases like internal social networking and location-based mobile recruiting applications. And truly we are just scratching the surface of the potential and the value that all kinds of HR-related mobile technologies will help deliver to enterprises in the coming years. Additionally, it was encouraging to talk with many of the HR leaders in attendance who expressed interest in these kinds of mobile HR technology opportunities, as well as to hear how some of them are already working on developing their own mobile strategies or experimenting with mobile solutions in their workforces.

It was a fantastic meeting and I’d like to express my thanks to Kim Bryant, IHRIM Atlanta/SE Board President, the other board members, and also the IHRIM Atlanta Chapter members and attendees at the event. If you are in the Atlanta area and are interested in HR and HR Technology, you can learn more about the programs and services that the Chapter has to offer at their website - http://www.ihrimatlantase.org/. And for people interested in what we at Oracle are working on in mobile, you can also sign up to receive the latest updates about the Oracle Fusion Applications tablet solutions, Oracle Fusion Tap, at https://fusiontap.oracle.com/.

    Read the article

  • What is a good design pattern / lib for iOS 5 to synchronize with a web service?

    - by Junto
    We are developing an iOS application that needs to synchronize with a remote server using web services. The existing web services have an "operations" style rather than REST (implemented in WCF but exposing JSON HTTP endpoints). We are unsure of how to structure the web services to best fit with iOS and would love some advice. We are also interested in how to manage the synchronization process within iOS.

Without going into detailed specifics, the application allows the user to estimate repair costs at a remote site. These costs are broken down by room and item. If the user has an internet connection this data can be sent back to the server. Multiple photographs can be taken of each item, but they will be held in a separate queue, which sends when the connection is optimal (ideally wifi). Our backend application controls the unique ids for each room and item. Thus, each time we send these costs to the server, the server echoes the central database ids back, so that they can be synchronized in the mobile app. I have simplified this a little, since the operations contract is actually much larger, but I just want to illustrate the basic requirements without complicating matters.

Firstly, the web service architecture: we currently have two operations, GetCosts and UpdateCosts. My assumption is that if we used a strict REST architecture we would need to break our single web service operations into multiple smaller services. This would make the services much more chatty, and we would also have to guarantee a delivery order from the app. For example, we need to make sure that containing rooms are added before the item. Although this seems much more RESTful, our perception is that these extra calls are expensive connections (security checks, database calls, etc). Does the type of web api (operation over service focus) determine chunky vs chatty? Since this is mobile (3G), are we better handling lots of smaller messages, or a few large ones?

Secondly, the iOS side: what is the current advice on how to manage data synchronization within the iOS (5) app itself? We need multiple queues and we need to guarantee delivery order in each queue (and technically, ordering between queues). The server needs to control unique ids and other properties and echo them back to the application. The application then needs to update an internal database and, when re-updating, make sure the correct ids are available in the update message (essentially multiple inserts and updates in one call). Our backend has a ton of business logic operating on these cost estimates. We don't want any of this in the app itself. Currently the iOS app sends the cost data, and then the server echoes that data back with populated ids (and other data). The existing cost data is deleted and the echoed response data is added to the client database on the device. This is causing us problems, because any photos might not have been sent, but the original entity tree has been removed and replaced. Obviously updating the costs tree rather than replacing it would remove this problem, but I'm not sure if there are any nice xcode libraries out there to do such things. I welcome any advice you might have.
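For readers following along, here is a rough, language-agnostic sketch (written in Java only for brevity, not because it is iOS code) of the ordered-queue-with-id-echo pattern the question describes: items are sent strictly in order, and the server's echoed id is merged into the local store before the next item is released. All class, method and field names here are invented for illustration.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a FIFO sync queue that preserves ordering and merges echoed server ids.
public class CostSyncQueue {

    /** A locally created room/item plus the id the server eventually assigns. */
    static class PendingChange {
        final String localId;   // client-side temporary id
        String serverId;        // filled in from the server echo
        PendingChange(String localId) { this.localId = localId; }
    }

    /** Abstraction over the UpdateCosts call; returns the server-assigned id, or null on failure. */
    interface SyncTransport {
        String send(String localId);
    }

    private final Deque<PendingChange> queue = new ArrayDeque<>();
    private final Map<String, String> localToServerId = new HashMap<>();

    public void enqueue(PendingChange change) {
        queue.addLast(change);          // FIFO: rooms are enqueued before the items they contain
    }

    /** Drain the queue one item at a time so delivery order is guaranteed. */
    public void flush(SyncTransport transport) {
        while (!queue.isEmpty()) {
            PendingChange next = queue.peekFirst();
            String echoedId = transport.send(next.localId);   // blocking call, kept simple for clarity
            if (echoedId == null) {
                break;                  // offline or server error: stop and keep the remaining items queued
            }
            next.serverId = echoedId;
            localToServerId.put(next.localId, echoedId);       // record the echoed id (the merge step)
            queue.removeFirst();        // only discard the item once the echo is safely recorded
        }
    }
}

The important property is that the local entity tree is updated in place via the id mapping rather than deleted and rebuilt, so a separate photo queue that refers to the same local ids keeps working even if it drains later over wifi.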

    Read the article

  • Capgemini Global Business Process Management Report

    - by JuergenKress
    Welcome to the Capgemini Global Business Process Management (BPM) Report. This report is an exploration of key trends in BPM as seen by CXOs across a broad selection of sectors and geographies. BPM is perhaps at a tipping point - it’s certainly at an exciting stage in its evolution. As both an engineer and an Operational Research practitioner in my early career, and subsequently as a consultant, I have seen BPM through its development over the last 26 years. BPM has its roots in management practices such as Total Quality Management, Business Process Reengineering & Model Based Development; but the advent of the new generation of sophisticated modelling and process execution technologies has greatly enhanced BPM’s power to truly transform businesses. This has created one of the most rapidly growing and attractive market sectors for both services and technology.

We see BPM as a critical management discipline that, when executed against clear, cross-organizational business objectives, can deliver exceptional value to that organization. However, we also see that the potential for BPM is not well understood. Our decision to conduct this global survey stemmed from discussions with our clients. We sought to gain a better impression of their understanding of BPM, how they measure its value, and how far it is prioritized within their Business and Technology Transformation efforts. This research confirms our belief that BPM needs to be a jointly owned Business and IT discipline. It also demonstrates that it is starting to gain significant traction in the market and investments are starting to pay dividends to the early adopters. At Capgemini we are being asked by our clients to help them simplify and improve their business models and the technology that supports them, and we are already seeing BPM become an integral and key part of this proposition.

Business Process Management is becoming ever more relevant to both large and small organizations in the current economic climate. At a time when many different market sectors are facing slow revenue growth, customer churn and increased pressures on costs, BPM becomes a critical weapon in the battle for efficiency and effectiveness in processes. Furthermore, in a challenging and changing business environment that is characterized by uncertainty, it allows organizations to adapt, be more agile and fleet of foot. Capgemini is seeing strong demand for BPM services in markets such as the USA, the UK, the Netherlands and France; and there are clear signs of increased interest in other geographies such as Germany, Sweden, Spain, Italy and Australia. In sector terms, the financial services industry has led the way in BPM adoption over the recent past, driven by increased focus on customer-centricity and regulatory compliance. Other sectors - public sector, utilities, telco, retail and manufacturing - are now not only catching up, but are starting to use BPM in new ways to create new business models to serve customers and outsmart the competition. The research findings also show, however, that this is a complex landscape, and we are not seeing adoption of BPM in a clear and consistent way. This report also looks at some of the barriers to adoption, with organizational silos being a major obstacle. Waters are further muddied by fragmented budgets, lack of clear governance and ownership, and internal politics.
The objective of our investment in this research project was to shed some light on these elements, with a view to assisting organizations to create strategies that avoid, or at least mitigate, some of these barriers to success. Management of change in such endeavours is a key part in enabling the appropriate alignment of business and technology to support their transformation efforts. I hope that you find this report of benefit in the further adoption of Business Process Management. Get the full report here.

SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: Capgemini,bpm report,bpm market,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article
