Search Results

Search found 11640 results on 466 pages for 'share credentials'.


  • Building Extensions Using E-Business Suite SDK for Java

    - by Sara Woodhull
    We’ve just released Version 2.0.1 of Oracle E-Business Suite SDK for Java. This new version has several great enhancements added after I wrote about the first version of the SDK in 2010. In addition to the AppsDataSource and Java Authentication and Authorization Service (JAAS) features that were in the first version, the Oracle E-Business Suite SDK for Java now provides:

    - Session management APIs, so you can share session information with Oracle E-Business Suite
    - Setup script for UNIX/Linux for AppsDataSource and JAAS on Oracle WebLogic Server
    - APIs for Message Dictionary, User Profiles, and NLS
    - Javadoc for the APIs (included with the patch)
    - Enhanced documentation included with Note 974949.1

    These features can be used with either Release 11i or Release 12.

    References:

    - AppsDataSource, Java Authentication and Authorization Service, and Utilities for Oracle E-Business Suite (Note 974949.1)
    - FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications (Doc ID 1296491.1)

    What’s new in those references? Note 974949.1 is the place to look for the latest information as we come out with new versions of the SDK. The patch number changes for each release. Version 2.0.1 is contained in Patch 13882058, which is for both Release 11i and Release 12. Note 974949.1 includes the following topics:

    - Applying the latest patch
    - Using Oracle E-Business Suite Data Sources
    - Oracle E-Business Suite Implementation of Java Authentication and Authorization Service (JAAS)
    - Utilities
    - Error logging
    - Session management
    - Message Dictionary
    - User profiles
    - Navigation to External Applications
    - Java EE Session Management Tutorial

    For those of you using the SDK with Oracle ADF: besides some Oracle ADF-specific documentation in Note 974949.1, we have also updated the ADF Integration FAQ.

    EBS SDK for Java Use Cases

    The uses of the Oracle E-Business Suite SDK for Java fall into two general scenarios for integrating external applications with Oracle E-Business Suite:

    - An application sharing a session with Oracle E-Business Suite
    - An independent application (not shared session)

    With an independent application, the external application accesses Oracle E-Business Suite data and server-side APIs, but it has a completely separate user interface. The external application may also launch pages from the Oracle E-Business Suite home page, but after the initial launch there is no further communication with the Oracle E-Business Suite user interface.

    Shared session integration means that the external application uses an Oracle E-Business Suite session (ICX session), shares session context information with Oracle E-Business Suite, and accesses Oracle E-Business Suite data. The external application may also launch pages from the Oracle E-Business Suite home page, or regions or pages from the external application may be embedded as regions within Oracle Application Framework pages.

    Both shared session applications and independent applications use the AppsDataSource feature of the Oracle E-Business Suite SDK for Java. Independent applications may also use the Java Authentication and Authorization Service (JAAS) and logging features of the SDK. Applications that are sharing the Oracle E-Business Suite session use the session management feature (instead of the JAAS feature), and they may also use the logging, profiles, and Message Dictionary features of the SDK.
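    [Editor's note] As a minimal sketch of the AppsDataSource scenario, here is what an external Java application might look like once the data source has been set up on Oracle WebLogic Server. The JNDI name and the query are hypothetical placeholders, not from the SDK documentation; consult Note 974949.1 for the actual configuration produced by the setup script:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class AppsDataSourceSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical JNDI name; use whatever name your setup script registered.
                InitialContext ctx = new InitialContext();
                DataSource ds = (DataSource) ctx.lookup("jdbc/AppsDataSource");

                // The connection comes from the SDK-managed data source, so no
                // database credentials need to appear in application code.
                try (Connection conn = ds.getConnection();
                     PreparedStatement stmt = conn.prepareStatement(
                             "SELECT user_id FROM fnd_user WHERE user_name = ?")) {
                    stmt.setString(1, "SYSADMIN");
                    try (ResultSet rs = stmt.executeQuery()) {
                        while (rs.next()) {
                            System.out.println("SYSADMIN user_id = " + rs.getInt(1));
                        }
                    }
                }
            }
        }

    The design point is that credential handling stays in the data source configuration rather than in the external application itself.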
    The session management APIs allow you to create, retrieve, validate, and cancel an Oracle E-Business Suite session (ICX session) from your external application. Session information and context can travel back and forth between Oracle E-Business Suite and your application, allowing you to share session context information across applications.

    Note: Generally you would use either the Java Authentication and Authorization Service (JAAS) feature of the SDK or the session management feature, but not both together.

    Send us your feedback

    Since the Oracle E-Business Suite SDK for Java is still pretty new, we’d like to know who is using it and what you are trying to do with it. We’d like to get this type of information:

    - Customer name and brief use case
    - Configuration and technologies (Oracle WebLogic Server or OC4J, plain Java, ADF, SOA Suite, and so on)
    - Project status (proof of concept, development, production)
    - Any other feedback you have about the SDK

    You can send me your feedback directly at Sara dot Woodhull at Oracle dot com, or you can leave it in the comments below. Please keep in mind that we cannot answer support questions, so if you are having specific issues, please log a service request with Oracle Support. Happy coding!

    Related Articles

    - New Whitepaper: Extending E-Business Suite 12.1.3 using Oracle Application Express
    - To Customize or Not to Customize?
    - New Whitepaper: Upgrading your Customizations to Oracle E-Business Suite Release 12
    - ATG Live Webcast: Upgrading your EBS 11i Customizations to Release 12

    Read the article

  • HP and Microsoft: That’s What Friends Are For?

    - by andrewbrust
    Today, HP pre-announced the second coming out for its recently acquired Palm webOS mobile operating system. I happen to think webOS is quite good, and when the Palm Pre first came out, I thought it a worthwhile phone. I was worried, though, that the platform would never attract the developer mindshare it needed to be competitive, and that turned out to be the case. But then HP acquired Palm and announced it would be revamping the webOS offering, not only on phones but also on tablets. It later announced that it would also use webOS as an embedded solution on HP printers.

    The timing of this came shortly after HP had announced it would be producing a “Slate” product running Windows 7. After the Palm deal, HP became vague about whether the Windows-powered slate would actually come out. They did, in fact, bring the Slate 500 to market, but by some accounts, they only built 5,000 units.

    Another recent awkward moment between HP and Microsoft: HP withdrew itself from the Windows Home Server ecosystem. That one hurt, as they were the dominant OEM there. But Microsoft’s decision to kill Drive Extender had driven away many parties, not just HP.

    On Wednesday, HP came out with their TouchPad and new phone models. Not a nice thing for Windows Phone 7, but other OEMs are taking a wait-and-see attitude there too, I suppose. There was one more zinger, though, and it was bigger: HP announced they’d be porting webOS to PCs.

    No Windows Phone 7? OK. No Windows Home Server? Whatcha gonna do? But no Windows 7 either? From HP? What comes after that, no ink and toner?

    Some people think Microsoft’s been around too long to be relevant. But HP started out making oscilloscopes! The notion that HP is too cool for Windows school is a bit far-fetched. This is the company that bought EDS. This is the company that bought Compaq. And Compaq was the company that bought Digital Equipment Corporation. Somehow, I don’t think the VT220 outclasses Windows PCs.

    What could possibly be going on? My sense is that HP wants to put webOS on PCs that also have Windows, and that people will buy them because they have Windows. And for every one of those sold, HP gets to count, technically speaking, another webOS unit in the install base. webOS is really nice, as I said. But being good isn’t good enough when you are trying to gain market share. Number of units shipped matters. The question is whether counting PCs with webOS installed, but dormant, is helpful to HP’s cause. Seems like a funny way to account for market share, and a strange way to treat a big partner in Redmond.

    Read the article

  • Some thoughts on email hosting for one’s own domain

    - by jamiet
    I have used the same email providers for my own domains for a few years now; however, I am considering moving over to a new provider. In this post I’ll share my current thoughts, and hopefully I’ll get some feedback that might help me decide what to do next.

    What I use today

    I have three email addresses that I use primarily (I have changed the domains in this blog post as I don’t want to give them away to spammers):

    - [email protected] – My personal account that I give out to family and friends and which I use to register on websites
    - [email protected] – An account that I use to catch email from the numerous mailing lists that I am on
    - [email protected] – I am a self-employed consultant, so this is an account that I hand out to my clients, my accountant, and other work-related organisations

    Those two domains (jtpersonaldomain.com & jtworkdomain.com) are both managed at http://domains.live.com, which is a fantastic service provided by Microsoft that for some perplexing reason they never bother telling anyone about. It offers multiple accounts (I have seven at jtpersonaldomain.com, though as already stated I only use two of them) which are accessed via Outlook.com (formerly Hotmail.com), along with usage reporting plus a few other odds and sods that I never use. Best of all, it’s totally free. In addition, given that I have both domains hosted using http://domains.live.com, I can link my various accounts together and switch between them at Outlook.com without having to log in and out.

    N.B. You’ll notice that there are two other accounts listed there in addition to the three I already mentioned. One is my mum’s account, which helps me provide IT support/spam-filtering services to her, and the other is the donation account for AdventureWorks on Azure. I find that linking feature to be very handy indeed.

    Finally, http://domains.live.com is the epitome of “it just works”. I set up jtworkdomain.com at http://domains.live.com over three years ago and I am pretty certain I haven’t been back there even once to administer it.

    Proposed changes

    OK, so if I like http://domains.live.com so much, why am I considering changing? Well, I earn my corn in the Microsoft ecosystem, and if I’m reading the tea leaves correctly it’s looking increasingly likely that the services I’m going to have to be familiar with in the future are all going to be running on top of and alongside Windows Azure Active Directory and Office 365 respectively. It’s clear to me that Microsoft is pushing its customers toward cloud services and, like it or lump it, data integration developers like me may have to come along for the ride. I don’t think the day is too far off when we can log into Windows Azure SQL Database (aka SQL Azure), Team Foundation Service, Dynamics, etc. using the same credentials that are currently used for Office 365, and over time I would expect those things to get integrated together a lot better – that integration will be based upon a Windows Azure Active Directory identity. This should not come as a surprise; in my opinion Microsoft’s whole enterprise play over the past 15 or 20 years can be neatly summarised as “get people onto Windows Server and Active Directory, then upsell from there” – in the not-too-distant future the only difference is that they’re trying to do it in the cloud. I want to get familiar with these services, and hence I am considering moving jtworkdomain.com onto Office 365.
    I’ll lose the convenience of easily being able to switch to that account at Outlook.com, and moreover I’ll have to start paying for it (I think it’ll be about fifty quid a year – not a massive amount, but it’s quite a bit more than free), but increasingly this is beginning to look like a move I have to make.

    So that’s where my head is at right now. Anyone have any relevant thoughts or experiences to share? Please let me know in the comments below. @Jamiet

    Read the article

  • Create Panoramic Photos with Windows Live Photo Gallery

    - by Matthew Guay
    Have you ever wanted to capture the view from a mountain or the full size of a building? Here’s how you can stitch multiple shots together into the perfect panoramic picture for free with Windows Live Photo Gallery.

    Getting Started

    First, make sure you have Windows Live Photo Gallery installed (link below). Live Photo Gallery is part of the Windows Live Essentials suite; you can select other programs to install along with it if you want. Make sure to uncheck setting your home page to MSN and setting your search provider to Bing if you don’t want them changed.

    Now, make sure you have pictures that will work well for a panorama. These need to be taken from the same spot, and the edges of the pictures need to overlap so the program can find where the pictures connect. Here we have taken pictures inside a building with a cell phone camera.

    Make your Panorama

    Open Live Photo Gallery and find the pictures you want to use in your panorama. It will automatically index and display all of the photos in your Pictures folder, or Library if you’re using Windows 7. If your pictures are saved elsewhere, add that folder to Photo Gallery: click File, Include a folder in the gallery, and select the correct folder at the prompt.

    Now select all of the pictures that you will use in your panorama. You can easily do this by clicking the checkbox on each picture that appears when you hover over it. Once all of the pictures are selected, click Make in the menu bar and select Create panoramic photo… Alternately, right-click on any of the pictures you’ve selected and click Create panoramic photo…

    Live Photo Gallery will analyze your photos and composite them together to create a panorama. The amount of time it takes will vary depending on the number of photos, the size of the pictures, and computer speed. When it’s finished making the panorama, you’ll be prompted to enter a file name and save the picture.

    Your new panorama picture will open as soon as it’s saved. Depending on your shots, the picture may have quite a bit of black space around the edges where each picture didn’t cover the exact same amount of area. To correct this, click Fix on the menu bar, and then select Crop Photo in the sidebar that opens. Select the center of the picture with the crop tool, and click Apply when you’ve got the selection you want. Live Photo Gallery automatically saves your picture changes, and you can revert back to the original picture if you wish.

    Now you’ve got a nice panoramic shot, trimmed and ready to print, share, and more.

    Conclusion

    Panoramic shots are great ways to capture your whole surroundings, whether it’s a sports stadium, mall, or a scenic mountain view. They can also be a great way to capture more with low-resolution cameras.
    Link: Download Windows Live Photo Gallery

    Read the article

  • A Guide to Fusion SCM at Oracle OpenWorld 2012

    - by Pam Petropoulos
    Are you attending next week’s Oracle OpenWorld 2012 conference? Then you won’t want to miss the Fusion SCM activities and customer presenters from leading companies like Boeing and Fideltronik. Below you’ll find a day-by-day guide to the various Fusion SCM sessions, demos, and activities during OpenWorld 2012, September 30 – October 4 in San Francisco, CA.

    Tuesday, October 2

    All of the Fusion SCM sessions during OpenWorld will take place in various rooms at Moscone West, a convenience you are sure to appreciate, as will your feet. The first session at 10:15 – 11:15 am (Moscone West, Room 2006), entitled “Oracle Fusion Supply Chain Management: Overview, Strategy, Customer Experiences, and Roadmap”, provides an overview of Fusion Supply Chain Management applications and will discuss Fusion SCM strategy, the future roadmap, and highlights of customer examples.

    The next session at 11:45 am – 12:45 pm (Moscone West, Room 2022), entitled “Enabling Trusted Enterprise Product Data with Oracle Fusion Product Hub”, may be the session for you if you’re struggling to achieve consistent, high-quality product data that provides significant business value. This session will discuss how Oracle Fusion Product Hub and Oracle Enterprise Data Quality can help you achieve this vision. A customer presenter from Fideltronik will share their experiences with Oracle Fusion Product Hub.

    At the end of the day, unwind at the Supply Chain Management customer reception from 6:00 – 8:00 pm at the Roe Lounge, located at 651 Howard Street. Registration is required. Click here for details.

    Wednesday, October 3

    Wednesday is a busy day with three Fusion SCM sessions on the agenda. Start your day at 10:15 am at the “Oracle Fusion Supply Chain Management: Customer Adoption and Experiences” session (Moscone West, Room 2003). This must-see session will showcase customer speakers from The Boeing Company and Fideltronik, each of whom will share their company’s experiences in selecting and implementing Fusion SCM applications.

    If you’re wondering how Fusion SCM applications can coexist with your existing Oracle applications, then you’ll want to sit in on the 3:30 pm session entitled “Oracle Fusion Supply Chain Management: Coexistence with Other Oracle Applications” (Moscone West, Room 2003).

    Stick around until 5:00 pm for the final Fusion SCM session of the day, entitled “Responsive Fulfillment with Oracle Fusion Supply Chain Management” (Moscone West, Room 2001). This session will showcase Oracle Fusion Distributed Order Orchestration and Oracle Fusion Global Order Promising and how they are changing the way companies manage order fulfillment. In addition to discussing current business challenges, product capabilities, value propositions, industry applicability, and the future roadmap, this session will also feature a customer presenter from The Boeing Company.

    Thursday, October 4

    If you are a retail customer, we highly recommend that you attend the final Fusion SCM session of the week at 12:45 pm, entitled “Multichannel Fulfillment Excellence in the Direct-to-Consumer Market” (Moscone West, Room 2024). Retailers will learn how they can transform their supply chains to meet the ever-increasing demands of buy anywhere/get anywhere cross-channel requirements with Fusion Distributed Order Orchestration and Oracle Fusion Product Hub.

    Throughout the week, you’ll also want to visit the Fusion SCM demo pods at the Demogrounds in Moscone West so you can see demos of these Fusion applications.
Visit pod W-005 for Fusion Distributed Order Orchestration, W-008 for Fusion Inventory and Cost Management, and W-006 for Fusion Product Hub. Click here for the Demogrounds map. A reminder that you can also pre-register for these sessions to secure your spot. Visit the Schedule Builder to pre-enroll for these sessions. Finally, you'll also want to check out the Fusion SCM FocusOn document which includes additional keynote and general sessions that you may want to attend throughout the week.   We look forward to seeing you in San Francisco next week.

    Read the article

  • Oracle WebCenter: Extending Oracle Applications & Oracle Fusion Applications

    - by kellsey.ruppel(at)oracle.com
    We’ve talked in previous weeks about the key goals of the new release of WebCenter: providing a modern user experience, unparalleled application integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We’ve provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we’ll focus on how the new release of Oracle WebCenter extends Oracle Applications and Fusion Applications.

    When we talk about the new release of Oracle WebCenter, we really emphasize to customers that they can leverage their existing investments and benefit from WebCenter’s complete, open, and integrated platform. To summarize what we mean here, Oracle WebCenter is:

    - COMPLETE: Comprehensive platform for portals/websites and composite applications, with integrated social/collaboration services and content management infrastructure
    - OPEN: Standards support improves reuse of existing resources and extends the value of existing systems
    - INTEGRATED: Implicit integration with Oracle Applications, Oracle Fusion Applications, and other enterprise applications

    With all the existing enterprise applications in Oracle’s application portfolio, the new release of WebCenter includes a set of pre-built catalogs that customers can use directly to get at all the portlet resources certified and available from Oracle. It provides customers with a ready-to-use view of their application resources. And since WebCenter provides seamless support for building these portlets/components in a professional IDE like JDeveloper or from within a browser, developers and business analysts can quickly assemble the information they require from their existing application investment.

    In addition, we’ve taken all the user flows and patterns that we’ve learned in building Fusion Applications and focused on making it dramatically easier to use tools to create reusable application UI components. In this way, one team in the organization using an application can share their components with other teams. And more importantly, the new team can make changes to the component without breaking the original component. When tied to enterprise applications, this capability is extremely powerful. This is what Oracle means when we talk about Enterprise Mashups. And finally, we’ve provided an innovative way to go well beyond traditional “on the glass” integration by enabling business transactions for existing applications through direct integration using activity streams.
    This delivers aggregated and “on time” delivery of information to business users based on what’s happening in the enterprise that is relevant to their particular job function. Most importantly, it ties into the personalization interactions discussed earlier so that it can help target information to you directly based on past interactions. Application integration is key to making businesses function more efficiently with these new Enterprise 2.0 technologies.

    Keep checking back this week as we share more information on how WebCenter is the most complete, open, and integrated modern user experience platform, and show key ways WebCenter can extend Oracle Applications & Oracle Fusion Applications.

    Read the article

  • St. Louis ALT.NET

    - by Brian Schroer
    I’m a huge fan of the St. Louis .NET User Group and a regular attendee of their meetings, but I always wished there was a local group that discussed more advanced .NET topics. (That’s not a criticism of the group – I appreciate that they want to serve developers with a broad range of skill levels.) That’s why I was thrilled when Nicholas Cloud started a St. Louis ALT.NET group in 2010. Here’s the “about us” statement from the group’s web site:

    The ALT.NET community is a loosely coupled, highly cohesive group of like-minded individuals who believe that the best developers do not align themselves with platforms and languages, but with principles and ideas. In 2007, David Laribee created the term "ALT.NET" to explain this "alternative" view of the Microsoft development universe – a view that challenged the "Microsoft-only" approach to software development. He distilled his thoughts into four key developer characteristics which form the basis of the ALT.NET philosophy:

    - You're the type of developer who uses what works while keeping an eye out for a better way.
    - You reach outside the mainstream to adopt the best of any community: Open Source, Agile, Java, Ruby, etc.
    - You're not content with the status quo. Things can always be better expressed, more elegant and simple, more mutable, higher quality, etc.
    - You know tools are great, but they only take you so far. It's the principles and knowledge that really matter. The best tools are those that embed the knowledge and encourage the principles (e.g. ReSharper).

    The St. Louis ALT.NET meetup group is a place where .NET developers can learn, share, and critique approaches to software development on the .NET stack. We cater to the highest common denominator, not the lowest, and want to help all St. Louis .NET developers achieve a superior level of software craftsmanship.

    I don’t see a lot of ALT.NET talk in blogs these days. The movement was harmed early on by the negative attitudes of some of its early leaders, including jerk moves like the Entity Framework “vote of no confidence”, but I do see occasional mentions of local groups like the St. Louis one. I think ALT.NET has been successful at bringing some of its ideas into the .NET world, including heavily influencing ASP.NET MVC and raising the general level of software craftsmanship for developers working on the Microsoft stack. The ideas and ideals live on; they’re just not branded as “this is ALT.NET!”

    In the past 18 months, St. Louis ALT.NET meetups have discussed topics like:

    - NHibernate
    - F# and other functional languages
    - AOP
    - CoffeeScript
    - “How Ruby Is Making Me a Stronger C# Developer”
    - Using rake for builds
    - CQRS
    - .NET dynamic programming
    - Micro web frameworks – Nancy & Jessica
    - Git

    ALT.NET doesn’t mean (to me, anyway) “alternatives to .NET”, but “alternatives for .NET”. We look at how things are done in Ruby and other languages/platforms, but always with the idea “What can I learn from this to take back to my ‘day job’ with .NET?”.

    Meetings are held at 7 PM on the fourth Wednesday of each month at the offices of Professional Employment Group. PEG is located at 999 Executive Parkway (Suite 100 – lower level) in Creve Coeur (south of Olive off of Mason Road – here’s a map). Food is not supplied (sorry if you’re a big fan of the Papa John’s Crust-Lovers’ Pizza that’s a staple of user group meetings), but attendees are encouraged to come early and bring/share beer, so that’s cool.

    Thanks to Nick for organizing, and to Professional Employment Group for lending their offices.
Please visit the meetup site for more information.

    Read the article

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples:

    cd shows all directories under my current folder:

        $ cd co<tab><tab>
        cmake/  config/  doc/  examples/  include/  programs/  sandbox/  src/  .svn/  tests/

    Commands like ls and less show all files and directories under my current folder:

        $ ls co<tab><tab>
        cmake/          config/      .cproject  Doxyfile.in  include/      programs/  README.txt  src/   tests/
        CMakeLists.txt  COPYING.txt  doc/       examples/    mainpage.dox  .project   sandbox/    .svn/

    Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed):

        $ cd ~/D<tab><tab>
        cmake/  config/  doc/  examples/  include/  programs/  sandbox/  src/  .svn/  tests/

    But it seems to be working fine for commands and variables:

        $ if<tab><tab>
        if        ifconfig  ifdown    ifnames   ifquery   ifup
        $ echo $P<tab><tab>
        $PATH  $PIPESTATUS  $PPID  $PS1  $PS2  $PS4  $PWD  $PYTHONPATH

    I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced:

        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    I've even tried manually executing that file, but it doesn't fix the problem:

        $ . /etc/bash_completion

    There was even one point in time where it was working for ls, but was not working for cd ... but I can't replicate that result now.

    Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them and afterwards completion was broken. Here is my .bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples
        #
        # Modified by Neil Traft
        #source ~/.profile

        # Allow globs to expand hidden files
        shopt -s dotglob nullglob

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines or lines starting with space in the history.
        # See bash(1) for more options
        HISTCONTROL=ignoreboth

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # If set, the pattern "**" used in a pathname expansion context will
        # match all files and zero or more directories and subdirectories.
        #shopt -s globstar

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # Color the prompt
        export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] "

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            #alias dir='dir --color=auto'
            #alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # Add an "alert" alias for long running commands.  Use like so:
        #   sleep 10; alert
        alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    Read the article

  • Industry perspectives on managing content

    - by aahluwalia
    Earlier this week I was noodling over a topic for my first blog post. My intention for this blog is to bring a practitioner's perspective on ECM to the community: to share and collaborate on best practices and approaches that address today's business problems. Reviewing my past 14 years of experience with web technologies, I wondered what topic would serve as a good "conversation starter".

    During this time, I received a call from a friend who was seeking insights on how content management applies to specific industries. She approached me because she vaguely remembered that I had worked in the health insurance industry in the recent past. She wanted me to tell her about the specific business needs of this industry. She was in for quite a surprise when she found out that I had spent the better part of a decade managing content within the health insurance industry – and I discovered a great topic for my first blog post! I offer some insights from health insurance and invite my fellow practitioners to share their insights from other industries. What does content management mean to these industries? What can solution providers be aware of when offering solutions to these industries?

    The United States health care system relies heavily on private health insurance, which is the primary source of coverage for approximately 58% of Americans. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. In the late 20th century, traditional disability insurance evolved into modern health insurance programs.

    The first thing a solution provider must be aware of about the health insurance industry is that it tends to be transaction intensive. These are the companies that manage and administer our health plans and process our claims when we visit our health care providers. It helps to keep in mind that they are in the business of delivering health insurance, not technology. You may find the mindset conservative in comparison to the IT industry; however, the health insurance industry has benefited and will continue to benefit from the efficiency that technology brings to traditionally paper-driven processes.

    We are all aware of the significant impact that the healthcare reform bill has had on the health insurance industry. Insurers are under a great deal of pressure to explore ways to reduce their administrative costs and increase operational efficiency. Overall, the administrative costs of health insurance include the insurer's cost to administer the health plan and the costs borne by employers, health-care providers, governments, and individual consumers. Inefficiencies plague health insurance, owing largely to the absence of standardized processes across the industry. To address this, industry leaders have come together to establish standards and invest in initiatives to help their healthcare provider partners transition to the next generation of healthcare technology. The move to online services and paperless explanation of benefits are some manifestations of technological advancement in health insurance. Several companies have adopted Toyota's LEAN methodology or Six Sigma principles to improve quality and reduce waste and excessive costs, thereby increasing the value of their plan offerings. A growing number of health insurance companies have transformed their business systems in the past decade alone and adopted some form of content management to reduce the costs involved in administering health plans.
The key strategy has been to convert paper documents and forms into electronic formats, automate the content development process and securely distribute content to various audiences via diverse marketing channels, including web and mobile. Enterprise content management solutions can enable document capture of claim forms, manage digital assets, integrate with Enterprise Resource Planning (ERP) and Human Capital Management (HCM) solutions, build Business Process Management (BPM) processes, define retention and disposition instructions to comply with state and federal regulations and allow eBusiness and Marketing departments to develop and deliver web content to multiple websites, mobile devices and portals. Content can be shared securely within and outside the organization using Information Rights Management.  At the end of the day, solution providers who can translate strategic goals into solutions that maximize process automation, increase ease of use and minimize IT overhead are likely to be successful in today's health insurance environment.

    Read the article

  • A (Late) Meme Monday Post: On SQLFamily

    - by Argenis
    Yesterday a member of the SQL community whom I deeply admire sent me a DM on Twitter asking whether I had done a SQLFamily post for Thomas LaRock’s (blog|@SQLRockstar) Meme Monday for November. I replied that I had not, and I regretted not having done so. A subtle DM followed my response: “Get on it, you have all week”. And indeed I must. So here’s an attempt to express some of my feelings on a community that has catapulted my career like nothing else before I embraced it.

    Nanos Gigantium Humeris Insidentes

    I stand on the shoulders of giants. My SQLFamily has given me support at all levels, professionally and personally. There is never a lack of will to help and provide advice to others in this community. And I do my best to help – on #SQLHelp on Twitter, via email, or even on the phone. I expect no retribution, because I know that when and if I do run into problems, my SQLFamily will be there for me.

    I have met some of the most humble, dedicated, and professional people in the SQL community. And some of them have pretty big titles: MVPs, MCMs, Regional Mentors, leaders of PASS, SQLCAT members, and even PMs and devs on the SQL Server team. All are welcome, and that includes YOU! I have also met some people who are rather reserved and don’t participate as much in the community, for whatever reason. Be that as it may, let it be known to all that we are a very welcoming community – heck, some of my closest friends and people I can count on in the community have completely opposite political views. We share one goal: to get better and help others get better. Even if you are a lurker, my hope is that one day you’ll decide to give back some of what you have learned.

    You have to take it to the next level

    At one of my previous jobs as an IT supervisor I used to tell my team all the time about the benefits of continuous education and self-driven learning. Shortly after I left that job, the company went bankrupt and some of my staff got laid off – some without any severance pay whatsoever. I eventually found out that some of them had a really hard time finding another job, because their skills were simply outdated. They had become stale professionals. Don’t be one of them. If you don’t take advantage of these learning resources, somebody else will – and that person will have an advantage over you when applying for that awesome job position that just opened. There’s a severe shortage of good DBAs and DB devs out there. What’s your excuse for not being excellent? Even if your knowledge of SQL Server is at the beginner level, you have no excuse not to get better. Just go to SQLUniversity and learn from there. Don’t get stale!

    Thank You

    To all of you in the SQL community who put so much time and energy into helping others, my deepest gratitude. I can’t wait to meet you all again at the next event and share our SQL stories over a pint of beer (or a shot of Jäger). Cheers! -Argenis

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general.

    Signature types

    There are several types of signatures in an assembly, all of which share a common base representation, and all of which are stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are:

    - Method definition and method reference signatures
    - Field signatures
    - Property signatures
    - Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers.
    - Generic type specifications. These represent a particular instantiation of a generic type.
    - Generic method specifications. Similarly, these represent a particular instantiation of a generic method.

    All these signatures share the same underlying mechanism to represent a type.

    Representing a type

    All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, UInt16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information.

    Firstly, custom types (ie not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well; for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicate custom modifiers.
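    [Editor's note] The compressed integer format referred to above is defined in the CLI specification (ECMA-335, II.23.2): values up to 0x7F take one byte, values up to 0x3FFF take two bytes with the top bits set to 10, and values up to 0x1FFFFFFF take four bytes with the top bits set to 110. The series works in C#; purely as an illustrative sketch of the scheme (not code from the post), the encoding looks like this in Java:

        // Sketch of the ECMA-335 II.23.2 compressed unsigned integer encoding
        // used for coded tokens and counts inside signature blobs.
        static byte[] compressUInt(long value) {
            if (value <= 0x7F) {
                // One byte: 0bbbbbbb
                return new byte[] { (byte) value };
            } else if (value <= 0x3FFF) {
                // Two bytes: 10bbbbbb bbbbbbbb
                int v = (int) value | 0x8000;
                return new byte[] { (byte) (v >>> 8), (byte) v };
            } else if (value <= 0x1FFFFFFF) {
                // Four bytes: 110bbbbb bbbbbbbb bbbbbbbb bbbbbbbb
                long v = value | 0xC0000000L;
                return new byte[] { (byte) (v >>> 24), (byte) (v >>> 16),
                                    (byte) (v >>> 8), (byte) v };
            }
            throw new IllegalArgumentException("Value too large to compress: " + value);
        }

        // For example: compressUInt(0x03)   encodes as 03
        //              compressUInt(0x3FFF) encodes as BF FF
        //              compressUInt(0x4000) encodes as C0 00 40 00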
    Some examples...

    To demonstrate, here's a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets.

    A simple field:

        int intField;
        FIELD I4

    A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type):

        T[] genArrayField
        FIELD SZARRAY VAR 0

    An instance method signature (note how the number of parameters does not include the return type):

        instance string MyMethod(MyType, int&, bool[][]);
        HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN

    A generic type instantiation:

        MyGenericType<MyType, MyStruct>
        GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct}

    For a more complicated example, in the following C# type declaration:

        GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... }

    the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob:

        GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0

    And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR):

        TResult[] GenericMethod<TInput, TResult>(TInput, System.Converter<TInput, TResult>);
        GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1

    As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now that we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures and how they are different from normal signatures.

    Read the article

  • Oracle Data Integration 12c: Perspectives of Industry Experts, Customers and Partners

    - by Irem Radzik
    As you may have seen from our recent blog posts on Oracle Data Integrator 12c and Oracle GoldenGate 12c, we are very excited to share with you the great new features the 12c release brings to Oracle’s data integration solutions. And, fortunately, we are not alone in this sentiment. Since the press announcement on October 17th, which incorporates our customers’ and experts’ testimonials, we have seen positive comments in leading technology publications and social media as well. Here are some examples:

    In CIO and PCWorld you can find Joab Jackson’s article, “Oracle Data Integrator 12c ready for real-time analysis”, in which he wrote about the tight integration between Oracle Data Integrator and Oracle GoldenGate. He noted, “Heeding the call from enterprise customers who clamor for more immediacy in their data-driven reports, Oracle has updated its data-integration software portfolio so that it can more rapidly deliver data to data warehouses and analysis applications.”

    In Integration Developer News, Vance McCarthy wrote the article “Oracle Ships ‘Future Proofs’ Integration Tools for Traditional, Cloud, Big Data, Real-Time Projects” and mentioned that “Oracle Data Integrator 12c and Oracle GoldenGate 12c sport a wide range of improvements to let devs more easily deliver data integration for cloud, analytics, big data and other new projects that leverage multiple datasets for business.”

    InformationWeek’s Doug Henschen gave a great overview of several key features, including the new flow-based UI in Oracle Data Integrator. Doug said, “Oracle Data Integrator 12c introduces a complete makeover of the job-building experience, while real-time oriented GoldenGate 12c introduces performance gains.”

    Database Trends and Applications’ article “Oracle Strengthens Data Integration with Release of Oracle Data Integrator 12c and Oracle GoldenGate 12c” highlighted the productivity aspect of the new solution, remarking that “tight integration between Oracle Data Integrator 12c and Oracle GoldenGate 12c enables developers to leverage Oracle GoldenGate’s low overhead, real-time change data capture completely within the Oracle Data Integrator Studio without additional training”.

    We are also thrilled about what our customers and partners have to say about our products and the new release. And we are equally excited to share those perspectives with you in our upcoming launch video webcast on November 12th. SolarWorld Industries America’s senior database manager, Russ Toyama, will join our executives in our studio in Redwood Shores to discuss GoldenGate’s core benefits and the new release, while Surren Partharb, CTO of Strategic Technology Services for BT, and Mark Rittman, CTO of Rittman Mead, will provide their comments via interviews conducted in the UK. This interactive panel discussion in the video webcast will unveil the new release with the expertise of our development executives and great insight from our customers and partners. In addition, our product experts will be available online to answer chat questions.

    This is really a great opportunity to learn how Oracle’s data integration offering has changed the integration and replication technology space with the new release, and established itself as the new leader. If you have not registered for this free event yet, you can do so via this link. We will run the live event at 8am PT/4pm GMT, followed by a replay of the event with live chat for Q&A at 10am PT/6pm GMT.
    The replay will be available on-demand for those who register but cannot attend either session on November 12th.

    Read the article

  • Terminal Colors Not Working

    - by Matt Fordham
    I am accessing an Ubuntu 10.04.2 LTS server via SSH from OS X. Recently the colors stopped working. I think it happened while I was installing/troubleshooting RVM, but I am not positive. In .bashrc I uncommented force_color_prompt=yes, and when I run env | grep TERM I get TERM=xterm-color. But still no colors. Any ideas? Thanks! Here is the output of cat .bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines in the history. See bash(1) for more options
        # ... or force ignoredups and ignorespace
        HISTCONTROL=ignoredups:ignorespace

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # set a fancy prompt (non-color, unless we know we "want" color)
        case "$TERM" in
            xterm-color) color_prompt=yes;;
        esac

        # uncomment for a colored prompt, if the terminal has the capability; turned
        # off by default to not distract the user: the focus in a terminal window
        # should be on the output of commands, not on the prompt
        force_color_prompt=yes

        if [ -n "$force_color_prompt" ]; then
            if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
                # We have color support; assume it's compliant with Ecma-48
                # (ISO/IEC-6429). (Lack of such support is extremely rare, and such
                # a case would tend to support setf rather than setaf.)
                color_prompt=yes
            else
                color_prompt=
            fi
        fi

        if [ "$color_prompt" = yes ]; then
            PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
        else
            PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
        fi
        unset color_prompt force_color_prompt

        # If this is an xterm set the title to user@host:dir
        case "$TERM" in
        xterm*|rxvt*)
            PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
            ;;
        *)
            ;;
        esac

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            alias dir='dir --color=auto'
            alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # some more ls aliases
        alias ll='ls -alF'
        alias la='ls -A'
        alias l='ls -CF'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

        [[ -s "/usr/local/rvm/scripts/rvm" ]] && . "/usr/local/rvm/scripts/rvm"

    Read the article

  • How to "apt-get -f install" without deleting software?

    - by Jeggy
    I know Guitar Pro doesn't support 64-bit, but I did get it to work with this command:

        jeggy@jeggy-XPS:~$ sudo dpkg --force-architecture -i GuitarPro6-rev9063.deb
        [sudo] password for jeggy:
        Selecting previously unselected package guitarpro6:i386.
        (Reading database ... 285729 files and directories currently installed.)
        Unpacking guitarpro6:i386 (from GuitarPro6-rev9063.deb) ...
        dpkg: dependency problems prevent configuration of guitarpro6:i386:
         guitarpro6:i386 depends on gksu.
        dpkg: error processing guitarpro6:i386 (--install):
         dependency problems - leaving unconfigured
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Errors were encountered while processing:
         guitarpro6:i386

    Even after I get that error, the program works perfectly fine, and updating and adding PPAs to the system works great. But when I'm trying to install some other software I get this error:

        jeggy@jeggy-XPS:~$ sudo apt-get install elinks
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         elinks : Depends: libfsplib0 (>= 0.9) but it is not going to be installed
                  Depends: liblua50 (>= 5.0.3) but it is not going to be installed
                  Depends: liblualib50 (>= 5.0.3) but it is not going to be installed
                  Depends: libtre5 but it is not going to be installed
                  Depends: elinks-data (= 0.12~pre5-7ubuntu1) but it is not going to be installed
         guitarpro6:i386 : Depends: gksu:i386 but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    And whenever I run "apt-get -f install" I get this:

        jeggy@jeggy-XPS:~$ sudo apt-get -f install
        [sudo] password for jeggy:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages were automatically installed and are no longer required:
          dconf-gsettings-backend:i386 python-levenshtein python-indicate libav-tools
          libstartup-notification0:i386 libxmuu1:i386 libavfilter-extra-2 libbabl-0.0-0
          libgegl-0.0-0 libgconf2-4:i386 python-vobject libgtk-3-0:i386 libpam-cap:i386
          python-utidylib libdconf0:i386 python-iniparse python-xmpp
          libpam-gnome-keyring:i386 libxcb-util0:i386 python-farstream
        Use 'apt-get autoremove' to remove them.
        The following packages will be REMOVED:
          guitarpro6:i386
        0 upgraded, 0 newly installed, 1 to remove and 7 not upgraded.
        1 not fully installed or removed.
        After this operation, 84,0 MB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ... 286979 files and directories currently installed.)
        Removing guitarpro6:i386 ...
        dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/updater' not empty so not removed.
        dpkg: warning: while removing guitarpro6:i386, directory '/opt/GuitarPro6/Data/Soundbanks' not empty so not removed.
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...

    And now Guitar Pro is deleted. How can I install Guitar Pro and still be able to install other software afterwards?

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care about) XPDL and process modeling.

    Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models.

    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Process Converter manual.
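    [Editor's note] Since XPDL is plain XML and the vendor support just described is delivered as XSLT mapping files, the mechanics of such a conversion can be sketched with the standard Java XSLT API. This is only an illustrative sketch under those assumptions – the file names below are hypothetical placeholders, not the actual mapping files shipped with Tutor 14:

        import java.io.File;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class XpdlConversionSketch {
            public static void main(String[] args) throws Exception {
                // Hypothetical file names, for illustration only.
                File xpdlModel = new File("BizAgiModel.xpdl");     // model exported from the modeling tool
                File mapping   = new File("vendor-mapping.xsl");   // an XSLT mapping file for that vendor
                File output    = new File("converted-output.xml"); // conversion result

                // Apply the XSLT mapping to the XPDL document.
                Transformer transformer = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(mapping));
                transformer.transform(new StreamSource(xpdlModel), new StreamResult(output));
                System.out.println("Wrote " + output.getName());
            }
        }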
    Let's take a visual look at how this works. Here is an example of a model developed in BizAgi:

    [Figure: sample process model built in BizAgi]

    This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee, as seen below.

    [Figure: model converted to a Tutor procedure, showing the task section of the procedure after conversion from an XPDL file]
    [Figure: model converted to BPA]
    [Figure: model converted to BPM]

    End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.

    References:
    - Wikipedia, XPDL
    - Workflow Management Coalition, XPDL Support and Resources
    - Oracle Business Process Converter manual, Oracle Tutor 14
    - Oracle Business Process Management 11g

    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.

    In one of my mobile HTML Web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.

    What works
    The syntax that seems to work fairly consistently, except for iOS, is this:

        sms:phonenumber?body=message

    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):

        sms:phonenumber;body=message

    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android Emulator. All four devices managed to pop up the SMS with the data prefilled.

    You can use this in a link:

        <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>

    or you can set it on window.location in JavaScript:

        window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");

    to make the window pop up directly from code. Notice that the content should be URL encoded - HTML links automatically encode, but when you assign the URL directly in code the text value should be encoded.

    Body only
    I suspect in most applications you won't know who to text, so you only want to fill the text body, not the number. That works as you'd expect by just leaving out the number - here's what the URLs look like in that case:

        sms:?body=message

    For iOS, same thing except with the ';':

        sms:;body=message

    Here's an example of the code I use to set up the SMS:

        var ua = navigator.userAgent.toLowerCase();
        var url;
        if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
            url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        else
            url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        location.href = url;

    and that also works for all the devices mentioned above. It's pretty cool that URL schemes exist to access device functionality, and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you Apple!) across devices...

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in iOS, JavaScript, HTML5

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
    I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (and one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is... tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end user to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow.

    To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we get to avoid spending weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs.

    Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering that we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machine. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.

    Read the article

  • When to use Aspect Oriented Architecture (AOA/AOD)

    When is it appropriate to use aspect-oriented architecture? I think the only honest answer to this question is that it depends on the context in which the question is being asked. There really are no hard and fast rules regarding the selection of architectural models for a project, because each model has its own benefits and drawbacks. Every system is built with unique requirements and constraints, and this context will dictate when to use one type of architecture over another, or in conjunction with others. To me, aspect-oriented architecture models should be a sub-phase in the architectural modeling and design process, especially when creating enterprise-level models. Personally, I like to use this approach to create a base architectural model that is defined by non-functional requirements and system quality attributes. This general model can then be used as a starting point for additional models because it targets all of the key business quality attributes required by the system.

    Aspect-oriented architecture is a method for modeling the non-functional requirements and quality attributes of a system, known as aspects. These models do not deal directly with specific functionality; rather, they categorize the functionality of the system. This approach allows a system to be created with a strong emphasis on separating system concerns into individual components. These cross-cutting components let a system be built with its non-functional requirements and quality attributes compartmentalized. This allows for a reduction in code, because each component maintains an aspect of the system that can be called by other aspects, and it leads to a much cleaner and smaller code base during the implementation and support of a system. Additionally, by enabling developers to build systems based on aspect-oriented design, projects will be completed faster and will be more reliable, because existing components can be shared across a system; thus, the time needed to create and test the functionality is reduced.

    Example of an effective use of Aspect-Oriented Architecture
    In my experience, aspect-oriented architecture can be very effective with large or more complex systems. Typically, these types of systems have a large number of concerns, so the act of defining them is very beneficial for reducing the system's complexity, because components can be developed to address each concern while exposing functionality to the other system components. The benefit of using the aspect-oriented approach as the starting point for a system is that it promotes communication between IT and the business: the aspect-oriented models are focused on quality attributes, so not much technical understanding is needed to follow the model. An example of this can be seen in developing a new intranet website. Common intranet concerns include:

    - Error handling
    - Security
    - Logging
    - Notifications
    - Database connectivity
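    To make this concrete, here is a minimal sketch of the logging concern factored out as a reusable aspect, written in Python with purely illustrative names (the discussion above is language-neutral); the business component stays free of logging code and simply opts in:

        import functools
        import logging

        logging.basicConfig(level=logging.INFO)

        def logged(func):
            """Logging aspect: record entry, exit, and failures of any component."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                logging.info("Entering %s", func.__name__)
                try:
                    result = func(*args, **kwargs)
                    logging.info("Leaving %s", func.__name__)
                    return result
                except Exception:
                    # The error-handling concern could hook in here as well.
                    logging.exception("Error in %s", func.__name__)
                    raise
            return wrapper

        @logged
        def publish_news(title):
            # Business logic only; the cross-cutting concern lives in the aspect.
            return "Published: " + title

        publish_news("Intranet launch")

    Each additional concern (security checks, notifications, and so on) can be layered the same way instead of being rewritten inside every component.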
    Example of a not as effective use of Aspect-Oriented Architecture
    Again in my experience, aspect-oriented architecture is not as effective with small or less complex systems. There is no need to model concerns for a system that has a limited number of them, because the added overhead would not be justified by the actual benefits of creating the aspect-oriented architecture model. Furthermore, these types of projects typically have a reduced time schedule and a limited budget. The creation of the aspect-oriented models would increase the overhead of a project and thus increase the time needed to implement the system. An example of this is a small application that polls a network share for new files and then FTPs them to a new location. The two primary concerns of this project are to monitor a network drive and to FTP files to a new location. There is no need to create an aspect model for this system because there will never be a need to share functionality between these concerns. To add to my point, this system is so small that it could be created with just a few classes, so the added layer of componentizing the concerns would be complete overkill for this situation.

    References:
    Brichau, Johan; D'Hondt, Theo. (2006). Aspect-Oriented Software Development (AOSD) - An Introduction. Retrieved from: http://www.info.ucl.ac.be/~jbrichau/courses/introductionToAOSD.pdf

    Read the article

  • SQL SERVER – Renaming Index – Index Naming Conventions

    - by pinaldave
    If you are a regular reader of this blog, you must be aware that there are two kinds of blog posts: 1) I share what I have learned recently, and 2) I share what I have learned and request your participation. Today's blog post is one where I need your opinion to make this post a good reference for the future.

    Background Story
    Recently I came across a system where users had changed the names of a few of the tables to match their new standard naming convention. The name of a table should be self-explanatory; it should convey its purpose without anyone having to open it or read the documentation. Well, this is not always possible, but it should be the goal of any database modeler. I in no way encourage table names that are too long, like 'ContainsDetailsofNewInvoices'. Maybe the name of the table should be 'Invoices', and the table should contain a column with a New/Processed bit field to indicate whether the invoice is processed (if necessary). Coming back to the original story, the database had several tables whose names were changed.

    Story Continues...
    To continue the story, let me take a simple example. There was a table with the name 'ReceivedInvoice'; it was changed to the new name 'TblInvoices'. As per their new naming standard they had to prefix every table with the letters 'Tbl' and prefix every view with the letters 'Vw'. Personally, I do not see any need for the prefix, but that issue is not up for discussion here. Now, after changing the name of the table, they faced a very interesting situation. They had a few indexes on the table whose names included the name of the table. Let us take an example.

    Old Name of Table: ReceivedInvoice
    Old Name of Index: Index_ReceivedInvoice1

    Here are the new names:

    New Name of Table: TblInvoices
    New Name of Index: ???

    Well, their dilemma was what the new naming convention of the indexes should be. Here is a quick proposal for an index naming convention. Do let me know your opinion.

    - If the index is a primary clustered index: PK_TableName
    - If the index is a non-clustered index: IX_TableName_ColumnName1_ColumnName2...
    - If the index is a unique non-clustered index: UX_TableName_ColumnName1_ColumnName2...
    - If the index is a columnstore non-clustered index: CL_TableName

    Here ColumnName is the column on which the index is created. As there can be only one primary key index and one columnstore index per table, they do not require ColumnName in the name of the index. The purpose of this new naming convention is to increase readability. When any user comes across an index, they will know the details of the index without opening its properties or definition.

    T-SQL Script to Rename Indexes
    Here is a quick T-SQL script to rename indexes:

        EXEC sp_rename N'SchemaName.TableName.IndexName', N'New_IndexName', N'INDEX';
        GO
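    As a concrete illustration of the four proposed rules (a sketch only, in Python with illustrative names; the InvoiceDate column is hypothetical), a small helper can derive the index names, which also highlights that only the IX and UX forms carry column names:

        def index_name(kind, table, columns=()):
            """Build an index name following the proposed convention."""
            prefixes = {"pk": "PK", "ix": "IX", "ux": "UX", "cl": "CL"}
            name = prefixes[kind] + "_" + table
            if kind in ("ix", "ux"):  # PK and CL are one per table, so no columns
                name += "".join("_" + c for c in columns)
            return name

        print(index_name("pk", "TblInvoices"))                   # PK_TblInvoices
        print(index_name("ix", "TblInvoices", ["InvoiceDate"]))  # IX_TblInvoices_InvoiceDate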
    Your Contribution, Please
    Well, the organization has already defined the above four guidelines, and personally I follow very similar guidelines too. I have seen many variations, like adding the prefix CL for a clustered index and NCL for a non-clustered index. I have often seen people not using the UX prefix for a unique index, but rather the generic IX prefix only. Now, do you think they have missed anything in the coding standard? Are NCI and CI prefixes required to additionally describe the index names? I once received a suggestion to even add the fill factor to the index name, which I do not recommend at all. What do you think the ideal name of an index should be, so that it explains all of its most important properties? Additionally, you are welcome to vote if you believe changing the name of an index is just a waste of time and energy.

    Note: The purpose of this blog post is to encourage all of you to participate with your ideas. I will write follow-up blog posts in the future compiling all the suggestions.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How to Make the Gnome Panels in Ubuntu Totally Transparent

    - by The Geek
    We all love transparency, since it makes your desktop so beautiful and lovely - so today we're going to show you how to apply transparency to the panels in your Ubuntu Gnome setup. It's an easy process, and here's how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz.

    Making the Gnome Panels Transparent
    Of course we all love transparency; it makes your desktop so beautiful and lovely. So you go to enable transparency in your panels: you right-click on your panel, choose Properties, go to the Background tab, and make your panel transparent. Easy, right? But instead of getting a lovely transparent panel, you often get a cluttered, ugly panel.

    Fortunately it can be easily fixed; all we need to do is edit the theme files. If your theme is one of those that came with Ubuntu, like Ambiance, then you'll have to copy it from /usr/share/themes to your own .themes directory in your Home folder. You can do so by typing the following command in the terminal:

        cp -r /usr/share/themes/theme_name ~/.themes

    Note: don't forget to substitute theme_name with the name of the theme you want to fix. If your theme is one you downloaded, then it is already in your .themes folder.

    Now open your file manager and navigate to your home folder, then go to the .themes folder. If you can't see it, you probably have the "View hidden files" option disabled; press Ctrl+H to enable it. In .themes you'll find your previously copied theme folder; enter it, then go to the gtk-2.0 folder. There you may find a file named "panel.rc", which is a configuration file that tells your panel how it should look. If you find it there, rename it to "panel.rc.bak". If you don't find it, don't panic! There's nothing wrong with your system; it's just that your theme put the panel configuration in the "gtkrc" file instead. Open this file with your favorite text editor, and at the end of the file there is a line that looks like this: include "apps/gnome-panel.rc". Comment out this line by putting a hash mark # in front of it, so that it looks like this: # include "apps/gnome-panel.rc". Save and exit the text editor.

    Now change your theme to any other one, then switch back to the one you edited. Your panel should now be properly transparent. Stay tuned for the second part in the series, where we'll cover how to change the color and fonts on your panels.

    Read the article

  • New Working Environment Starting November

    - by Jenson
    This is actually a post-dated update. After working in the private sector for many years (the two years when I was an IT trainer in a secondary school do not count, as I was working under contract with a private IT training agency), I decided to try my luck in the public sector. Fortunately, I passed the interview and was offered the position of Web Administrator in a government statutory board: the Agency for Science, Technology and Research (of Singapore).

    My previous employment, with a Japanese MNC (multinational company), was a totally new environment for me, as I had never worked for a Japanese company before. But that first experience also gave me my first nightmare of a job, and I vowed not to work for that company, or any other Japanese company, again. No doubt I had the freedom to choose the tools and methods I wished to use for my projects, but the project management was simply too messy and disordered. A lot of the time, I didn't feel that everyone was working as a team; it was more like each person pursuing his own goals. Accountability for a project was not shared; it was all lumped onto the shoulders of the developer in charge (they called the role Software Engineer). I was working on a Windows-based .NET project, and I had already voiced that it was not manageable by just one software engineer, but it seemed nobody cared, not even the person who proposed the solution to the customer. All he cared about was whether you delivered the project on time, so that he could please his customer and show senior management his good work. There are too many stories to tell, and I simply don't want to dwell on them, as they are now in the past for me.

    In my new role with the government agency, I hope to contribute my best while learning as much as I can. I will share whatever I can on technologies, methodologies, and so on, wherever I'm allowed and permitted to (and for non-work-related topics, I will gladly share without much hesitation). Thank you!

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background
    One of the more common issues we've been seeing in the field is the growing difficulty in optimizing the performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors, which present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware.

    On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads; some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior.

    We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading.

    What it is
    The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.

    How does it work?
    Using the previous example, in Solaris 11 all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take.
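    For instance, assuming the producer runs as process ID 1234 (a hypothetical PID), a command along these lines places it in the FX class at priority 60; check priocntl(1) on your release for the exact options:

        priocntl -s -c FX -m 60 -p 60 -i pid 1234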
    One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.

    Best practices
    If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.

    Read the article

  • How to autostart Kupfer in Ubuntu 13.10?

    - by JJD
    I built Kupfer from source on Ubuntu 13.10 and installed it into ~/.local. In the preferences I checked Start automatically on login. However, Kupfer does not automatically start. The desktop file in ~/.local/share/applications/kupfer.desktop looks like this:

        [Desktop Entry]
        Version=1.0
        Name=Kupfer
        Name[cs]=Kupfer
        Name[da]=Kupfer
        Name[de]=Kupfer
        Name[el]=Kupfer
        Name[es]=Kupfer
        Name[eu]=Kupfer
        Name[fr]=Kupfer
        Name[gl]=Kupfer
        Name[hu]=Kupfer
        Name[it]=Kupfer
        Name[nb]=Kupfer
        Name[nl]=Kupfer
        Name[pl]=Kupfer
        Name[pt]=Kupfer
        Name[pt_BR]=Kupfer
        Name[ru]=Kupfer
        Name[sl]=Kupfer
        Name[sv]=Kupfer
        Name[tr]=Kupfer
        Name[zh_CN]=Kupfer
        GenericName=Application Launcher
        GenericName[ca]=Llançador d'aplicació
        GenericName[cs]=Spouštěč aplikací
        GenericName[da]=Programopstarter
        GenericName[de]=Anwendungsstarter
        GenericName[es]=Lanzador de aplicaciones
        GenericName[eu]=Aplikazioen abiarazlea
        GenericName[fr]=Lanceur d'applications
        GenericName[gl]=Iniciador de aplicativos
        GenericName[hu]=Alkalmazásindító
        GenericName[it]=Lanciatore di applicazioni
        GenericName[nb]=Programstarter
        GenericName[nl]=Programmastarter
        GenericName[pl]=Aktywator programów
        GenericName[pt]=Lançador de Aplicações
        GenericName[pt_BR]=Lançador de aplicativos
        GenericName[sl]=Zaganjalnik programov
        GenericName[sv]=Programstartare
        GenericName[tr]=Uygulama Çalıştırıcı
        Comment=Convenient command and access tool for applications and documents
        Comment[cs]=Nástroj pro pohodlné provádění příkazů a přístup k aplikacím a dokumentům
        Comment[da]=Nemt kommando- og adgangsværktøj til programmer og dokumenter
        Comment[de]=Praktisches Befehls- und Zugriffswerkzeug für Anwendungen und Dokumente
        Comment[es]=Herramienta para acceso y manejo de aplicaciones y documentos
        Comment[eu]=Komando eta atzipen tresna egokia aplikazio eta dokumentuentzat
        Comment[fr]=Outil pratique pour accéder à des documents et lancer des applications
        Comment[gl]=Ferramenta cómoda para controlar e acceder a aplicativos e documentos
        Comment[hu]=Kényelmes parancs és hozzáférési eszköz az alkalmazásokhoz és dokumentumokhoz
        Comment[it]=Comodo comando e strumento di accesso per applicazioni e documenti
        Comment[nb]=Praktiskt kommandoverktøy for programmer og dokumenter
        Comment[nl]=Handige opdracht- en toegangshulp voor programma's en documenten
        Comment[pl]=Wygodne narzędzie do uruchamiania programów i otwierania dokumentów
        Comment[pt]=Ferramenta conveniente para acesso e gestão de aplicações e documentos
        Comment[pt_BR]=Uma conveniente ferramenta de comando e acesso para aplicativos e documentos
        Comment[sl]=Prikladno orodje za izvajanje ukazov in dostopa do programov in dokumentov
        Comment[sv]=Praktiskt kommandoprogram för åtkomst av program och dokument
        Icon=kupfer
        Exec=python /home/user/.local/share/kupfer/kupfer.py %F
        Type=Application
        Categories=Utility;
        StartupNotify=true
        X-UserData=$CONFIG/kupfer;$DATA/kupfer;$CACHE/kupfer
        Terminal=false
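    For reference, desktop environments read login autostart entries from ~/.config/autostart rather than from ~/.local/share/applications. A minimal sketch of what such an entry could look like for this install is below; the exact contents, including the --no-splash flag, are assumptions based on the build described above, not necessarily what Kupfer itself writes:

        [Desktop Entry]
        Type=Application
        Name=Kupfer
        Exec=python /home/user/.local/share/kupfer/kupfer.py --no-splash
        X-GNOME-Autostart-enabled=true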

    Read the article

  • How can I fix my keyboard layout?

    - by Scott Severance
    For a long time, I've had my keyboard configured to use the layout currently known as "English (international AltGr dead keys)." I like this layout because without any modifier keys, it's identical to the US English keyboard, but when I hold Right Alt I can get accented letters and other characters not available on a standard US English keyboard. In Oneiric, however, the layout is messed up. Right Alt+N produces "ñ" as expected, and another method works: Right Alt+`, E produces "è", also as expected. But there's no way to type "é", which is probably the accented letter I type the most. I expect Right Alt+A, E to do the trick, but instead of a dead key for the acute accent, it uses a method for combining characters to create the hybrid "´e". This hybrid looks like the proper "é" in some settings, but it isn't the same character and doesn't always work. (For example, in the text input box as I type this, it looks the same as the proper character, but when displayed on the site for all to see, it looks very wrong - at least on my machine.) Ditto for all other characters with an acute accent, though some are available directly as pre-composed characters: for example, Right Alt+I yields "í". How can I change the acute accent on the A key to a proper dead key? Perhaps the more general version of this is: how can I tweak my keyboard layout?

    Update
    I just tested this on my other machine, also running Oneiric, but upgraded from previous versions. I have no problems with the second machine. The problem machine was a fresh install of Oneiric, but I kept my old $HOME when I did the fresh install.

    Clarification
    Even if an answer doesn't address my specific examples, I would still accept it if it provided enough detail for me to find the layout and tweak it according to my needs.

    Major Update
    After working through the information gained from Jim C's and Chascon's helpful replies, I've learned something new: the problem isn't with the layout itself, but with the fact that the selected layout isn't being applied. When I look at the definition in /usr/share/X11/xkb/symbols/us of the layout I've been running for a long time, I find that the definition doesn't match what I get when I type. In addition, the keyboard layout dialog that's supposed to show the current layout looks different from the way the layout is defined in the file I mentioned, and matches what actually happens when I type. Following Jim C's suggestion, I created a new layout in /usr/share/X11/xkb/symbols/us containing some modifications to the layout I want (along the lines of the sketch at the end of this post). I can select my layout from the keyboard properties, and I can use it on the console following Chascon's post, but the layout I get when typing is unchanged. Apparently, there's a different layout defined somewhere that's overriding what I've set. Where is that layout hiding? This problem occurs in Unity (3D and 2D), but I was able to get the correct layout set in Xfce. In case it's relevant, this problem has occurred since I installed Oneiric fresh on this machine (though I preserved my $HOME). I don't recall whether this problem occurred before the reinstall. Also, in case it's relevant, I run iBus so I can type Korean. I have a few difficulties with iBus, but I doubt they're related.
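    For anyone looking at the same files, a custom variant in /usr/share/X11/xkb/symbols/us along these lines is one way to experiment. This is only a sketch: the variant name is made up, it assumes the stock altgr-intl variant as a base, and it remaps the A key (<AC01>) so that AltGr+A acts as an acute dead key.

        // sketch of a custom variant; goes in /usr/share/X11/xkb/symbols/us
        partial alphanumeric_keys
        xkb_symbols "altgr-intl-tweak" {
            include "us(altgr-intl)"
            name[Group1] = "English (custom AltGr dead keys)";
            // third level (AltGr) becomes dead_acute; fourth stays precomposed
            key <AC01> { [ a, A, dead_acute, aacute ] };
        };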

    Read the article

  • On The Road with the HR Community

    - by Kathryn Perry
    A guest post by Steve Boese, Director, Talent Strategy, Oracle

    One of the best ways to connect with, and to get a feel for, what is on the minds of Human Resources leaders is to get out of the office and hit the road. I've had the great honor to attend and/or present at a number of events recently, including the massive SHRM Annual Conference, the HR Florida Conference, and Taleo World in Chicago. These events, and many others, offer solution providers, talent management professionals, business leaders, and even more casual observers of the Human Resources field tremendous opportunities to connect, to share information, and to learn from each other. Attending the conferences also gives people a sense of how they can improve and enhance their skills and knowledge, learn about the latest workforce technologies, and bring new and innovative ideas back to their organizations. And sure, the parties and conference swag can be pretty nice as well!

    If you attend a few of these industry events, one of the most beneficial by-products you can emerge with - whether you are on the front lines in HR at your organization, or, as we are at Oracle, in the business of developing and delivering innovative and impactful technology solutions to our customers - is a larger sense of the big ideas and major trends, concerns, and challenges facing organizations all across the landscape, and a better understanding of how your strategies and solutions can be improved with this greater perspective.

    So what are HR folks discussing and debating? What questions and problems keep them up at night? What are the bloggers and the large community of HR social media enthusiasts buzzing about? From my perspective, some of the common themes you see over and again across the HR community break down, broadly, into three main areas:

    - Talent attraction: How can we locate, attract, recruit, and hire the best talent possible? What new strategies, approaches, and technologies can help us in this critically important area? What role do external social networks like LinkedIn, Facebook, and Twitter play in the increasingly competitive search for talent?
    - Talent retention: How can we make sure to keep that talent on our team? What engagement, development, recognition, and compensation tools can help us in this regard? How can we continue to be (or become) an employer of choice? What is our unique and compelling employer value proposition?
    - Talent empowerment: How can we put our employees in the best position to succeed? What can we do to better align our talent with the organization's mission and goals, while simultaneously providing the best and most driven-to-succeed individuals a clear path to achieve their career goals and aspirations? How can new technologies, particularly social and collaboration tools, help in this area?

    While these are the 'big themes' that I know I have seen this year, certainly they are not really new, nor are they likely to fundamentally change in the next year or two. I think the reason is that at the core of any successful enterprise is a collection of smart, interested, engaged, challenged, and empowered people. That was likely the case 10 or 20 years ago, and it will probably be the case 10 or 20 years into the future.
    But what has changed, as evidenced by simply following the Twitter backchannel for an event and by reading some of the many fantastic HR blogs out there, is that the ability of HR professionals, along with technology solution providers like Oracle, to connect, to share information more openly with each other, and to make each other better in the process (and to create new, improved, and more innovative solutions) has never been greater. And I think it is with this heretofore unprecedented level of opportunity to connect with other members of the community that HR professionals will be better equipped to help their organizations attract, retain, and empower their teams. We at Oracle HCM look forward to continuing to meet, engage, and connect with the HR community in the coming months. Until then, follow us on Twitter and Facebook.

    Read the article

< Previous Page | 155 156 157 158 159 160 161 162 163 164 165 166  | Next Page >