Search Results

Search found 29930 results on 1198 pages for 'email client'.


  • Getting developers and support to work together

    - by Matt Watson
    Agile development has ushered in the norm of rapid iterations and change within products. One of the biggest challenges for agile development is educating the rest of the company. At my last company our biggest challenge was continually training 100 employees in our customer support and training departments. It's easy to write release notes and email them to everyone, but for complex software products, release notes usually don't carry enough detail. You really have to educate your employees on the WHO, WHAT, WHERE, WHY, and WHEN of every item. If you don't do this, you end up with customer service people who know less about your product than your users do. Ever call a company and feel like you know more about their product than their customer service people do? Yeah. I'm talking about that problem.

    - WHO does the change affect?
    - WHAT was the actual change?
    - WHERE do I find the change in the product?
    - WHY was the change made? (It's hard to support something if you don't know why it was done.)
    - WHEN will the change be released?

    One thing I want to stress is the importance of the WHY something was done. For customer support people to be really good at their job, they need to understand the product and how people use it. Knowing how to enable a feature is one thing; knowing why someone would want to enable it is a whole different thing, and it is the difference between mediocre and good customer service. Another challenge is getting support people to better test and document potential bugs before escalating them to development. Trying to fix bugs without examples is always fun... NOT. They might as well say "The sky is falling, please fix it!" We need to over-train the support staff about product changes and continually stress how they document and test potential product bugs. You also have to train the sales staff and the marketing team. Then there is updating sales materials, your website, product documentation and other items that are always out of date. Every product release restarts this vicious circle of educating the rest of the company about the changes. Do we need to record a simple video explaining the changes and email it to everyone? Maybe we should use a simple online training app to help with this problem. Ultimately the struggle is taking the time to do the training, but it is time well spent: it may save you a lot of time answering questions and fixing bugs later. How do we efficiently transfer key product knowledge from developers and product owners to the rest of the company? How have you solved these issues at your company?

  • How can we plan projects realistically while accounting for support issues?

    - by Thomas Clayson
    We're having a problem at work: we're trying to schedule work so that we can assess time scales and set deadline dates. The problem is that it's difficult to plan for a project without knowing everything that's going to happen. For instance, right now we've planned all our projects through the start of December, but in that time we will have various in-house and external meetings, teleconferences, and extra work. It's all well and good to say that a project will take three weeks, but if there is a week's worth of interruption in that time then the date of completion will be pushed back a week. The problem is threefold:

    - When we schedule projects, the time scales are taken literally. If we estimate three weeks, the deadline is set for three weeks' time, the client is told, and there is no room for extension.
    - Interim work and such means that we lose productive time working on the project.
    - Sometimes clients don't have the time that we need to do the work, so they'll come to us and say they need a project done by the end of the month even when we think the work will take two months - not to mention we already have work to be doing.

    We have a Gantt chart which we are trying to fill in with all the projects we have, and we fill in timesheets, but they're not compared to the Gantt chart at all. This makes it difficult to say "Well, we scheduled three weeks for this project, but we've lost a week here, so the deadline has to move back a week." It's also not professional to keep missing deadlines we've communicated to the client. How do other people deal with this type of situation? How do you manage the planning of projects? How much "extra" time do you schedule into a project to account for non-project work that occurs during it? How do you deal with support issues, bugs, and other things you can't account for during planning?

    UPDATE: Lots of good answers, thank you.

  • Merging multiple top-level domains into a single domain

    - by user23089
    My client had multiple top-level domains, each representing an insurance program within a specific vertical. Across the sites at these alternate domains there was a 30/70 mix of duplicate vs. original content. Some of the alternate domains ranked very well for their target keyphrase groups, while others were absent from results pages. We advised the client to merge the multiple domains into their existing main domain, for usability and SEO reasons. We recently ran the merger. Here was our process:

    1. On the main domain, transfer the content such that it matches 1-for-1 the content on the various alternate domains.
    2. Set up Google Webmaster Tools on the main domain.
    3. Push the new content on the main domain live and submit a corresponding sitemap to Google.
    4. Establish 301 redirects on the alternate domains, such that each alternate domain URL points to its respective page on the main domain.

    We did this 12 days ago, and pages (previously on the alternate domains) that had ranked well on Google have now plummeted or are entirely non-existent. Did we do the right thing by merging multiple top-level domains into a single domain? Is this initial dip in rankings normal? How soon should we expect to see rankings return to normal?
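    For reference, step 4 is usually a per-URL rewrite rather than a blanket redirect to the home page. A minimal sketch, assuming Apache with mod_rewrite and hypothetical domain names (the actual server and domains are not given in the question):

        # .htaccess on an alternate domain, preserving the path 1-for-1
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?alt-insurance\.example$ [NC]
        RewriteRule ^(.*)$ http://www.main-insurance.example/$1 [R=301,L]

    Redirecting path-for-path (instead of sending everything to the main domain's root) is what lets the 1-for-1 content mapping carry ranking signals page by page.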

  • JCP EC Nomination Materials for 2012

    - by heathervc
    The nomination period of the 2012 Annual JCP EC Elections will begin at the end of September 2012. The JCP will accept self-nominations for 2 seats on what will become the merged JCP EC, starting 28 September, with the nomination period ending on Thursday, 11 October. JCP Members (JSPA 2 primary contacts) will receive messages with instructions for nominating, along with their login credentials, via email. You will need this credential information to log in and complete the nomination. The JCP EC Special Election schedule is posted online in the JCP calendar; highlights are below:

    - Nominations for elected seats: 28 September-11 October
    - Ballot (ratified and elected): 16-29 October
    - New members take office: 13 November

    The ballot with nominees for ratified and nominated seats begins on 16 October. The results will also be available on jcp.org on 30 October. If you are attending JavaOne 2012 in San Francisco, there are several events happening that you may be interested in attending, in particular the following BOF session.

    Meet the JCP Executive Committee Candidates
    Session ID: BOF6307
    Location: Hilton San Francisco - Golden Gate 3/4/5
    Date and Time: 10/2/12, 4:30 PM - 5:15 PM

    We will also be hosting a call for all of the candidates following the nomination period. The following information is required for self-nomination.

    1) Contact information/Biography
    Each EC seat is represented by two people - a primary and an alternate representative. Provide the following information for each representative:
    - Name
    - Title
    - Email Address
    - Mailing Address
    - Phone Number
    - Fax
    - A brief biography (3-5 sentences/~100-200 words) for the primary contact
    - Photograph (jpg format preferred, head shot only) for the primary contact
    Bios and photos for the current EC members are posted here:
    http://jcp.org/en/press/news/ec-feature_ME
    http://jcp.org/en/press/news/ec-feature_SE

    2) Qualification Statement
    A brief (2-3 paragraph) description of your qualifications for an EC seat; this is a Qualification Statement for the organization you represent. It should include the value and perspective you bring to the EC, your interests in the JCP program, as well as a summary of your current or planned participation in the JCP program (your entire organization) - JSRs, participation on Expert Groups, meetings/events attended, etc. This statement will appear on the ballot and should convince community members that they should vote for you, so please include relevant information about your experience within the JCP program and your investments in Java technology. A few sample qualification statements are available here.

    3) Position Paper
    One of the pieces of information we make available to the JCP membership for voting purposes is a position paper. If you would like to provide this type of information for the ballot, please prepare it in PDF format for posting. This would be more detail on the areas you would focus on during your tenure on the JCP EC. You can read more about some of the topics under discussion in the EC here, including links to JCP.Next materials.

    If you have an interest in participating in the JCP EC, please start preparing these materials now. We look forward to a successful election process.

  • iptables forward rule not working in OpenWrt

    - by Udit Gupta
    I am trying to apply some iptables forwarding rules in OpenWrt. Here is my scenario: my server has two interfaces, ath0 and br-lan. br-lan is connected to the internet and ath0 to a private network. The other machine in the network also has an ath0 that connects with this server's ath0, and they are able to ping each other. Now I want the other machine in the network to reach the internet through the server's br-lan, so I thought of using an iptables forwarding rule. Here is what I tried -

    Server:
    $ ping 1.1.1.6        # <ath0 IP of client> works fine
    $ iptables -A FORWARD -i ath0 -o br-lan -j ACCEPT
    $ /etc/init.d/firewall restart

    Client:
    $ ping 1.1.1.5        # <ath0 IP of server> works fine
    $ ping 132.245.244.60 # <br-lan IP of server> (not working)

    I am new to iptables and OpenWrt. What am I doing wrong here? Is there anything else anyone could suggest for my scenario?
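    Not from the question, but two things stand out. First, a FORWARD accept alone does not translate addresses, so replies to the private 1.1.1.x network have nowhere to route; a NAT rule is normally needed as well. A minimal sketch using the interface names above:

        # forward in both directions between the private and public interfaces
        iptables -A FORWARD -i ath0 -o br-lan -j ACCEPT
        iptables -A FORWARD -i br-lan -o ath0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        # rewrite the source address of outbound packets to br-lan's address
        iptables -t nat -A POSTROUTING -o br-lan -j MASQUERADE
        # make sure the kernel forwards packets at all
        echo 1 > /proc/sys/net/ipv4/ip_forward

    Second, on OpenWrt, `/etc/init.d/firewall restart` rebuilds the rule set from /etc/config/firewall, which typically flushes rules added by hand, so the rules should live in the firewall configuration (or a firewall.user include) rather than be followed by a restart. The client also needs the server's ath0 address (1.1.1.5) as its default gateway for this to work.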

  • [MISC GEEKERY] How To Easily Access Your Home Network From Anywhere With DDNS

    - by YatriTrivedi
    Whether you're hosting a web page or running a Minecraft server, it's a pain to keep track of IP addresses. Using a free dynamic DNS, you can turn 174.45.19.242 into mygeekydns.dyndns.org and be free from changing IPs.
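    The article's specifics aren't reproduced in this excerpt, but the usual moving part in such a setup is a small update client that tells the DNS provider your current IP. A minimal sketch of a ddclient configuration, assuming a dyndns.org-style account (credentials and hostname are placeholders):

        # /etc/ddclient.conf
        protocol=dyndns2
        use=web, web=checkip.dyndns.com     # discover the current public IP
        server=members.dyndns.org
        login=your-username
        password='your-password'
        mygeekydns.dyndns.org               # hostname to keep updated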

  • SOHO NETGEAR wireless router disconnects when downloading torrents

    - by Lirik
    I have a NETGEAR WGT624 wireless router at home which dies under heavy torrent load. I open up my torrent client and it downloads for about 5 to 10 minutes while the number of seeds keeps climbing (up to 70-80 seeds), but after that the router simply fails and I have to restart it in order to get an internet connection again. When I connect to the router directly via an ethernet cable and open up the torrent client, it seems to do fine, but when I go wireless the router stops working properly (although all the lights keep blinking as normal). Is there any way that I can fix this? New router firmware? Change some router options? Feed it a cookie? Anything?

  • PowerShell One Liner: Duplicating a folder structure in a Sharepoint document library

    - by Darren Gosbell
    I was asked by someone at work the other day, if it was possible in Sharepoint to create a set of top level folders in one document library based on the set of folders in another library. One document library has a set of top level folders that is basically a client list and we needed to create the same top level folders in another library. I knew that it was possible to open a Sharepoint document library in explorer using a UNC style path and that you could map a drive using a technique like this one: http://www.endusersharepoint.com/2007/11/16/can-i-map-a-document-library-as-a-mapped-drive/. But while explorer would let us copy the folders, it would also take all of the folder contents too, which was not what we wanted. So I figured that some sort of PowerShell script was probably the way to go and it turned out to be even easier than I thought. The following script did it in one line, so I thought I would post it here in my "online memory". :) dir "\\sharepoint\client documents" | where {$_.PSIsContainer} | % {mkdir "\\sharepoint\admin documents\$($_.Name)"} I use "dir" to get a listing from the source folder, pipe it through "where" to get only objects that are folders and then do a foreach (using the % alias) and call "mkdir".
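    For readability, here is the same one-liner expanded with full cmdlet names and comments - the behavior is identical, assuming the same UNC paths:

        # list everything in the source library, keep only the folders,
        # and create an identically named (empty) folder in the target library
        Get-ChildItem "\\sharepoint\client documents" |
            Where-Object { $_.PSIsContainer } |
            ForEach-Object { New-Item -ItemType Directory -Path "\\sharepoint\admin documents\$($_.Name)" }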

  • 7u45 Caller-Allowable-Codebase and Trusted-Library

    - by costlow
    Java 7 update 45 (October 2013) changed the interactions between JavaScript and Java Applets made through LiveConnect. The 7u45 update is a critical patch update that has also raised the security baseline, and users are strongly recommended to upgrade. Versions below the security baseline used to apply the Trusted-Library manifest attribute to call between sandboxed code and higher-privileged code. The Trusted-Library value was a Boolean true or false. Security changes for the current security baseline (7u45) introduced a different attribute, Caller-Allowable-Codebase, that indicates precisely where these LiveConnect calls can originate. For example, LiveConnect calls should not necessarily originate from 3rd party components of a web page or other DOM-based browser manipulations (pdf). Additional information about these can be located at "JAR File Manifest Attributes for Security." The workaround for end-user dialogs is described in the 7u45 release notes, which explain removing the Trusted-Library attribute for LiveConnect calls in favor of Caller-Allowable-Codebase. This provides the necessary protections (without warnings) for all users at or above the security baseline. Client installations automatically detect updates to the security baseline and prompt users to upgrade.

    Warning dialogs above or below:
    Both of these attributes should work together to support the various versions of client installations. We are aware of the issue that modifying the manifest to use the newer Caller-Allowable-Codebase causes warnings for users below the security baseline, and that not doing so displays a warning for users above it.

    Manifest Attribute             | 7u45                | 7u40 and below
    -------------------------------|---------------------|----------------
    Only Caller-Allowable-Codebase | No dialog           | Displays prompt
    Only Trusted-Library           | Displays prompt     | No dialog
    Both                           | Displays prompt (*) | No dialog

    This will be fixed in a future release so that both attributes can co-exist. The current work-around is to favor the newer Caller-Allowable-Codebase over the old Trusted-Library attribute.

    For users who need to stay below the security baseline:
    System Administrators that schedule software deployments across managed computers may consider applying a Deployment Rule Set as described in Option 1 of "What to do if your applet is blocked or warns of mixed code." System Administrators may also sign up for email notifications of Critical Patch Updates.
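    For context, these are attributes in the signed JAR's META-INF/MANIFEST.MF. A sketch of the two attributes discussed above (the attribute names are the documented ones; the codebase URL is a placeholder):

        Trusted-Library: true
        Caller-Allowable-Codebase: https://example.com

    Per the table, shipping only Caller-Allowable-Codebase is the forward-looking choice once your users are at or above the 7u45 baseline.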

  • "ASP.NET MVC 2 in Action" Ebook is complete

    - by Brian Schroer
    I just got an email notification that ASP.NET MVC 2 in Action is complete. I had signed up for the Manning Early Access Program (MEAP), which allowed me to reserve a hardcopy of the book, a PDF of the completed chapters, and the PDF of the entire version 1 (ASP.NET MVC in Action) book, all for $49.99. I'm working on my first MVC application, and it's been a big help so far. Congratulations to Jeffrey Palermo, Ben Scheirman, Jimmy Bogard, Eric Hexter, and Matthew Hinze for completing what looks like a great book!

  • Xfce gets really confused about session saving, etc

    - by Pointy
    I'm setting up a new laptop running Ubuntu 11.04. I've got the xfce4 packages all installed, which is something I've had no problems with on any of my other machines. On this new laptop, however, though I can log in and use an xfce session without any problems, logging out of a session is problematic: I click the "Log out" widget from the panel and then "Log out" from its option dialog. Then the thing just sits there, not logging out. Subsequent attempts to open the "Log out" widget fail with an error about the session manager being busy. After a minute or so, it logs out. Though I've got the "Save session" option checked in the log-out dialog, xfce makes a complete hash of the business. It does remember the applications that I had running, but it seems to forget about the window manager (!!) and the workspace configuration. I don't log in/out that often, and generally I don't care much about restarting applications, but the window manager going missing is of course pretty annoying. I like xfce because it's simple and unobtrusive and usually works pretty well. I've never experienced this, and I've got two other machines also running 11.04 with pretty much the same setup (straight Ubuntu install with xfce4 packages added). Is there some good way to diagnose stuff like this?

    Edit: Well, I nuked my session cache, did an explicit save from the session widget, and now it works. Well, it doesn't save the workspace location for each client and instead opens them all up on the first workspace, but I think that may be because, in the session, xfwm4 is the last thing in the "Client" list, so before it's started all the other clients just pile up in the first (and only) workspace. I'm still curious about how exactly it got so messed up. I certainly wasn't knowingly attempting anything fancy or unorthodox, though I may have done something fishy inadvertently.

  • Naming a class that decides to retrieve things from cache or a service + architecture evaluation

    - by Thomas Stock
    Hi, I'm a junior developer and I'm working on a pet project that I want to learn as much as possible from. I have the following scenario: There's a WCF service that I use to retrieve and update data, let's say Cars. So there's a CarWCFService with GetCars(), SaveCar(), etc. It implements the interface ICarService. This isn't the actual WCF service but more like a wrapper around it. Upon retrieving data from the service, I want to store it in local memory as a cache. I have made a class for this called CarCacheService, which also implements the interface ICarService. (I will explain later why it implements ICarService.) I don't want client code to call these implementations directly. Instead, I want to create a third implementation of ICarService that tries to read from the CarCacheService before calling the CarWCFService, stores retrieved data in the CarCacheService, etc. Three questions:

    1. How do I name this third class? I was thinking about something as simple as CarService, but that does not really say what the service does exactly.
    2. Is the naming for the other classes good? Would this naming and architecture be obvious to future programmers? This is my biggest concern.
    3. Does this architecture make sense?

    The reason that I implement ICarService on the CarCacheService is mainly that it allows me to fake the WCF service while debugging. I can store dummy data in a CarCacheService instance and pass it to the CarService, together with an(other) empty CarCacheService. If I made CarCacheService and CarWCFService public, I could let client code decide whether to drop the caching and just work directly against the CarWCFService.
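    One established answer to the naming question is to call the third class a caching decorator, e.g. CachedCarService: it decorates any ICarService with a cache. A minimal C# sketch under the question's own interface name (the method signatures, the Car type, and the cache methods are assumptions, not from the question):

        using System.Collections.Generic;

        public class Car { public int Id { get; set; } }

        public interface ICarService
        {
            IList<Car> GetCars();
            void SaveCar(Car car);
        }

        // Minimal stand-in for the question's CarCacheService
        public class CarCacheService : ICarService
        {
            private IList<Car> _cars = new List<Car>();
            public IList<Car> GetCars() { return _cars; }
            public void SaveCar(Car car) { _cars.Add(car); }
            public void Store(IList<Car> cars) { _cars = cars; }    // hypothetical
            public void Invalidate(Car car) { _cars.Remove(car); }  // hypothetical
        }

        // Decorator: tries the cache first, falls back to the wrapped
        // (WCF-backed) service, and stores what it fetched for next time.
        public class CachedCarService : ICarService
        {
            private readonly ICarService _inner;      // e.g. CarWCFService
            private readonly CarCacheService _cache;

            public CachedCarService(ICarService inner, CarCacheService cache)
            {
                _inner = inner;
                _cache = cache;
            }

            public IList<Car> GetCars()
            {
                var cached = _cache.GetCars();
                if (cached != null && cached.Count > 0)
                    return cached;

                var cars = _inner.GetCars();
                _cache.Store(cars);
                return cars;
            }

            public void SaveCar(Car car)
            {
                _inner.SaveCar(car);
                _cache.Invalidate(car);
            }
        }

    Because the decorator only depends on ICarService, the debugging trick from the question still works: pass a dummy-filled CarCacheService as the inner service and an empty one as the cache.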

  • Latest Patch Set Updates for EPM

    - by Lia Nowodworska - Oracle
    Here is the list of the latest Patch Set Updates for EPM products. The "Patch" ID links will access the patch directly for download from "My Oracle Support" (login required).

    - Patch 18490422 for Hyperion Financial Management 11.1.2.2.307
    - Patch 18685108 for Oracle Hyperion Profitability and Cost Management 11.1.2.3.501
    - Patch 18400594 for Hyperion Strategic Finance 11.1.2.3.501
    - Patch 18505475 for Hyperion Essbase Admin Services Server 11.1.2.3.501
    - Patch 18505468 for Hyperion Essbase Admin Services Console MSI 11.1.2.3.501
    - Patch 18505499 for Hyperion Essbase RTC 11.1.2.3.501
    - Patch 18505489 for Hyperion Essbase Server 11.1.2.3.501
    - Patch 18505494 for Hyperion Essbase Client 11.1.2.3.501
    - Patch 18505483 for Hyperion Essbase Client MSI 11.1.2.3.501
    - Patch 18505515 for Hyperion Analytic Provider Services 11.1.2.3.501
    - Patch 18505506 for Hyperion Essbase Studio Server 11.1.2.3.501
    - Patch 18505503 for Hyperion Essbase Studio Console MSI 11.1.2.3.501

    For the latest Enterprise Performance Management Patch Set Updates visit: Oracle Hyperion EPM Products [Doc ID 1400559.1]

  • OSX: iTunes won't play .ogg files over shared library

    - by Decavolt
    I have an OSX 10.5 box that is sharing its iTunes library and another 10.6 box connecting to that library through iTunes. It works as expected except for ogg files. Those ogg files are visible and selectable in the shared library but will not play through the share on the client machine. If I move the ogg files to the client machine they play locally just fine. They also play locally on the host machine. They just won't play through the iTunes share. Is this a bug or some other known issue? Is there a fix, short of converting all ogg files to mp3?

  • firehol (firewall) with bridge: how to filter

    - by Leon
    I have two interfaces: eth0 (public address) and lxcbr0 with 10.0.3.1. I have an LXC guest running with IP 10.0.3.10. This is my firehol config:

    version 5

    trusted_ips=`/usr/local/bin/strip_comments /etc/firehol/trusted_ips`
    trusted_servers=`/usr/local/bin/strip_comments /etc/firehol/trusted_servers`
    blacklist full `/usr/local/bin/strip_comments /etc/firehol/blacklist`

    interface lxcbr0 virtual
        policy return
        server "dhcp dns" accept

    router virtual2internet inface lxcbr0 outface eth0
        masquerade
        route all accept

    interface any world
        protection strong
        # Outgoing, these protocols are allowed to everywhere
        client "smtp pop3 dns ntp mysql icmp" accept
        # These (incoming) services are available to everyone
        server "http https smtp ftp imap imaps pop3 pop3s passiveftp" accept
        # Outgoing, these protocols are only allowed to known servers
        client "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"

    On my host I can connect only to "trusted servers" on port 80. In my guest I can connect to port 80 on every host. I assumed that firehol would block that. Is there something I can add/change so that my guest(s) inherit the rules of the eth0 interface?
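    The likely reason is that forwarded guest traffic is handled by the router section, not the interface sections, and `route all accept` lets everything through. A sketch of a tighter router section, reusing the service lists from the config above (note the assumption that in a FireHOL router, `server` matches requests entering on inface and leaving on outface, so these lines would cover guest-initiated traffic; verify the direction against the FireHOL documentation):

        router virtual2internet inface lxcbr0 outface eth0
            masquerade
            # outgoing, allowed to everywhere (mirrors the host's open client list)
            server "smtp pop3 dns ntp mysql icmp" accept
            # outgoing, only to known servers (mirrors the host's restricted list)
            server "http https webcache ftp ssh pyzor razor" accept dst "${trusted_servers}"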

  • TightVNC grey screen?

    - by gary
    I'm trying to help my mom remotely with some PC problems. She's not too computer savvy, so to keep the firewall stuff on my side of things, I tried to use a reverse VNC connection: On my machine, I set up TightVNC client in listening mode. I also opened TCP port 5500 on my router and firewall, and checked it using http://canyouseeme.org/. On her machine, I (instructed her to) set up TightVNC server, and connect to my machine's IP ('Add New Client...'). Both machines run Windows XP & TightVNC 1.3.10. The problem: When she tries to connect, a TightVNC window with grey background pops up on my machine, but I never get to see the remote desktop. It just remains grey. However, it seems that I control the mouse on the remote side (she says it's moving). I tried to reverse-connect from another machine on my LAN and it works without a problem. Any idea what the problem could be?

  • June 17, 2010 Webcast - 5 Security Tips To Reduce Cost Using Oracle Directory Services

    - by mark.wilcox
    We're delivering another webcast on June 17 (next week!): "5 Security Tips To Reduce Cost Using Oracle Directory Services". Organizations with business units spread around the world face costly and time-consuming security concerns. However, many of these companies are forced to deal with increased scrutiny and security demands while resources are reduced. This live webcast focuses on concrete ways IT organizations can use directory services to do more with less.

    Posted via email from Virtual Identity Dialogue

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need:
    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client to account for network outages
    - cross-platform access
    - client-side access from certainly Windows (probably XP upwards), possibly Linux
    - backend integrates with AD/LDAP (permission management (user/group management, ...))
    - should work reasonably well over slow WAN-links

    Another problem is that we don't really know all possible use cases here - whether people need to have concurrent access to shared files or will only be accessing their own files - so a possible solution needs to account for concurrent access and for how conflict management would look from a user's point of view. This two-year-old blog post sums up the impression I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.

    What we have tried:

    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it:
    - it's a real pain to set up
    - there are no official RHEL/CentOS packages
    - the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go
    - Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7.
    Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production.

    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. The implications:
    - Samba integrates with ADs; it's a pain, but can be done.
    - Samba solves the problem of remotely accessing the storage from Windows, but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need a HA Samba setup on top of it to maintain HA, which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before, and I could not silently failover between servers.
    - Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected.
    - It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.

    Samba + "Offline Files". After that we became a little desperate and gave Windows "offline files" a chance. We figured that having something built into the OS would reduce administrative effort, would help us blame someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silent! there is only a tiny notification in Windows Explorer in the status bar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems:
    - Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops.
    - In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary: Unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking a HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?

  • Notation of a path to files/folders/drives shared on a network in Windows?

    - by claws
    Hello. When something is shared on a network using the Windows network share option, some people use a path like \\something\something\something$. I don't know if this is the correct way or not, but as far as I remember there is a dollar sign. Can anyone please tell me: What is this notation? Where can I find more details about it? Also, what is samba server/sharing? I don't understand when people use it. Is it something related to Linux?

    EDIT: I'm a programmer. I guess this file sharing on a network in Windows uses a client-server architecture. I want to know: what is this server on Windows called? What protocol does it use? The client is, of course, our Windows explorer.exe?
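    Not part of the question, but for illustration: the trailing dollar sign marks a hidden share - it doesn't appear when browsing the server, yet it opens fine if you type the full UNC path. A sketch from an elevated Windows command prompt (the share name and path are made up):

        REM create a hidden share named "reports$" pointing at C:\Reports
        net share reports$=C:\Reports

        REM it won't show in \\SERVERNAME's browse list, but this still works:
        REM   \\SERVERNAME\reports$

    Windows also creates hidden administrative shares like C$ and ADMIN$ by default, which is where the convention comes from.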

  • WebCenter Customer Spotlight: Los Angeles Department of Water and Power

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary:
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. The goal of the project was to implement a newly designed web portal to increase customer self-service while reducing transactions via IVR, and to automate many paper-based processes into web-based workflows for its 1.6 million customers. LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content, with Oracle SOA Suite for the integration of its complex back-end systems infrastructure. The new portal has received extremely positive feedback from customers and users of the portal, as well as from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Company Overview:
    Los Angeles Department of Water and Power (LADWP) is the largest public utility company in the United States, with over 1.6 million customers. LADWP provides water and power for millions of residential and commercial customers in Southern California. LADWP also bills most of these customers for sanitation services provided by another department in the city of Los Angeles.

    Business Challenges:
    The goal of the project was to implement a newly designed web portal that is easy to navigate from a web browser and mobile devices, and that serves as the platform for surfacing internet and intranet applications at LADWP. The primary objective of the new portal was to increase customer self-service while reducing transactions via IVR and walk-up, and to automate many paper-based processes into web-based workflows for customers. This includes automation of:
    - Self Service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
    - Financial Assistance Programs
    - Customer Rebate Programs
    - Turn Off/Turn On/Transfer of Services
    - Outage Reporting
    - eNotification (SMS, email)

    Solution Deployed:
    LADWP implemented a self-service portal using Oracle WebCenter Portal and Oracle WebCenter Content. Using Oracle SOA Suite, they integrated various back-end systems, including:
    - Oracle Siebel CRM
    - IBM mainframe-based CIS
    - FILENET for document management
    - EBP Electronic Bill Payment System
    - HP Imprint System for BillXML data
    - Other systems, including outage reporting systems, SMS service, etc.

    The new portal's features include:
    - Complete graphical redesign based on best practices in UI design for high usability
    - Customer self-service implemented through My Account (Bill Pay, Payment History, Bill History, Usage Analysis, Service Request Management)
    - Financial Assistance Programs (CRM, WebCenter)
    - Customer Rebate Programs (CRM, WebCenter)
    - Turn On/Off/Transfer of services (Commercial & Residential)
    - Outage Reporting
    - eNotification (SMS, email)
    - Multilingual (English & Spanish), using WebCenter multi-language support
    - Section 508 (ADA) compliant
    - Search, using WebCenter SES (Secure Enterprise Search)
    - Distributed authorship in WebCenter Content
    - Mobile access (any mobile browser)

    Business Results:
    The new portal has received extremely positive feedback from customers and users of the portal, as well as from other utilities. At Oracle OpenWorld 2012, LADWP won the prestigious WebCenter innovation award for its innovative solution.

    Additional Information:
    - LADWP OpenWorld presentation
    - Oracle WebCenter Portal
    - Oracle WebCenter Content
    - Oracle SOA Suite

  • Connecting to freenx server: configuration error

    - by Sandeep
    I am not able to pinpoint what is missing. I have configured the freenx server (useradd, passwd, etc.). However, the server drops the connection after authentication. Please note my server is Ubuntu 10.04 and the client is a recent version. Below is the error log:

    NX> 203 NXSSH running with pid: 6016
    NX> 285 Enabling check on switch command
    NX> 285 Enabling skip of SSH config files
    NX> 285 Setting the preferred NX options
    NX> 200 Connected to address: 192.168.2.2 on port: 22
    NX> 202 Authenticating user: nx
    NX> 208 Using auth method: publickey
    HELLO NXSERVER - Version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0)
    NX> 105 hello NXCLIENT - Version 3.2.0
    NX> 134 Accepted protocol: 3.2.0
    NX> 105 SET SHELL_MODE SHELL
    NX> 105 SET AUTH_MODE PASSWORD
    NX> 105 login
    NX> 101 User: sandeep
    NX> 102 Password:
    NX> 103 Welcome to: ubuntu user: sandeep
    NX> 105 listsession --user="sandeep" --status="suspended,running" --geometry="1366x768x24+render" --type="unix-gnome"
    NX> 127 Sessions list of user 'sandeep' for reconnect:

    Display Type Session ID Options Depth Screen Status Session Name
    ------- ---------------- -------------------------------- -------- ----- -------------- ----------- ------------------------------

    NX> 148 Server capacity: not reached for user: sandeep
    NX> 105 startsession --link="lan" --backingstore="1" --encryption="1" --cache="16M" --images="64M" --shmem="1" --shpix="1" --strict="0" --composite="1" --media="0" --session="t" --type="unix-gnome" --geometry="1366x768" --client="linux" --keyboard="pc105/us" --screeninfo="1366x768x24+render"
    ssh_exchange_identification: Connection closed by remote host
    NX> 280 Exiting on signal: 15
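    This isn't from the question, but the line that fails is the second SSH connection that startsession opens, so a few generic OpenSSH checks on the server may narrow it down (standard tools only; nothing freenx-specific is assumed):

        # rapid successive connections can be dropped by MaxStartups
        grep -i maxstartups /etc/ssh/sshd_config
        # a "sshd: ALL" entry here would also explain the abrupt close
        cat /etc/hosts.deny
        # watch the SSH log while reproducing the failure
        tail -f /var/log/auth.log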

  • How can I control how many Authority RRs a DNS answer returns?

    - by Benoît
    Hello, I don't quite understand Authority RRs. Using dig to resolve some random A records, my caching DNS server sometimes returns Authority RRs and sometimes doesn't. The number and the type of the returned Authority RRs also vary a lot.
    - What controls whether Authority RRs are returned or not on a standard client query? Is it a client or server setting?
    - What controls how many Authority RRs are returned?
    - What controls the type of the Authority RRs returned? Sometimes it's multiple NS records, sometimes it's a SOA. They also often come with Additional RRs of type A for the returned NS records.
    I'm really looking for an RFC or something similar describing this. Thanks.
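    For observing the behavior, dig can print or suppress each response section explicitly; a quick sketch (the query name is an example):

        # show only the answer and authority sections of the response
        dig +noall +answer +authority www.example.com A

        # compare against an authoritative server directly, without recursion
        dig +norecurse @a.iana-servers.net www.example.com A

    Comparing the caching resolver's response with the authoritative one makes it easier to see which Authority RRs the resolver is adding, dropping, or serving from cache.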

  • How to Never Use iTunes With Your iPhone, iPad, or iPod Touch

    - by Chris Hoffman
    iTunes isn't an amazing program on Windows. There was a time when Apple device users had to plug their devices into their PCs or Macs and use iTunes for device activation, updates, and syncing, but iTunes is no longer necessary. Apple still allows you to use iTunes for these things, but you don't have to. Your iOS device can function independently from iTunes, so you should never be forced to plug it into a PC or Mac.

    Device Activation:
    When the iPad first came out, it was touted as a device that could replace full PCs and Macs for people who only needed to perform light computing tasks. Yet, to set up a new iPad, users had to plug it into a PC or Mac running iTunes and use iTunes to activate the device. This is no longer necessary. With new iPads, iPhones, and iPod Touches, you can simply go through the setup process after turning on your new device without ever having to plug it into iTunes. Just connect to a Wi-Fi or cellular data network and log in with your Apple ID when prompted. You'll still see an option that allows you to activate the device via iTunes, but this should only be necessary if you don't have a wireless Internet connection available for your device.

    Operating System Updates:
    You no longer have to use Apple's iTunes software to update to a new version of Apple's iOS operating system, either. Just open the Settings app on your device, select the General category, and tap Software Update. You'll be able to update right from your device without ever opening iTunes.

    Purchased iTunes Media:
    Apple allows you to easily access content you've purchased from the iTunes Store on any device. You don't have to connect your device to your computer and sync via iTunes. For example, you can purchase a movie from the iTunes Store. Then, without any syncing, you can open the iTunes Store app on any of your iOS devices, tap the Purchased section, and see stuff you've downloaded. You can download the content right from the store to your device. This also works for apps: apps you purchase from the App Store can be accessed in the Purchased section of the App Store on your device later. You don't have to sync apps from iTunes to your device, although iTunes still allows you to. You can even set up automatic downloads from the iTunes & App Store settings screen. This allows you to purchase content on one device and have it automatically download to your other devices without any hassle.

    Music:
    Apple allows you to re-download purchased music from the iTunes Store in the same way. However, there's a good chance you have your own music you didn't purchase from iTunes. Maybe you spent time ripping it all from your old CDs and you've been syncing it to your devices via iTunes ever since. Apple's solution for this is named iTunes Match. This feature isn't free, but it's not a bad deal at all. For $25 per year, Apple allows you to upload all your music to your iCloud account. You can then access all your music from any iPhone, iPad, or iPod Touch. You can stream all your music (perfect if you have a huge library and little storage on your device) and choose which songs you want to download to your device for offline use. When you add additional music to your computer, iTunes will notice it and upload it using iTunes Match, making it available for streaming and downloading directly from your iOS devices without any syncing. The feature is named iTunes Match because it doesn't just upload music: if Apple already has a song you upload, it will "match" your song with Apple's copy. This means you may get higher-quality versions of your songs if you ripped them from CD at a lower bitrate.

    Podcasts:
    You don't have to use iTunes to subscribe to podcasts and sync them to your devices. Even if you have a lowly iPod Touch, you can install Apple's Podcasts app from the App Store. Use it to subscribe to podcasts and configure them to automatically download directly to your device. You can use other podcast apps for this, too.

    Backups:
    You can continue backing up your device's data through iTunes, generating local backups that are stored on your computer. However, new iOS devices are configured to automatically back up their data to iCloud. This happens automatically in the background without you even having to think about it, and you can restore such backups when setting up a device simply by logging in with your Apple ID.

    Personal Data:
    In the days of PalmPilots, people would use desktop programs like iTunes to sync their email, contacts, and calendar events with their mobile devices. You shouldn't have to sync this data from your computer. Just sign into your email account (for example, a Gmail account) on your device, and iOS will automatically pull your email, contacts, and calendar events from the associated account.

    Photos:
    Rather than connecting your iOS device to your computer and syncing photos from it, you can use an app that automatically uploads your photos to a web service. Dropbox, Google+, and even Flickr all have this feature in their apps. You'll be able to access your photos from any computer and have a backup copy without any syncing required.

    You may still need to use iTunes if you want to sync local music without paying for iTunes Match, or to copy local video files to your device. Copying large local files over is the only real scenario where you'd need iTunes. If you don't need to copy such files over, you can go ahead and uninstall iTunes from your Windows PC if you like. You shouldn't need it.
