Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 253/976 | < Previous Page | 249 250 251 252 253 254 255 256 257 258 259 260  | Next Page >

  • Apparent leak in Mozilla Firefox

    - by LeopardSkinPillBoxHat
    I use Mozilla Firefox 3.6 all day, opening and closing tabs quite regularly. I have noticed that the firefox.exe process size keeps growing over time. Initially I put this down to memory fragmentation caused by opening and closing tabs, but now I suspect that there is a memory leak in one of the add-ons I have installed. The problem I am seeing is that when the process size gets to about 1.5GB in the "Mem Usage" column in Task Manager (and it gets there quite regularly), Firefox freezes up. Does anyone have any ideas about how I could diagnose whether any of the add-ons are leaking memory, or whether something else is causing this problem?
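    Not an answer to the add-on question itself, but one way to get hard numbers on the growth is to log the process size over time and compare runs with different add-ons disabled (for example, normal mode vs. safe mode). Below is a minimal sketch that assumes Python with the psutil package is available on the Windows machine; the polling interval and log file name are arbitrary.

        import time
        import psutil

        def firefox_rss_mb():
            """Sum the resident set size of all firefox.exe processes, in MB."""
            total = 0
            for proc in psutil.process_iter(['name', 'memory_info']):
                if (proc.info['name'] or '').lower() == 'firefox.exe':
                    total += proc.info['memory_info'].rss
            return total / (1024 * 1024)

        # Poll once a minute and append to a log; run it once per add-on
        # configuration and compare the growth curves.
        with open('firefox_mem.log', 'a') as log:
            while True:
                log.write('%s\t%.1f\n' % (time.strftime('%H:%M:%S'), firefox_rss_mb()))
                log.flush()
                time.sleep(60)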

    Read the article

  • XNA extending the existing Content type

    - by Maarten
    We are doing a game in XNA that reacts to music. We need to do some offline processing of the music data and therefore we need a custom type containing the Song and some additional data:

        // Project AudioGameLibrary
        namespace AudioGameLibrary
        {
            public class GameTrack
            {
                public Song Song;
                public string Extra;
            }
        }

    We've added a Content Pipeline extension:

        // Project GameTrackProcessor
        namespace GameTrackProcessor
        {
            [ContentSerializerRuntimeType("AudioGameLibrary.GameTrack, AudioGameLibrary")]
            public class GameTrackContent
            {
                public SongContent SongContent;
                public string Extra;
            }

            [ContentProcessor(DisplayName = "GameTrack Processor")]
            public class GameTrackProcessor : ContentProcessor<AudioContent, GameTrackContent>
            {
                public GameTrackProcessor() {}

                public override GameTrackContent Process(AudioContent input, ContentProcessorContext context)
                {
                    return new GameTrackContent()
                    {
                        SongContent = new SongProcessor().Process(input, context),
                        Extra = "Some extra data" // Here we can do our processing on 'input'
                    };
                }
            }
        }

    Both the Library and the Pipeline extension are added to the Game Solution and references are also added. When trying to use this extension to load "gametrack.mp3" we run into problems, however:

        // Project AudioGame
        protected override void LoadContent()
        {
            AudioGameLibrary.GameTrack gameTrack = Content.Load<AudioGameLibrary.GameTrack>("gametrack");
            MediaPlayer.Play(gameTrack.Song);
        }

    The error message:

        Error loading "gametrack". File contains Microsoft.Xna.Framework.Media.Song but trying to load as AudioGameLibrary.GameTrack.

    AudioGame contains references to both AudioGameLibrary and GameTrackProcessor. Are we maybe missing other references?

    Read the article

  • How to install compat-drivers or compat-wireless

    - by Sasho
    Could someone please explain to me in detail how to install some of these drivers. I am running Ubuntu 12.04.2 and have the infamous (as I see everywhere on the net) problem with the Atheros AR9462 wifi card. I have tried everything to fix it, and compat drivers are the only solution I haven't tried. I tried to install some random package from the kernel.org site and it couldn't make the driver (it threw some error). Then I updated the kernel to 3.10-rc7 and downloaded the latest release of compat drivers, and again the same problem occurred. I reinstalled Ubuntu 12.04.2 and now I am using the 3.5 kernel because I don't know whether rc7 is a stable version. So my question is: which compat-wireless or compat-drivers package should I download for this kernel, and what is the process of installing it? I tried with some command from the repository and it returned that it's not found. PS: I am new to Ubuntu and Linux in general, so explaining the install process at length and which driver I should install would be appreciated.

    Read the article

  • Does waterfall require code complete before QA steps in?

    - by P.Brian.Mackey
    The process used at a certain company consists of:
    1. Create a layout according to some designs made in a web page design tool (CSS, HTML).
    2. Requirements come in as "functional requirements". These consist of hundreds of lines of business directions, e.g.: Create a table on page X. Column1 has numeric data. Column1 is the client code. Column2 is a string... etc.
    3. Write code to meet all functional requirements.
    4. When all code is checked in, send it to QA (which is the BA that wrote the requirements) for inspection, bug finds and change requests.
    5. Punt back to the developer with a list of X bugs and Y change requests.
    6. While bug finds or change requests > 0, go to step 4.
    The agile development environments I have worked in allow, if not demand, early QA inspection and early user acceptance, so pieces of the program can be refined and redefined before the entire application is in place. Not only that, but this process leaves little room for error or for people changing their minds. Instead, those "change requests" come in at the last stage, when they do the most damage. And since a bug fix's cost increases over time, this is a costly way to write code. I am no waterfall expert. As described, is this waterfall being mishandled in some way? How does waterfall address my concerns?

    Read the article

  • How to debug why w3wp.exe crashes randomly?

    - by sassyboy
    On the main production server, the IIS worker process crashes sometimes. From the event viewer I get the following information:
        Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8
        Faulting module name: KERNELBASE.dll, version: 6.1.7601.17651, time stamp: 0x4e211319
        Exception code: 0xe053534f
        Fault offset: 0x0000b9bc
        Faulting process id: 0x%9
        Faulting application start time: 0x%10
        Faulting application path: %11
        Faulting module path: %12
        Report Id: %13
    This happens randomly on the prod server and I have not been able to recreate the crash anywhere else. This was happening on IIS 6, and we recently moved to Windows Server 2008 and IIS 7.5 and the crash happens there as well. How do I go about finding the root cause of this?
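    A generic first step for random w3wp.exe crashes (independent of this particular exception code) is to capture a crash dump the next time it dies and open it in WinDbg or DebugDiag. On Windows Server 2008 and later this can be done with the Windows Error Reporting LocalDumps registry keys; the sketch below sets them with Python's standard winreg module, and the dump folder C:\dumps is an assumption.

        import winreg

        # Ask Windows Error Reporting to keep full user-mode dumps for w3wp.exe crashes.
        # Key: HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\<exe name>
        # (run this as Administrator)
        key_path = r"SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe"

        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            winreg.SetValueEx(key, "DumpFolder", 0, winreg.REG_EXPAND_SZ, r"C:\dumps")
            winreg.SetValueEx(key, "DumpCount", 0, winreg.REG_DWORD, 5)   # keep the last 5 dumps
            winreg.SetValueEx(key, "DumpType", 0, winreg.REG_DWORD, 2)    # 2 = full dump

        print("LocalDumps configured; analyse the resulting .dmp with WinDbg (!analyze -v).")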

    Read the article

  • IMAP can't access virtual account sharing name with local user account

    - by chernevik
    I am setting up a postfix/dovecot mail server with virtual accounts, per the Chris Haas tutorial. I'm finding that virtual users who also have local user accounts on the mail server cannot access their email remotely via IMAP; they're simply told they cannot log in (I'm using Thunderbird for that). These same users can log in when testing IMAP locally via telnet. Virtual users without local accounts have no trouble with IMAP access from remote clients. The local user accounts have vestiges of prior efforts in their home directories: mbox files, Mail and mail directories. I've looked at the logs for clues to where the remote login process is failing (a dovecot authentication failure? confusion over where emails are stored?) but found nothing helpful. I haven't found much in the dovecot or postfix documentation that describes the IMAP login process and its expectations in enough detail to help me diagnose this. So: how do I go about identifying the problem and researching a solution?
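    To compare the failing remote path against the working local telnet session, it can help to drive the same IMAP login from the remote client machine with a small script, and then diff the dovecot log lines for a virtual-plus-local user against those for a virtual-only user. A minimal sketch using Python's standard imaplib; the host name and credentials are placeholders.

        import imaplib

        HOST = "mail.example.com"      # placeholder: your dovecot server
        USER = "someuser@example.com"  # a virtual user that also has a local account
        PASSWORD = "secret"

        imaplib.Debug = 4              # print the raw IMAP conversation to stderr

        try:
            conn = imaplib.IMAP4_SSL(HOST)   # or imaplib.IMAP4(HOST) for plain port 143
            conn.login(USER, PASSWORD)
            print("Login OK, mailboxes:", conn.list()[1])
            conn.logout()
        except imaplib.IMAP4.error as exc:
            print("IMAP login failed:", exc)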

    Read the article

  • MIA

    - by Robert May
    So, I've been missing in action on this blog for quite some time. I need to rectify that. Part of the reason I've been absent is that I haven't been able to talk about what I'm working on. A former client watches my blog rather closely, and although we accomplished many good things together, their culture is such that they really don't like people to freely express their thoughts (you'll note my blog posts stopped rather abruptly). I learned some really important lessons about Agile in the last 3 years, and I think it's worthwhile to talk about them. Sometimes things worked really well; sometimes they failed. Sometimes that failure was me, sometimes it wasn't. I understand Agile better now, and hopefully what I have to say will guide others through this process and help them understand Agile better. One thing that I've learned is that MANY companies that say they are doing Agile are NOT really doing Agile. Too often, they pick the things they like and don't follow the process long enough to know which rules they can break and which ones they shouldn't. This is probably the primary reason why Agile fails. So, expect more posts, especially as I'm flying coast to coast. :)

    Read the article

  • Memory consumption of each accept() call on server running on Windows 2008 [migrated]

    - by Atul
    I've written a simple and small server application on Windows 2008 that just accepts connections and does nothing else. I am doing a memory footprint assessment of socket calls. What I found is that each connection (after accept()) consumes at least 2.5 KB of memory. Interestingly, the memory is not consumed by the process that makes the accept() call but by an OS process. I believe it might be because of data structures being created inside the OS for each connection. Now, I have two key questions: Is it possible by any means to reduce this memory footprint (by changing any parameters, configuration etc.)? If yes, how? (Because 2.5 KB for each connection would be too much if we are planning for the server to accept millions of connections.) If my server is intended to accept a million connections, is it a good idea to use Windows 2008, or shall I switch to some other OS? Please advise.
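    For what it's worth, the measurement itself is easy to reproduce with a throwaway accept-and-hold server: open a large number of idle connections and watch how kernel memory (non-paged pool) and the process working set move in Task Manager or perfmon. A rough sketch of the server side in Python; the port number and backlog are arbitrary, and this is only a reproduction aid, not a way to reduce the footprint.

        import socket

        HOST, PORT = "0.0.0.0", 9000   # arbitrary test port

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(500)

        connections = []               # hold references so the sockets stay open
        while True:
            conn, addr = server.accept()
            connections.append(conn)   # do nothing with it, just keep it alive
            if len(connections) % 1000 == 0:
                print("%d connections held - check kernel memory counters now" % len(connections))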

    Read the article

  • Partner Webcast: Implementing on SOA - A Hands-On Technology Demonstration

    - by Thanos
    Service Oriented Architecture enables organizations to operate more efficiently and react faster to opportunities. How? By helping you create a flexible application architecture that supports greater business agility. You decide how quickly you want to move. You can start by implementing an application integration platform. Then, you can evolve your environment gradually by introducing business process management, business rules, governance and event processing. This unified but flexible approach also allows you to maximize the long-term cost reduction benefits of SOA and cloud-based applications. In this session, you dive into SOA Suite and see some of its advanced features in use. The topics covered range from adapters, automatic and custom business process correlation through service routing, rule-based and manual decisions, to error handling, compensations and extending SOA Suite with your own Java code. Agenda:
    - Service Oriented Architecture
    - The Auctions Scenario
    - Live Demo of the Oracle SOA Suite Features
    - Connecting to non service enabled technologies with adapters (Database and File adapter)
    - Orchestrating services with BPEL processes
    - Correlating processes with correlation sets
    - Mediating services
    - Service Component Architecture
    - Event Handling
    - User Notification
    - Human Workflow
    - Business Rules
    - Fault Handling patterns
    - Developing custom components with Spring and using them in SOA Suite composites
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to the start time may not receive confirmation to attend. Duration: 1 hour. Register Now. For all your questions and support requests to adopt and implement the latest Oracle technologies please contact us at [email protected]

    Read the article

  • Doesn't installing "All locales" install necessary fonts too?

    - by its_me
    I recently noticed that my browsers rendered blank text (or invisible text?) on some websites in foreign languages, like Chinese; inside.com.tw, for example. Later I learnt that by default Debian only installs one locale (the one you choose during the installation process), and others need to be installed manually. So, I ran the command:
        # dpkg-reconfigure locales
    and selected All locales from the options screen that followed, then proceeded with the rest of the process, which also includes changing the default locale (which I set to en_US.UTF-8). Then I restarted my system. I still can't read the website that I mentioned earlier (inside.com.tw). Most of the text is blank, i.e. invisible. With the page translated by Chrome to my default language (en_US), the text is visible, BUT not in the original language. Why is this happening? Does this mean that installing locales isn't actually necessary, and all I have to do is install the fonts for all supported languages? If so, how do I install all the fonts necessary for All locales? UPDATE: An easy fix is to install the unifont package, which adds support for all Unicode 5.1 characters, but the rendering is of very bad quality. So, how do I install all font packages? I notice that there are three sets: one starting with fonts-*, another with xfonts-*, and a third with ttf-*. Exactly which set should I go with, and how do I install that set of fonts? Looking for a knowledgeable solution.

    Read the article

  • Sharepoint Services 3.0: 403 Forbidden fun

    - by gravyface
    Can't get to the Administration site or the "companyweb" site itself; it was working up to a week ago. Old threads, blog posts, etc. indicate that there was an issue with a KB update, but it was resolved when .NET Framework 2.0 SP1 was deployed/installed. Running Process Monitor, I can see a lot of PATH NOT FOUND and NAME NOT FOUND results for c:\inetpub\companyweb\Default.aspx, \_themes\ice\...\foo.css, etc. for the w3wp.exe process on CreateFile or QueryOpen operations. These files do not exist in the location specified. I don't recall these files actually existing in that folder, but I believe they're "created" when requested, pulled in from Common Files/Shared or whatever, in typically-awesome Microsoft Web architecture land (</rant>). Besides reinstalling (which I'm sure will be as much fun as migrating from one server to another was), does anyone know what's going on? Google-fu has eluded me.

    Read the article

  • Using Lighttpd: apache proxy or direct connection?

    - by Halfgaar
    Hi, I'm optimizing a site by using lighttpd for the static media. I've found that a recommended solution is to use an Apache proxy that points to the lighttpd server. But does that use up an Apache thread/process per request? In my setup, I've noticed that all my processes are used up, even though they aren't doing anything CPU-wise. To free up Apache processes, I've configured lighttpd, and Munin shows that the number of processes needed has dropped significantly. However, I've set it up to connect directly to lighty, to prevent Apache workers from being occupied by serving static media. My question is: when using an Apache proxy, does that also use up a process/worker per request?
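    One way to answer this empirically on your own box is to watch Apache's scoreboard while proxied requests are flowing: if mod_proxy ties up a worker for the duration of each static-media request, BusyWorkers climbs with concurrency; if not, it stays flat. A small sketch that polls mod_status; it assumes the server-status handler is enabled at the usual /server-status URL on localhost.

        import time
        import urllib.request

        STATUS_URL = "http://localhost/server-status?auto"  # requires mod_status to be enabled

        def busy_workers():
            with urllib.request.urlopen(STATUS_URL) as resp:
                for line in resp.read().decode("utf-8").splitlines():
                    if line.startswith("BusyWorkers:"):
                        return int(line.split(":")[1])
            return -1

        # Poll while load-testing the static media URLs (e.g. with ab or siege),
        # once through the Apache proxy and once directly against lighttpd, and compare.
        for _ in range(60):
            print("busy apache workers:", busy_workers())
            time.sleep(1)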

    Read the article

  • Conditional attribute in XML - most concise solution?

    - by Lech Rzedzicki
    I am tasked with setting up conditional profiling - a method of tagging chunks of XML with an attribute, which will then be used as a conditional value to extract a subset of that XML. Have a look at another definition/example: DITA profiling. The XML is documents that are equivalent to printed books - i.e. documents that are often looked at by a human, even if indirectly. Therefore I am looking at a few requirements here:
    1. Keeping the value list brief, so it doesn't affect the readability of the document.
    2. Being able to process it with standard XML tools - a space-separated list inside an attribute is still probably fine, but I'd rather not use too much regexp for this.
    3. Being obvious for various users, including 3rd parties, as to which content goes where.
    4. Being easy to maintain going forward.
    Therefore one easy solution is a space-separated list of values in an attribute. The problem with this:
    1. As the list grows, the value of the attribute can become a bit verbose.
    2. One needs to explicitly state every value, even if it's a scenario of "this vs everything else".
    Therefore I am also looking at other approaches, such as:
    1. Using + and - modifiers, Apache htaccess style, to override the default cascading of profiling - by default all content goes everywhere and if we want to exclude a bit we just say "-kindle". It does require parsing the whole tree, is not supported by editing tools, and one needs to regexp the attribute value a bit more deeply...
    2. Using an intermediate file to define groups of values such as "other" or "non-print", as in this example in DITA. It allows concise XML as well as different groupings and values for each document, but it does create a certain level of abstraction which may make it a little less obvious for a 3rd party.
    Altogether, if you received such XML and were tasked to process it, which option would you rather receive? If you have any experience like that, even in an unrelated area such as builds, don't hesitate to comment!
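    As an illustration of requirement 2 (processable with standard XML tools), here is a rough sketch of filtering on a space-separated profiling attribute with Python's ElementTree; the attribute name "audience" and the sample values are made up for the example.

        import xml.etree.ElementTree as ET

        SAMPLE = """
        <book>
          <chapter audience="print kindle">Shared chapter</chapter>
          <chapter audience="kindle">Kindle-only chapter</chapter>
          <chapter>Unprofiled chapter (goes everywhere)</chapter>
        </book>
        """

        def keep(element, target):
            """Keep an element if it has no audience attribute, or if target is listed in it."""
            values = element.get("audience")
            return values is None or target in values.split()

        def filter_tree(parent, target):
            for child in list(parent):
                if keep(child, target):
                    filter_tree(child, target)
                else:
                    parent.remove(child)

        root = ET.fromstring(SAMPLE)
        filter_tree(root, "print")
        print(ET.tostring(root, encoding="unicode"))   # the kindle-only chapter is gone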

    Read the article

  • Add a Real-Time Earth Wallpaper App to Ubuntu with xplanetFX

    - by Asian Angel
    Are you tired of the same old wallpaper on your Ubuntu desktop? Now you can go from blah to literally spacious, real-time styled views of Earth with the xplanetFX Wallpaper App for Linux. You can conveniently access the “file type” downloads, screenshots, and jump-to links all on the front page. For our example we downloaded the .deb setup file on our system. The setup file will need to download three additional files to complete the setup process. After those are downloaded all dependencies will have been met and you can complete the installation process. Once that is done you can find xplanetFX by going to the Accessories Section of your Ubuntu Menu. This is what the main control window looks like when you start xplanetFX for the first time. You should take a few moments to look through the various tabs and tweak the settings for items like location, screen resolution, timing, auto-start, etc. When you are done click on Execute and within a few moments your desktop will have a fresh new look! Note: It took ~30 seconds for the display to activate on our system. Have fun with xplanetFX! xplanetFX Homepage [via OMG! Ubuntu!]

    Read the article

  • What are benefit/drawbacks of classifying defects during a peer code review

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad? On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may need to be trained (at least that seems to be the argument). Are there other benefits? On the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent arguing whether something is a logic error or whether it should be classified as a crash.

    Read the article

  • Converting gzip files to bzip2 efficiently

    - by sundar
    I have a bunch of gzip files that I have to convert to bzip2 every now and then. Currently, I'm using a shell script that simply gunzips each file and then bzip2s it. Though this works, it takes a lot of time to complete. Is it possible to make this process more efficient? I'm ready to take a dive and look into gunzip's and bzip2's source code if necessary, but I just want to be sure of the payoff. Is there any hope of improving the efficiency of the process?
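    The gunzip-then-bzip2 loop writes an intermediate uncompressed file to disk for every archive; a streaming re-compression avoids that, and because each file is independent the conversions can run in parallel (bzip2 compression is CPU-bound). A hedged sketch using only the Python standard library; the glob pattern and pool size are assumptions.

        import bz2
        import glob
        import gzip
        import shutil
        from multiprocessing import Pool

        def recompress(path):
            """Stream one .gz file straight into a .bz2 file, with no intermediate file on disk."""
            target = path[:-3] + ".bz2"            # foo.gz -> foo.bz2
            with gzip.open(path, "rb") as src, bz2.BZ2File(target, "wb") as dst:
                shutil.copyfileobj(src, dst, length=1024 * 1024)
            return target

        if __name__ == "__main__":
            files = glob.glob("*.gz")              # assumed: convert everything in the current directory
            with Pool(4) as pool:                  # parallelise the CPU-bound bzip2 step
                for done in pool.imap_unordered(recompress, files):
                    print("converted", done)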

    Read the article

  • Ganglia and how it communicates

    - by MikeKulls
    I'm a little confused about how Ganglia sends information around, and I have found conflicting information on the web. I would have thought that the gmond process would either send info to gmetad at a regular interval, or that gmetad would request info from the various instances of gmond. At least one online article states this is how it works, but from what I understand this is incorrect. It appears that you configure all gmond processes to send their info to a central gmond process, and gmetad requests info from that central gmond. Is that correct? In my case I have 6 servers sending their information to 1 central server. If I set gmetad to get its information from the central server then I get information from all 6 servers, all good. The weird thing is that if I point gmetad to one of the 6 servers then I still get info from all 6 servers. How is it that server 1 in my cluster is getting stats from all the other servers?

    Read the article

  • How to open the Quota Entries list for a given drive from a shortcut or command line?

    - by Indrek
    Is it possible to open the Quota Entries list for a given drive from a shortcut or the command line in Windows? And if so, how? I'm not talking about managing quota entries from the command line (fsutil quota), I'm looking for a way to get directly to the Quota Entries GUI rather than going through My Computer → right-click on a drive → Properties → Quota tab → Show Quota Settings → Quota Entries... I managed to find out (using Process Explorer) that the backend process is dllhost.exe, and the DLL in question should be dskquoui.dll, so it should be possible to run it directly with rundll32.exe, but I'm not sure of the exact syntax. Any ideas?

    Read the article

  • korgac - init.d kill script on shutdown

    - by Max Magnus
    I'm new to Ubuntu 12.04 and Linux, and my English is not the best, so I'm sorry for incorrect or stupid questions. I've installed KOrganizer, and to start the reminder when I boot the system, I added the korgac command to the autostart. This works fine. But now, every time I want to reboot or shut down my system, a message appears telling me that an unknown process is still running, so I have to kill it manually before reboot/shutdown. I knew that it is the korgac process that causes this problem, so I decided to create an init.d script. I've created a script, put it into init.d, and created 2 symbolic links: to rc0.d and to rc6.d. The name starts with K10... (I hope that is correct). K10korgac_kill:
        #! /bin/sh
        pkill korgac
        exit 0
    Unfortunately this wasn't able to resolve my problem. Maybe my script is wrong. I hope someone can help me. Thanks for your time. Max

    Read the article

  • Monitor SQL Server Replication Jobs

    - by Yaniv Etrogi
    The Replication infrastructure in SQL Server is implemented using SQL Server Agent to execute the various components involved in the form of a job (e.g. LogReader agent job, Distribution agent job, Merge agent job). SQL Server jobs execute a binary executable file which is basically C++ code. You can download all the scripts for this article here.
    SQL Server Job Schedules
    By default each job has only one schedule, which is set to Start automatically when SQL Server Agent starts. This schedule ensures that whenever the SQL Server Agent service is started all the replication components are also put into action. This is OK and makes sense, but there is one problem with this default configuration that needs improvement: if for any reason one of the components fails, it remains down in a stopped state. Unless you monitor the status of each component you will typically get to know about such a failure from a customer complaint, as a result of missing data or data that is not up to date at the subscriber level. Furthermore, having any of these components in a stopped state can lead to more severe problems if not corrected within a short time. The action required to improve on this default setting is in fact very simple. Adding a second schedule that is set as a daily recurring schedule which runs every 1 minute does the trick. SQL Server Agent's scheduler module knows how to handle overlapping schedules, so if the job is already being executed by another schedule it will not get executed again at the same time. So, in the event of a failure the failed job remains down for at most 60 seconds. Many DBAs are not aware of this capability and so search for more complex solutions, such as having an additional dedicated job running external code in VBS or another scripting language that detects replication jobs in a stopped state and starts them, but there is no need to seek such external solutions when what is needed can be accomplished by T-SQL code.
    SQL Server Jobs Status
    In addition to the 1 minute schedule we also want to ensure that key components of the replication are enabled, so I search for those components by their Category and set their status to enabled in case they are disabled, by executing the stored procedure MonitorEnableReplicationAgents. The jobs that I typically have handled are listed below, but you may want to extend this, so below is the query to return all jobs along with their category.
        SELECT category_id, name FROM msdb.dbo.syscategories ORDER BY category_id;
    Distribution Cleanup
    LogReader Agent
    Distribution Agent
    Snapshot Agent Jobs
    By default when a publication is created, a snapshot agent job also gets created with a daily schedule. I see more organizations where the snapshot agent job does not need to be executed automatically by the SQL Server Agent scheduler than organizations who need a new snapshot generated automatically. To assure this setting is in place I created the stored procedure MonitorSnapshotAgentsSchedules, which disables snapshot agent jobs and also deletes the job schedule. It is worth mentioning that when the publication property immediate_sync is turned off, the snapshot files are not created when the Snapshot agent is executed by the job. You control this property when the publication is created with a parameter called @immediate_sync passed to sp_addpublication, and for an existing publication you can use sp_changepublication.
    Implementation
    The scripts assume the existence of a database named PerfDB.
    Steps:
    1. Run the scripts to create the stored procedures in the PerfDB database.
    2. Create a job that executes the stored procedures every hour.
        -- Verify that the 1_Minute schedule exists.
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorReplicationAgentsSchedules @CategoryId = 13; /* LogReader */
        -- Verify all replication agents are enabled.
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 10; /* Distribution */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 13; /* LogReader */
        EXEC PerfDB.dbo.MonitorEnableReplicationAgents @CategoryId = 11; /* Distribution clean up */
        -- Verify that Snapshot agents are disabled and have no schedule.
        EXEC PerfDB.dbo.MonitorSnapshotAgentsSchedules;
    Want to read more about replication? Check out my replication posts at my blog.

    Read the article

  • Executing Secondary Applications

    - by JooBlow
    I have an application I am attempting to make "portable". The application contains a lot of secondary utility functions that I would like to execute on external files (from the app). I tried adding them in the build process, but I didn't get any "executables" for them (just the main one and a few others). Is there a way to get these to execute? They are basically command line utility functions to process some text files, but they use large files in the distribution and are also used by the main application. Thanks

    Read the article

  • agile as our first project management methodology [closed]

    - by Hasan Khan
    We are a small web development company that has till now been working on client projects. We employed little to no project management, and that has cost us a lot. We've used only the barest of tools (wireframing, prototyping etc.), but no formal project management process has been put into place. We've learnt from our mistakes and want to prevent them from happening in the future. Also, we are looking to develop our own products, and we understand that putting a proper project management paradigm in place will help. After a lot of research, we've sort of settled on agile for a few reasons: agile seems to scale well with team size - our team is small right now, we hope to grow, and agile seems to be a process that we can put in place now and grow with; and agile will help us with customers who just can't seem to make up their minds and keep changing requirements. We'd appreciate the community's thoughts on this. Is this a correct way to think? Will agile be a good system to put into place where there has been none till now? Are there any resources that may help us in our position? Pretty much all of the resources that we've found start by comparing agile to x (where x = any management methodology), explaining why it's better than x and how agile can be implemented in place of x. We're looking for resources that can help us out in our particular situation. Thanks for all your help!

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Richard Lefebvre
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools." Read more: http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html "Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." The post goes on to quote a Forrester analyst stating the following: "'There's room for any process-driven application to run more efficiently, especially if they're socially enabled,' said Rob Koplowitz, VP and principal analyst at Forrester Research. 'It takes the human part of the process not generally captured today to provide better access to content, information and collective actions.' Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform." With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • Oracle Partners Delivering Business Transformations With Oracle WebCenter

    - by Brian Dirking
    This week we've been discussing a new online event, "Transform Your Business by Connecting People, Processes, and Content." This event will include a number of Oracle partners presenting on their successes with transforming their customers by connecting people, processes, and content:
    - Deloitte - Collaboration and Web 2.0 Technologies in Supporting Healthcare, delivered by Mike Matthews, the Canadian Healthcare partner and mandate partner on Canadian Partnership Against Cancer at Deloitte
    - Infosys - Leverage Enterprise 2.0 and SOA Paradigms in Building the Next Generation Business Platforms, delivered by Rizwan MK, who heads Infosys' Oracle technology delivery business unit, defining and delivering strategic business and technology solutions to Infosys clients involving Oracle applications
    - Capgemini - Simplifying the workflow process for work order management in the utility market, delivered by Léon Smiers, a Solution Architect for Capgemini
    - Wipro - Oracle BPM in Banking and Financial Services - Wipro's Technology and Implementation Expertise, delivered by Gopalakrishna Bylahalli, who is responsible for the Transformation Practice in Wipro, which includes Business Process Transformation, Application Transformation and Information connect
    In Mike Matthews' session, one thing he will explore is how CPAC has brought together an informational website and a community. CPAC has implemented Oracle WebCenter, and as part of that implementation is providing a community where people can make connections and share their stories. This community is part of the CPAC website, which provides information of all types on cancer. This makes CPAC a one-stop shop for the most up-to-date information in Canada.

    Read the article

  • Warnings When Undo Isn't Possible

    - by ultan o'broin
    Enjoyed this post, Never Use a Warning When You Mean Undo, by Aza Raskin. It makes sense never to warn users if an undo option is possible. The examples given are from the web space. Here's the conclusion: "Warnings cause us to lose our work, to mistrust our computers, and to blame ourselves. A simple but foolproof design methodology solves the problem: 'Never use a warning when you mean undo.' And when a user is deleting their work, you always mean undo." However, in enterprise apps you may find that an undo option isn't technically possible or desirable. Objects may be shared or part of a flow elsewhere, or undoing something committed to the database (a rollback, I guess) may not be feasible if it becomes locked by another process. Plus, what constitutes user ownership of objects isn't always clear to users. The implications of delete (and other) actions need to be clearly communicated in advance. Really, warnings are important in the enterprise space. Data has a very high value, and users can perform a wide variety of actions that may risk that data, not always within the application itself (at browser level, for example). That said, throwing warnings all over the place when an undo option is possible is annoying. Instead, treat warnings with respect. When no undo option is possible, use warning messages to communicate potentially dangerous or irrecoverable actions, or the downstream consequences of user actions on the process or task flow. Force the user to respond to a warning message by using a modal dialog with clearly labeled action buttons. Here's a couple of examples. A great article that got me thinking. Let's see more like that. Let's not forget there are more types of messages than just error messages. User assistance and user experience professionals need to understand when best to use confirmation, information, and warning types too!

    Read the article
