Search Results

Search found 21309 results on 853 pages for 'electronic leasing process'.


  • Need a really simple client management script to deliver graphics and revisions, please help?

    - by Mark R
    I am looking for a very simple client management script. The process flow of the script should be: the client orders (PayPal etc.) while giving specs on what they need; they are given login details and thanked for their order; the backend for them consists of a two-way communication where they ask questions and we answer; we also upload the graphics there, where they either accept them or ask for a revision; process complete. Now I cannot for the life of me find something as simple as this. It seems all the scripts out there are way too complicated. Does anyone know of one I can use to do this?

    Read the article

  • Plans for Java 7 and E-Business Suite Certification

    - by Steven Chan (Oracle Development)
    As of June 2012, Java 7 has not yet been certified with Oracle E-Business Suite. EBS customers should continue to run JRE 6 on their Windows end-user desktops, and JDK 6 on their EBS servers. If a search engine has brought you to this article, please check the Certifications summary for our latest certified Java release.

    Our plans for certifying Java 7 for the E-Business Suite: We plan on releasing the Java 7 certification for E-Business Suite customers in two phases. Phase 1: certify JRE 7 for Windows end-user desktops. Phase 2: certify JDK 7 for server-based components.

    When will Java 7 be certified with EBS? We're working on the first phase now. As usual, I cannot discuss release dates here, but you can monitor or subscribe to this blog for updates.

    Current known issues with JRE 7 in EBS environments: Our current testing shows that there are known incompatibilities between JRE 7 and the Forms-invocation process in EBS environments. We have been working directly with the Java division on this for a while now. In the meantime, EBS customers should not deploy JRE 7 to their end-user Windows desktop clients; you should stick with JRE 1.6 for now.

    But wait, you previously said... Older JRE certification announcements stated: "Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and higher. We test all new JRE releases in parallel with the JRE development process, so all JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE releases to your EBS users' desktops." Yes, this is true. That standard boilerplate text was written before JRE 7 was released, so there was no possibility of misunderstanding. With the availability of JRE 7, the boilerplate needs to be revised to read: "Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops."

    References: Recommended Browsers for Oracle Applications 11i (MetaLink Note 285218.1); Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (MetaLink Note 290807.1); Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1); Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1).

    Related Articles: Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23; Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009.

    Read the article

  • BPM Parallel Multi Instance sub processes by Niall Commiskey

    - by JuergenKress
    Here is a very simple scenario: an order with lines is processed. The OrderProcess accepts an order with its attendant lines. The Fulfillment process is called for each order line. We do not have many order lines, and the processing is simple, so we run this in parallel. Let's look at the definition of the Multi Instance sub-process - Read the full article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
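
    For readers who want the shape of this pattern outside a BPM engine, here is a minimal sketch of the same fan-out/join idea (run one fulfillment task per order line in parallel, then collect the results), written in C++ with std::async. The OrderLine type and fulfill() body are hypothetical stand-ins, not anything from the article.

    ```cpp
    #include <future>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for one order line.
    struct OrderLine {
        int id;
        std::string sku;
    };

    // Hypothetical stand-in for the Fulfillment sub-process.
    bool fulfill(const OrderLine& line) {
        // ... inventory, shipping, etc. would happen here ...
        return true;
    }

    int main() {
        std::vector<OrderLine> lines = {{1, "A"}, {2, "B"}, {3, "C"}};

        // Fan out: one parallel task per order line.
        std::vector<std::future<bool>> results;
        for (const auto& line : lines)
            results.push_back(std::async(std::launch::async, fulfill, line));

        // Join: the order completes only when every line is fulfilled.
        for (std::size_t i = 0; i < results.size(); ++i)
            std::cout << "line " << lines[i].id << ": "
                      << (results[i].get() ? "fulfilled" : "failed") << '\n';
    }
    ```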

    Read the article

  • Best practices for Persona development

    - by user12277104
    Over the years I have created a lot of personas, co-authored a new method for creating them, and given talks about best practices for creating your own, so when I saw a call for participation in the OpenPersonas project, I was intrigued. While Jeremy and Steve were calling for persona content, that wasn't something I could contribute -- most of the personas I've created have been proprietary and specific to particular domains of my employers. However, I felt there were a few things I could contribute: a process, a list of interview questions, and what information good personas should contain. The first item, my process for creating data-driven personas, I've posted as a list of best practices. My next post will be the list of 15 interview questions I use to guide the conversations with people whose data will become the personas. The last thing I'll share is a list of items that need to be part of any good persona artifact -- and if I have time, I'll mock them up in a template or two.

    Read the article

  • Fast Data Executive Round Table FY14 event kit

    - by JuergenKress
    We are very interested in running joint marketing events with you, our partners! At our SOA Community Workspace (SOA Community membership required) you can find a new Fast Data Executive Round Table FY14 event kit. This event is designed for senior IT staff and executives, for the purposes of education, awareness, and thought leadership around the subject of big data, and a specific flavor of big data - Fast Data - that has begun to spark the imagination of many Oracle customers.

    Fast Data is not new. It's a term that was coined initially by Ovum's Tony Baer as a way to represent the collection of 'high velocity' solutions with respect to big data. For Oracle, the Fast Data campaign in FY13 began as a way to tie a broader set of solutions together (SOA/Business Process Management, Data Integration, and Business Analytics) under a set of use cases focused on real-time, high-velocity data. It has helped to give Oracle a leap-frog advantage over many of the niche integration vendors (e.g. Informatica, Pega, Tibco, Software AG, Terracotta) who haven't been able to address these types of end-to-end use cases, which rely on the combination of filtering, in-memory data processing, correlation, real-time data movement and transformation, end-to-end analytics, and business process management. Only Oracle can address all the dimensions of fast data, and only Oracle can provide a set of engineered solutions to address this space.

    This event is designed to continue that thought-leadership momentum and to raise awareness of what Oracle Fast Data solutions are designed to solve. It is designed to highlight real customer solutions and articulate the business benefits that fast data can address. This is not an event that gets into the esoteric technical standards of Hadoop, NoSQL, and in-memory data grids; it is an event that instead gets into the heart of the business problems that big data has left unaddressed and charts the path for next steps in fast data. Get the Fast Data Executive Round Table FY14 event kit here.

    We can support such events with: Oracle speakers (contact your partner manager); marketing budget (contact your A&C marketing manager); event location (free use of Oracle Customer Visitor Centers conference rooms); promotion of your event at events.oracle.com: http://tinyurl.com/eventspecialized; and an e-blast inviting customers to your event (contact your A&C marketing manager). For additional marketing kits, e.g. for Business Process Management, please visit our SOA Community Workspace.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Premera Blue Cross Deploys PeopleSoft Enterprise 9.1 Human Capital Management, Financial Management, Enterprise Learning Management and Enterprise Portal Solutions

    - by jay.richey
    Optimum Solutions Implements Oracle's PeopleSoft Enterprise 9.1 at Premera Blue Cross. Premera chose to upgrade to the latest version of PeopleSoft to help the company achieve its strategic goals, which include building and maintaining a skilled employee team that enables the company to deliver highly efficient and valuable service to plan subscribers, sponsors, and healthcare providers. Its decision was influenced by the key capabilities in PeopleSoft Talent Management 9.1, as well as the common technology enhancements of the PeopleSoft PeopleTools 8.50 toolset across all business process areas, which have helped Premera to maximize process automation, increase ease of use, and minimize long-term IT support overhead. Read more...

    Read the article

  • Innovation for Retailers

    - by David Dorf
    One of my main objectives for this blog is to point out emerging technologies and how they might apply to the retail industry.  But ideas are just the beginning; retailers either have to rely on vendors or have their own lab to explore these ideas and see which ones work.  (A healthy dose of both is probably the best solution.)  The Nordstrom Innovation Lab is a fine example of dedicating resources to cultivate ideas and test prototypes.  The video below, from 2011, is a case study in which the team builds an iPad app that helps customers purchase sunglasses in the store.  Customers take pictures of themselves wearing different sunglasses, then can do side-by-side comparisons.

    There are a few interesting takeaways from their process.  First, they work in the store alongside employees and customers.  There's no concept of documenting all the requirements and then building the product.  Instead, they work closely with those who will be using the app in order to fully understand what's needed.  When they find an issue, they change the software onsite and try again.  This iterative prototyping ensures their product hits the mark.  It feels like Extreme Programming, if you recall that movement.

    Second, they time-boxed the project to one week.  Either it works or it doesn't, and either way they've only expended a week's worth of resources.  Innovation always entails failure, and those that succeed are often good at detecting failure quickly and then adjusting.  Fail fast and fail often.

    Third, it's not always about technology.  I was impressed they used paper designs to walk through user stories and help understand the needs of the customer.  Pen and paper is the innovator's most powerful tool.

    Our Retail Applied Research (RAR) team uses some of these concepts in our development process.  (Calling it a process is probably overkill.)  We try to give life to concepts quickly so the rest of the organization can help us decide if we're heading in the right direction.  It takes many failures before finding a successful product.

    Read the article

  • Which parallel pattern to use?

    - by Wim Van Houts
    I need to write a server application that fetches mails from different mail servers/mailboxes and then needs to process/analyze these mails. Traditionally, I would do this multi-threaded, launching a thread for fetching mails (or maybe one per mailbox) and then processing the mails. We are moving more and more to servers with 8+ cores, so I would like to make use of these cores as much as possible (and not use one at 100% while leaving the seven others untouched). So conceptually, as an example, it would be nice if I could write the application in such a way that two cores are "continuously" fetching emails and four cores are "continuously" processing/analyzing the emails (since processing and analyzing mails is more CPU-intensive than fetching them). This seems like a good concept, but after studying some parallel patterns, I'm not really sure how it is best implemented; none of the patterns really fit. I'm working in VS2012, native C++, but I guess from a design point of view this does not really matter, and some pointers on how to organize this would be great!
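
    What's being described is essentially a producer-consumer pipeline: a shared queue between a fetching stage and a processing stage, each with its own fixed pool of threads. Below is a minimal C++11 sketch of that shape; the mail contents and the fetch/analyze bodies are placeholders, and the 2/4 split mirrors the example in the question rather than a measured tuning.

    ```cpp
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    std::queue<std::string> mails;   // fetched mails awaiting analysis
    std::mutex m;
    std::condition_variable cv;
    bool done = false;               // set once all fetchers have finished

    // Producer stage: I/O-bound, so a small pool suffices.
    void fetcher(int id) {
        for (int i = 0; i < 5; ++i) {   // placeholder for a POP3/IMAP loop
            std::string mail = "mail-" + std::to_string(id) + "-" + std::to_string(i);
            { std::lock_guard<std::mutex> lk(m); mails.push(mail); }
            cv.notify_one();
        }
    }

    // Consumer stage: the CPU-heavy analysis, sized to the spare cores.
    void processor() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !mails.empty() || done; });
            if (mails.empty()) return;  // drained and no more producers
            std::string mail = mails.front();
            mails.pop();
            lk.unlock();
            // ... parse/analyze the mail here ...
            std::cout << "processed " << mail << '\n';
        }
    }

    int main() {
        std::vector<std::thread> fetchers, workers;
        for (int i = 0; i < 2; ++i) fetchers.emplace_back(fetcher, i);  // 2 fetchers
        for (int i = 0; i < 4; ++i) workers.emplace_back(processor);    // 4 analyzers
        for (auto& t : fetchers) t.join();
        { std::lock_guard<std::mutex> lk(m); done = true; }
        cv.notify_all();
        for (auto& t : workers) t.join();
    }
    ```

    The per-stage thread counts are the tuning knob: size the processing pool to the core count and keep the fetch pool small, since fetching is mostly waiting on the network.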

    Read the article

  • Polipo dpkg failure problem [closed]

    - by ICXC
    Possible Duplicate: polipo E: Sub-process /usr/bin/dpkg returned an error code (1) This is the error I get each time I try to install polipo with the command apt-get install polipo or when I try to install it from Ubuntu software center: Starting polipo: Couldn't open config file /etc/polipo/config: 2. invoke-rc.d: initscript polipo, action "start" failed. dpkg: error processing polipo (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: polipo Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up polipo (1.0.4.1-1.1) ... Starting polipo: Couldn't open config file /etc/polipo/config: 2. invoke-rc.d: initscript polipo, action "start" failed. dpkg: error processing polipo (--configure): subprocess installed post-installation script returned error exit status 1 How can I solve this?

    Read the article

  • Azure website that talks to third party services

    - by Andy Frank
    I have a website that crawls data from many third-party services when a user browses to a webpage. This can be really slow, because I hit third-party servers and process the returned data before showing it to the user. I am hosting the website on Azure (shared mode) and am thinking about improving my implementation. Here is what I am thinking: run a service that crawls data from the third-party services, processes it, and then stores it in a database; when a user browses to my site, the site pulls the data from the database and displays it. But the above solution is not clear to me. Should I have a normal service or a WCF service? If a WCF service, should the website talk to the database or to the WCF service (which can access the data from the database)? If a normal service, how can I deploy it on Azure?
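
    Whatever the Azure packaging turns out to be, the heart of the proposed design is the decoupling itself: the crawler writes on its own schedule, and the page's read path never touches a third-party server. A tiny C++ sketch of that split, with an in-memory map standing in for the database and fetchFromThirdParty() as a hypothetical placeholder for the slow external call:

    ```cpp
    #include <chrono>
    #include <iostream>
    #include <map>
    #include <mutex>
    #include <string>
    #include <thread>

    std::map<std::string, std::string> db;  // stand-in for the real database
    std::mutex dbMutex;

    // Hypothetical placeholder for a slow third-party service call.
    std::string fetchFromThirdParty(const std::string& service) {
        std::this_thread::sleep_for(std::chrono::milliseconds(300));
        return "data from " + service;
    }

    // Background service: crawls and stores independently of page requests.
    void crawler() {
        for (const std::string service : {"svcA", "svcB"}) {
            std::string data = fetchFromThirdParty(service);  // the slow part
            std::lock_guard<std::mutex> lk(dbMutex);
            db[service] = data;                               // processed + stored
        }
    }

    // Request path: reads only the database, so it stays fast.
    std::string handlePageRequest(const std::string& service) {
        std::lock_guard<std::mutex> lk(dbMutex);
        auto it = db.find(service);
        return it == db.end() ? "not crawled yet" : it->second;
    }

    int main() {
        std::thread t(crawler);   // in production: a separate process/role
        t.join();
        std::cout << handlePageRequest("svcA") << '\n';
    }
    ```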

    Read the article

  • I tried to install dhcp3-server, but the /etc/init.d/dhcp3-server file is missing. What's going on?

    - by hydroparadise
    I've found several how-tos that essentially go through the same process of setting up DHCP. Here's a link that has the list of steps needed to install and set up DHCP. I follow all the steps and find that the sudo /etc/init.d/dhcp3-server restart portion returns command not found, only to reveal that the file doesn't even exist. I've installed (sudo apt-get install dhcp3-server) and uninstalled (sudo apt-get remove dhcp3-server) two or three times. I'm pretty sure my config files are good, because I've checked them three times, but I don't think I've gotten to the point of being able to see them in action: I can't control the process. Why is the file missing? How is it supposed to get there? Help?

    Read the article

  • Embedding Pygame into C++ [closed]

    - by Pendertuga
    If embedding Pygame into C++ so that a game becomes an executable, is there any extra process I would have to use in order to call Pygame functions when embedding into C++, as opposed to just writing embedding code in C++ for normal Python code? To make the question clear-cut: I want to know if it's the same process, without having to call different functions. EDIT: My question is whether I have to call different functions in C++ when embedding Python code that uses Pygame modules. I am NOT using pygame2exe nor py2exe; I never even mentioned those. My question is solely about code embedding.
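
    For what it's worth, embedding CPython that imports Pygame uses exactly the same C API as embedding any other Python code; Pygame is just another module from the interpreter's point of view. A minimal sketch (it assumes Python and Pygame are installed on the machine and the program is linked against libpython; the inline script is only a placeholder):

    ```cpp
    #include <Python.h>  // the standard CPython embedding API

    int main() {
        Py_Initialize();  // start the interpreter inside this C++ process

        // No Pygame-specific embedding calls exist or are needed: the
        // script imports pygame exactly as it would under the python binary.
        const char* script =
            "import pygame\n"
            "pygame.init()\n"
            "print('pygame', pygame.version.ver, 'running embedded')\n"
            "pygame.quit()\n";
        int rc = PyRun_SimpleString(script);  // 0 on success, -1 on exception

        Py_Finalize();
        return rc == 0 ? 0 : 1;
    }
    ```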

    Read the article

  • MySQL with multiple threads and processes

    - by Abhan
    I'm developing a telecom messaging platform in C, and I'm going to need multiple processes working with a MySQL DB. How can I make two processes read/write to/from a MySQL DB and, if/when one of them goes down, get the other to seamlessly take over the work until the dead process comes back up? I was thinking about/googling some options and am stuck at the point of not knowing which one to choose. What I think so far is that a table lock is not the best option to go for, as it will stall the other process until the table is unlocked. The other option is to use row-level locks or manual locks, but I can't find the best way to do it.
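
    One common shape for the failover half of this is MySQL's named advisory locks: both processes run, but only the one holding the lock does the work, and the server releases the lock automatically when the holder's connection dies. A hedged sketch against the MySQL C API (the connection parameters are placeholders and do_work() is a hypothetical stand-in for the real message loop):

    ```cpp
    #include <mysql/mysql.h>
    #include <unistd.h>
    #include <iostream>

    // Hypothetical stand-in for the real message-processing loop.
    void do_work(MYSQL* conn) { /* read/write application tables here */ }

    int main() {
        MYSQL* conn = mysql_init(nullptr);
        if (!mysql_real_connect(conn, "localhost", "user", "pass", "telecom",
                                0, nullptr, 0)) {
            std::cerr << mysql_error(conn) << '\n';
            return 1;
        }
        for (;;) {
            // GET_LOCK returns 1 on success, 0 on timeout. The lock is tied
            // to this connection: if the active process crashes, MySQL frees
            // the lock and the standby's next attempt succeeds -- that is
            // the seamless takeover.
            if (mysql_query(conn, "SELECT GET_LOCK('platform_active', 5)"))
                break;
            MYSQL_RES* res = mysql_store_result(conn);
            MYSQL_ROW row = mysql_fetch_row(res);
            bool active = row && row[0] && row[0][0] == '1';
            mysql_free_result(res);
            if (active)
                do_work(conn);  // we are primary until we exit or release
            else
                sleep(1);       // standby: retry shortly
        }
        mysql_close(conn);
        return 0;
    }
    ```

    For sharing the actual reads/writes between two live processes, row-level locking via InnoDB transactions (SELECT ... FOR UPDATE) is the usual alternative to table locks, which, as noted, stall the other process.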

    Read the article

  • Slow HDD write speed, Ubuntu 12.04 64-bit, ThinkPad T520i

    - by pyc
    It seems (but I'm not completely sure) that when I'm copying files from the gigabit network to the HDD, I can't use the full potential of the network, which in my case is about 60 MB/s, because HDD writing is slow, lower than 10 MB/s, and it also slows down the whole system, which becomes pretty much unresponsive, almost impossible to work with. I'm copying files to a Samba share residing on the Ubuntu machine, connected to the share from Windows 7. I'm completely sure my network equipment is OK, and there's no CPU-intensive process on Ubuntu except smbd taking about 10-20% from time to time, which I think is OK. Something here is buried deep, I think, maybe even in the kernel. I've already tried switching from AHCI to compatibility mode and turning ACPI on and off; nothing helped. It's like the HDD buffer is full and emptying slowly while the machine is sluggish; load is about 3 to 4. Has anybody experienced similar problems? Some help with the troubleshooting process and identifying the cause would be helpful too :) Thanks!

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories Defense Software Repository System (DSRS) The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects... ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts... The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ... [14] DSRS - Defense Technology for Adaptable, Reliable Systems URL: http://ssed1.ims.disa.mil/srp/dsrspage.html [15] STARS - Software Technology for Adaptable, Reliable Systems URL: http://www.stars.ballston.paramax.com/index.html [16] D. E. Perry and S. S. Popovitch, “Inquire: Predicate-based use and reuse,'' in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993. ... Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • Distributing a very simple application

    - by vanna
    I have a very simple working console application written in C++, linked against a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually distributing the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google tests. To me, the distribution of this application goes like this: to developers, the src directory with the CMakeLists.txt file (multi-platform distribution) plus a README.txt and an INSTALL.txt; to users, the executable and a README.txt; on my git repo, everything mentioned above plus the sources for testing and the gtest external lib. At this point, considering the complexity of my application, am I doing it right? Is there any reference that would formalize this distribution process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, or external libraries like Boost: how should I package this to distribute it in a professional way?

    Read the article

  • VirtualBox: Start Firefox in Ubuntu via a Windows script?

    - by SpaceRook
    I am using VirtualBox to run Ubuntu 12.04 as a guest in a Windows 7 host. I would like to execute a command in Windows that will launch Ubuntu's Firefox. I tried VirtualBox's VBoxManage guestcontrol function. The command seems to do something, but nothing appears to happen in Ubuntu:

    C:\VirtualBox>VBoxManage.exe guestcontrol MyVirtualMachineUbuntu exec --image "/usr/bin/firefox" --username bob --password password --wait-stdout --verbose
    Waiting for guest to start process ...
    Waiting for process to exit ...
    Exit code=1 (Status=500 [successfully terminated])

    The /usr/bin/firefox command works when I run it in Ubuntu. Also, with guestcontrol, I can successfully call /bin/ls. But I can't actually get a major program like Firefox to run. Any ideas? Thanks.

    Read the article

  • Google Chrome won't start after changing hostname

    - by user254473
    I tried to start Google Chrome in a terminal several times, and I keep receiving the following messages:

    ... :ERROR:process_singleton_linux.cc(309)] The profile appears to be in use by another Google Chrome process (8629) on another computer ("previous name of the computer"). Chrome has locked the profile so that it doesn't get corrupted. If you are sure no other processes are using this profile, you can unlock the profile and relaunch Chrome.

    ... :ERROR:simple_message_box_views.cc(208)] Unable to show a dialog outside the UI thread message loop: Google Chrome - The profile appears to be in use by another Google Chrome process (8629) on another computer ("previous name of the computer"). Chrome has locked the profile so that it doesn't get corrupted. If you are sure no other processes are using this profile, you can unlock the profile and relaunch Chrome.

    Any suggestions? Thanks in advance.

    Read the article

  • XNA VertexBuffer.SetData performance suggestions

    - by CodeSpeaker
    I have a 3D world in a grid layout where each grid cell contains its own separate vertex and index buffer for the mesh/terrain of that cell. When the player moves outside the boundaries of his cell, I dynamically load more cells in his walking direction, based on his viewing distance. This triggers a number of vertex- and index-buffer initializations, depending on how many cells need to be generated, and causes the framerate to drop annoyingly during this time. The generation of terrain data is handled in a separate thread and runs smoothly; the vertex and index buffers are added during the update cycle of the game loop. I've tried batching the number of cells to be processed, to avoid sending too much data at once into the buffers, which worked OK at a shorter viewing distance (about 9 cells to process) but not as well at greater distances with around 30 cells to process. Any idea how I can optimize this?
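
    XNA itself is C#, but the usual mitigation is language-independent: treat buffer creation as a queue of jobs and spend at most a fixed time budget on it each frame, so a burst of 30 cells amortizes over several frames instead of stalling one. A sketch of that budgeting loop in C++ (createBuffersForCell() is a hypothetical stand-in for the VertexBuffer/IndexBuffer setup):

    ```cpp
    #include <chrono>
    #include <deque>
    #include <iostream>

    // Hypothetical stand-in for building one cell's vertex/index buffers.
    void createBuffersForCell(int cell) { /* SetData calls go here */ }

    std::deque<int> pendingCells;  // filled by the terrain-generation thread

    // Called once per frame from the update loop.
    void processPendingCells() {
        using clock = std::chrono::steady_clock;
        const auto budget = std::chrono::milliseconds(2);  // tune to target FPS
        const auto start = clock::now();
        while (!pendingCells.empty() && clock::now() - start < budget) {
            createBuffersForCell(pendingCells.front());
            pendingCells.pop_front();
        }
        // Whatever is left stays queued for the next frame, so a 30-cell
        // burst becomes a few frames of background work, not one long stall.
    }

    int main() {
        for (int c = 0; c < 30; ++c) pendingCells.push_back(c);
        int frames = 0;
        while (!pendingCells.empty()) { processPendingCells(); ++frames; }
        std::cout << "drained in " << frames << " frame(s)\n";
    }
    ```

    In XNA specifically, DynamicVertexBuffer with SetDataOptions.NoOverwrite is often recommended alongside this, but the time-boxing above is what removes the visible frame drop.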

    Read the article

  • Processing a list of atomic operations, allowing for interruptions

    - by JDB
    I'm looking for a design pattern that addresses the following situation: There exists a list of tasks that must be processed. Tasks may be added at any time. Each task is wholly independent from all other tasks. The order in which tasks are processed has no effect on the overall system or on the tasks themselves. Every task must be processed once and only once. The "main" process which launches the task processors may start and stop without warning. When stopped, the "main" process loses all in-memory data. Obviously this is going to involve some state, but are there any design patterns which discuss where and how to maintain that state? Are there any relevant anti-patterns? Named patterns are especially helpful so that we can discuss this topic with other organizations without having to describe the entire problem domain.
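
    The shape described here is usually discussed under names like persistent work queue or transactional job queue: each task is claimed and acknowledged inside a transaction in durable storage, since the in-memory process can vanish at any moment. A hedged sketch using SQLite as the durable store; the schema and the claim/ack statements are one reasonable layout, not a canonical one.

    ```cpp
    #include <sqlite3.h>
    #include <iostream>
    #include <string>

    // Run one SQL statement, aborting loudly on error.
    static void exec(sqlite3* db, const std::string& sql) {
        char* err = nullptr;
        if (sqlite3_exec(db, sql.c_str(), nullptr, nullptr, &err) != SQLITE_OK) {
            std::cerr << "sqlite error: " << err << '\n';
            sqlite3_free(err);
            std::exit(1);
        }
    }

    int main() {
        sqlite3* db = nullptr;
        sqlite3_open("tasks.db", &db);  // the state survives process restarts

        exec(db, "CREATE TABLE IF NOT EXISTS tasks("
                 "  id INTEGER PRIMARY KEY,"
                 "  payload TEXT,"
                 "  state TEXT NOT NULL DEFAULT 'pending')");

        // Tasks may be added at any time, by any process:
        exec(db, "INSERT INTO tasks(payload) VALUES ('example')");

        // Worker loop: take any pending task (order is irrelevant), process
        // it, then mark it done. If the process dies mid-task, the row is
        // still 'pending' after restart and the task runs again -- so either
        // make processing idempotent, or add a 'claimed' state with a
        // timestamp so stale claims can be detected and retried.
        sqlite3_stmt* stmt = nullptr;
        sqlite3_prepare_v2(db,
            "SELECT id FROM tasks WHERE state='pending' LIMIT 1",
            -1, &stmt, nullptr);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            long long id = sqlite3_column_int64(stmt, 0);
            std::cout << "processing task " << id << '\n';  // ... real work ...
            exec(db, "UPDATE tasks SET state='done' WHERE id=" + std::to_string(id));
            sqlite3_reset(stmt);  // re-run the query for the next pending task
        }
        sqlite3_finalize(stmt);
        sqlite3_close(db);
    }
    ```

    Useful search vocabulary around this: message queue with acknowledgements, at-least-once delivery with idempotent consumers, and job/work queue. The matching anti-pattern is keeping the pending list only in the main process's memory, which is exactly what the stated constraints rule out.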

    Read the article

  • Cloud Computing Business Benefits

    - by workflowman
    If you have been living under a rock for the past year, you won't have heard about cloud computing. Cloud computing is a loose term that describes anything hosted in data centers and accessed via the internet. It is normally associated with developers who draw clouds in diagrams indicating where services live or how systems communicate with each other. Cloud computing also incorporates such well-known trends as Web 2.0 and Software as a Service (SaaS) and, more recently, Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). Its aim is to change the way we compute, moving from traditional desktop and on-premises servers to services and resources that are hosted in the cloud.

    Benefits of Cloud Computing. There are clear benefits in building applications using cloud computing, some of which are listed here:

    Zero up-front investment: Delivering a large-scale system costs a fortune in both time and money. Often IT departments are split into hardware/network and software services. The hardware team provisions servers and so forth according to the requirements of the software team; often the hardware team has a separate budget that requires approval. Although hardware and software management are two separate disciplines, sometimes developers are given the task of estimating CPU cycles, disk space, and so forth, which ends up producing underutilized servers.

    Usage-based costing: You pay for what you use, no more, no less, because you never actually own the server. This is similar to car leasing, where in the long run you get a new car every three years and maintenance is never a worry.

    Potential for shrinking the processing time: If work is split over multiple machines, parallel processing is performed, which decreases processing time.

    More office space: Walk into most offices, and guaranteed you will find a medium-sized room dedicated to servers.

    Efficient resource utilization: Resource utilization is handled by a centralized cloud administrator who is in charge of deciding exactly the right amount of resources for a system. This takes the task away from local administrators, who would otherwise have to monitor these servers regularly.

    Just-in-time infrastructure: If your system is a success and needs to scale to meet demand, provisioning delays can mean a slow-performing service. Cloud computing solves this because you can add more resources at any time.

    Lower environmental impact: If servers are centralized, an environmental initiative is more likely to succeed. As an example, if servers are placed in sunny or windy parts of the world, why not use those resources to power them?

    Lower costs: Unfortunately, this is one point that administrators will not like. If you have people administrating your e-mail server and network, along with support staff doing other tasks that move to the cloud, this workforce can be reduced. This saves costs, though it also reduces jobs.

    Read the article
