Search Results

Search found 5503 results on 221 pages for 'workflow activity'.

  • Your favorite time-off between programming sessions.

    - by Harsha
    I am sure the coding pundits here each have one (or multiple) ways of spending some time off between hectic coding sessions, just to relax. I would love to hear from you all: I am a newbie and want to take short (non-physical) breaks from coding, doing things that actually help me focus again. Thanks.

    Read the article

  • How do I get the value of the item selected in ListView?

    - by user357032
    I thought I could use the position int, but when I click on an item in the list view, nothing happens. Please help!

        ListView d = (ListView) findViewById(R.id.apo);
        ArrayAdapter<CharSequence> adapt = ArrayAdapter.createFromResource(this,
                R.array.algebra, android.R.layout.simple_list_item_1);
        d.setAdapter(adapt);
        d.setOnItemClickListener(new OnItemClickListener() {
            public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                // Compare against the int 0, not the char literal '0' (which equals 48),
                // otherwise neither branch ever matches.
                if (position == 0) {
                    Intent intent = new Intent(Algebra.this, Alqv.class);
                    startActivity(intent);
                }
                if (position == 2) {
                    Intent intent1 = new Intent(Algebra.this, qfs.class);
                    startActivity(intent1);
                }
            }
        });

    Read the article

  • Creating a new naming context in OUD

    - by Sylvain Duloutre
    A naming context (also known as a directory suffix) is a DN that identifies the top entry in a locally held directory hierarchy. A new naming context can be created using ODSM, the OUD GUI admin console, as described in http://docs.oracle.com/cd/E29407_01/admin.111200/e22648/server_config.htm#CBDGCJGF. It can also be created using the dsconfig command line, as described below. Creating a new naming context consists of three steps. First, create a Local Backend Workflow element (myNewDb in this example), responsible for the naming context base DN, e.g. o=example:

        dsconfig create-workflow-element \
            --set base-dn:o=example \
            --set enabled:true \
            --type db-local-backend \
            --element-name myNewDb \
            --hostname <your host> \
            --port <admin port> \
            --bindDN cn=Directory\ Manager \
            --bindPasswordFile ****** \
            --no-prompt

    Second, create a Workflow element (workFlowForMyNewDb in this example) associated with the Local Backend Workflow element. Workflow elements are used to route LDAP requests to the appropriate database, based on the target base DN:

        dsconfig create-workflow \
            --set base-dn:o=example \
            --set enabled:true \
            --set workflow-element:myNewDb \
            --type generic \
            --workflow-name workFlowForMyNewDb \
            --hostname <your host name> \
            --port <admin port> \
            --bindDN cn=Directory\ Manager \
            --bindPasswordFile ****** \
            --no-prompt

    Then the Workflow element must be made visible outside of the directory, i.e. added to the internal "routing table". This is done by adding the Workflow to the appropriate Network Group; a Network Group is used to classify incoming client connections and route requests to workflows:

        dsconfig set-network-group-prop \
            --group-name network-group \
            --add workflow:workFlowForMyNewDb \
            --hostname <your hostname> \
            --port <admin port> \
            --bindDN cn=Directory\ Manager \
            --bindPasswordFile ****** \
            --no-prompt

    At that stage, it is possible to import entries into the new naming context o=example.
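
    The base entry itself can then be added with the ldapmodify tool shipped with OUD, for example (a sketch; host, port and password file are placeholders, as above):

        # base.ldif
        dn: o=example
        objectClass: top
        objectClass: organization
        o: example

        ldapmodify -a -h <your host> -p <ldap port> \
            -D "cn=Directory Manager" -j <password file> -f base.ldif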

    Read the article

  • How do you increase the number of processes in parallel with PowerShell 3?

    - by Mark Shay
    I am trying to run 20 processes in parallel. I changed the session configuration as below, but I'm having no luck: I get only up to 5 parallel processes per session.

        $wo = New-PSWorkflowExecutionOption -MaxSessionsPerWorkflow 50 -MaxDisconnectedSessions 200 -MaxSessionsPerRemoteNode 50 -MaxActivityProcesses 50
        Register-PSSessionConfiguration -Name ITWorkflows -SessionTypeOption $wo -Force
        Get-PSSessionConfiguration ITWorkflows | Format-List -Property *

    Is there a switch parameter to increase the number of processes? This is what I am running:

        Workflow MyWorkflow1 {
            Parallel {
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 2 and 2975416" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 2975417 and 5950831" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 5950832 and 8926246" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 8926247 and 11901661" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 11901662 and 14877076" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 14877077 and 17852491" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 17852492 and 20827906" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 20827907 and 23803321" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 23803322 and 26778736" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 26778737 and 29754151" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 29754152 and 32729566" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 32729567 and 35704981" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 35704982 and 38680396" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 38680397 and 432472144" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 432472145 and 435447559" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 435447560 and 438422974" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 864944289 and 867919703" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 867919704 and 870895118" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 870895119 and 1291465602" }
                InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 1291465603 and 1717986945" }
            }
        }
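
    As an aside, the twenty near-identical InlineScript blocks could be generated from a list of ranges with ForEach -Parallel. This is only a sketch (the range table is abbreviated, and note that ForEach -Parallel gained its -ThrottleLimit parameter only in PowerShell 4.0, so on PowerShell 3 it does not by itself lift the concurrency ceiling):

        Workflow MyWorkflow2 {
            # Hypothetical range table; fill in the real OrderId boundaries.
            $ranges = @(
                @{ Start = 2;       End = 2975416 },
                @{ Start = 2975417; End = 5950831 }
                # ...remaining ranges...
            )
            ForEach -Parallel ($r in $ranges) {
                InlineScript {
                    $range = $Using:r
                    Import-Module \\PS_Scripts\bulkins.ps1
                    BulkIns "where OrderId between $($range.Start) and $($range.End)"
                }
            }
        }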

    Read the article

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work:

    1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table.
    2) Once the entity is created, the user will mark it as the master.
    3) Another user can make changes to the master only by "checking out" the entity. Multiple users can check out the entity at the same time.
    4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status.
    5) After an authorized user reviews the entity, they can promote it to master, which will put the original record in a tombstoned status.

    The way they currently accomplish the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate, and give the copy a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master, at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity, and this was a very straightforward way of making that happen. But the fact that they represent the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they duplicate EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resources you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis

    Read the article

  • What characteristic of networking/TCP causes a linear relation between TCP activity and latency?

    - by DeLongey
    The core of this problem is that our application uses websockets for real-time interfaces. We are testing our app in a new environment, but strangely we're noticing an increasing delay in TCP websocket packets associated with an increase in websocket activity. For example, if one websocket event occurs without any other activity in a one-minute period, the response from the server is instantaneous. However, if we slowly increase client activity, the latency of the server's responses grows with a linear relationship (each packet takes more time to reach the client the more activity there is). For those wondering, this is NOT app-related, since our logs show that our server is running and responding to requests in under 100 ms, as desired. The delay starts once the server has processed the request, created the TCP packet and sent it to the client (and not the other way around).

    Architecture: This new environment runs with a virtual IP address and uses keepalived on a load balancer to balance the traffic between instances. Two boxes sit behind the balancer and all traffic runs through it. Our host provider manages the balancer and we do not have control over that part of the architecture.

    Theory: Could this somehow be related to something buffering the packets in the new environment? Thanks for your help.
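
    One buffering mechanism worth ruling out is Nagle's algorithm, which holds back small TCP segments until earlier data has been acknowledged and can therefore make small websocket frames queue up behind one another as traffic grows. A sketch of disabling it in Java (purely illustrative; whether and where you can set this depends on your websocket stack, and the managed load balancer may buffer on its own connections regardless):

        import java.net.Socket;
        import java.net.SocketException;

        final class SocketTuning {
            // Disable Nagle's algorithm on an accepted connection so small frames
            // are flushed immediately instead of being coalesced with later writes.
            static void disableNagle(Socket conn) throws SocketException {
                conn.setTcpNoDelay(true);
            }
        }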

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus, but I'm having trouble with the configuration of the distributors needed to make each step in the process scalable. Here's some background:

    - The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart.
    - Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor.
    - I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when a step is done, not Send() them.
    - A process may need to branch based on a condition, so a process must be able to publish more than one type of message.
    - A process may need to join forks. I imagine I should use Sagas for this.

    Hopefully these assumptions are good, otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages:

    - NodeA workers contain an IHandleMessages processor, and publish EventA
    - NodeB workers contain an IHandleMessages processor, and publish EventB
    - NodeC workers contain an IHandleMessages processor, and then the pipeline is complete.

    Here are the relevant parts of the config files, where # denotes the number of the worker (i.e. there are input queues NodeA.1 and NodeA.2):

    NodeA:

        <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control"
                          DistributorDataAddress="NodeA.Distrib.Data">
          <MessageEndpointMappings>
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeB:

        <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control"
                          DistributorDataAddress="NodeB.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    NodeC:

        <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error"
                             NumberOfWorkerThreads="1" MaxRetries="5" />
        <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control"
                          DistributorDataAddress="NodeC.Distrib.Data">
          <MessageEndpointMappings>
            <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" />
          </MessageEndpointMappings>
        </UnicastBusConfig>

    And here are the relevant parts of the distributor configs:

    Distributor A:

        <add key="DataInputQueue" value="NodeA.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeA.Distrib.Control"/>
        <add key="StorageQueue" value="NodeA.Distrib.Storage"/>

    Distributor B:

        <add key="DataInputQueue" value="NodeB.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeB.Distrib.Control"/>
        <add key="StorageQueue" value="NodeB.Distrib.Storage"/>

    Distributor C:

        <add key="DataInputQueue" value="NodeC.Distrib.Data"/>
        <add key="ControlInputQueue" value="NodeC.Distrib.Control"/>
        <add key="StorageQueue" value="NodeC.Distrib.Storage"/>

    I'm testing with two instances of each node, and the problem seems to come up in the middle, at Node B. There are basically two things that might happen:

    - Both instances of Node B report that they are subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great.
    - Both instances of Node B report that they are subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it.

    In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes an EventA, then the publish of EventB has no subscribers and the workflow dies. So, my questions:

    - Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup.
    - Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's "done" event is a reply, how do you keep the Publish that enables later extensibility on published events?

    Read the article

  • Why am I seeing excessive disk activity when installing applications?

    - by Kev
    I'm running Windows 7 Ultimate 64 bit on a Dell Vostro 1720 with 8GB of RAM, 7200RPM Disk, 2.53 GHz Core2Duo (Windows 7 64 bit is a supported option and the laptop came with the OS pre-installed). I'm noticing some fairly excessive disk activity when running installers. For example the Visual Studio 2010 RC installer constantly accessed the disk for ~10 minutes. It was so excessive that I was unable to use the machine until this ceased. Today I installed Trillian Astra 4.1 for Windows (latest build from the website). Again when I ran the installer I was pretty much locked out of the machine until the disk activity calmed down. In both cases when I eventually managed to launch task manager I could see that the CPU was sitting at around 5% to 7% utilisation whilst this was going on. All other disk related activity is fine, the machine is snappy and applications launch without delay. It's just when I run an installer I see this odd behaviour. Why would this be?

    Read the article

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is shutting down a memory-intensive application: it can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard-drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and sorting by CPU activity can usually show which process is causing the problem, but sometimes there is no activity showing. And yes, I do "show processes from all users". I have been wondering this since the days of Win2k :)

    Read the article

  • Does SQL Server Management Studio 2008 Activity Monitor work with SQL Server 2000?

    - by Andrew Janke
    I am trying to use SQL Server Management Studio 2008's Activity Monitor with an SQL Server 2000 instance to diagnose some query performance issues. I can connect SSMS 2008 to the DB fine, and use it to browse objects and run queries. But when I press the Activity Monitor button, it pops up an error message saying:

        Microsoft SQL Server Management Studio
        This operation does not support connections to Microsoft SQL Server Personal Edition version 8.00.818.

    This MSDN article implies that Activity Monitor works with SQL Server 2000. Is it the fact that it's Personal Edition that's preventing it from working? The error message isn't clear about whether the edition or the version is the problem.

    Read the article

  • Github Workflow: Pushing small fix branches to remote, or keep them local?

    - by Isaac Hodes
    In Scott Chacon's workflow (explained e.g. in this SO answer), with essentially two silos (development and master), if I have a small bug to fix (e.g. something fixable with a few characters), which is the optimal way of doing it:

    a) Branch off of development into a branch called e.g. fix_123. Push this branch to origin as I work on it. When it's done, code-reviewed, whatever, merge it into development and push development to origin.

    b) Same as above, but without pushing fix_123 to origin.
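
    For what it's worth, option a) in plain git commands looks something like this (a sketch; branch names as in the question, remote assumed to be origin):

        git checkout development
        git checkout -b fix_123            # branch off development
        # ...commit the fix...
        git push -u origin fix_123         # publish the branch for review
        # after the review:
        git checkout development
        git merge --no-ff fix_123          # keep the fix visible as a merge
        git push origin development
        git push origin --delete fix_123   # optional: clean up the remote branch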

    Read the article

  • What diagrams, other than the class diagram and the workflow diagram, are useful for explaining how an application works?

    - by Goran_Mandic
    I am working on a small Delphi project composed of two units. One unit is for the GUI; the other handles data management, file parsing, list iteration and so on. I've already made a class diagram, and my workflow diagram looks like hell: it's too complex for anyone to read. I've considered making a dataflow diagram, but it would be even more complex. A use case diagram wouldn't be of use either. Am I missing some diagram type which could represent the relationship between my two units?

    Read the article

  • How to improve workflow between developer and designer with Expression Blend?

    - by Amenti
    We use WPF and Expression Blend 4. I'm trying to improve our workflow by tutoring one of our designers to use it for styling and animation. Slowly but surely I'm getting the impression that Blend in itself is too technical for the designer in question. I myself use it only occasionally (it's great for Visual States, for instance) because a lot of things are easier to do in code, or not possible in Blend alone. It seems a developer with design experience is a lot more productive with it than a designer alone. Are there any good online resources, or advice you could give me, on how to improve this situation?

    Read the article

  • How to refresh an Android RelativeLayout when orientation changes without restarting the Activity?

    - by johnrock
    I have an Android Activity with a RelativeLayout, and I have implemented the following method to prevent the activity from being recreated on an orientation change:

        @Override
        public void onConfigurationChanged(Configuration newConfig) {
            super.onConfigurationChanged(newConfig);
        }

    I am obviously not doing anything in this method, but it worked perfectly when I was using a LinearLayout. Now, however, using RelativeLayout, my layout is all messed up when changing to landscape orientation. What is the most efficient way to have the screen redraw correctly without the activity being restarted with another call to onCreate?
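
    One commonly suggested workaround, rather than letting the activity restart, is to re-inflate the layout inside onConfigurationChanged. This is only a sketch (R.layout.main is a hypothetical layout name; every View reference obtained via findViewById must be re-acquired afterwards, and transient view state must be restored by hand):

        @Override
        public void onConfigurationChanged(Configuration newConfig) {
            super.onConfigurationChanged(newConfig);
            // Re-inflate so the RelativeLayout is measured for the new orientation.
            setContentView(R.layout.main);
        }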

    Read the article

  • How should I deal with persisting an Android WebView if my activity gets destroyed while showing another activity?

    - by Shaun Crampton
    Hi, I'm building an Android application that's based around an enhanced WebView (based on PhoneGap). I've enhanced the WebView so that from JavaScript running inside you can invoke the native contact picker to choose a phone number (which may be supplied by Facebook for example). The problem I have is that the native contact picker runs in an activity in another process and the Android docs say that while another activity is open my activity may get destroyed due to memory constraints. I haven't actually seen this happen in my application but if it did then I'm guessing my WebView's state would be destroyed and the code that was waiting for the picked contact would be terminated. It seems a bit crazy that the activity requesting a contact could be destroyed while the contact picker is open. Does anyone know if that does indeed happen? Is there a way to persist the state of the WebView if it does? Thanks, -Shaun
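
    One mitigation worth noting: WebView offers saveState()/restoreState(), which can be wired into the activity's instance-state callbacks. A sketch (field and resource names are hypothetical; this preserves the back/forward list but not running JavaScript, so the pending contact-pick callback would still have to be re-established after a restore):

        private WebView mWebView;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.main);
            mWebView = (WebView) findViewById(R.id.webview);
            if (savedInstanceState != null) {
                mWebView.restoreState(savedInstanceState); // rebuild what we can
            }
        }

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            mWebView.saveState(outState); // runs before the process may be killed
        }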

    Read the article

  • How to make a phone call in Android and come back to my activity when the call is done

    - by hap497
    Hi, I am launching an activity to make a phone call, but when I press the 'End call' button, it does not go back to my activity. Can you please tell me how I can launch a call activity that comes back to me when the 'End call' button is pressed? This is how I'm making the phone call:

        String url = "tel:3334444";
        Intent intent = new Intent(Intent.ACTION_CALL, Uri.parse(url));
        startActivity(intent); // requires android.permission.CALL_PHONE in the manifest

    Thank you.
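
    As far as I know there is no intent flag that brings the caller back automatically, but a common workaround is to watch the call state with a PhoneStateListener and return to the foreground when the call goes idle. A sketch (assumes the READ_PHONE_STATE permission; MyActivity is a hypothetical activity name; register this before firing the ACTION_CALL intent):

        TelephonyManager tm = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
        tm.listen(new PhoneStateListener() {
            private boolean callStarted = false;

            @Override
            public void onCallStateChanged(int state, String incomingNumber) {
                if (state == TelephonyManager.CALL_STATE_OFFHOOK) {
                    callStarted = true; // the outgoing call is now active
                } else if (callStarted && state == TelephonyManager.CALL_STATE_IDLE) {
                    // The call ended: bring our activity back to the foreground.
                    Intent back = new Intent(getApplicationContext(), MyActivity.class);
                    back.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
                    startActivity(back);
                }
            }
        }, PhoneStateListener.LISTEN_CALL_STATE);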

    Read the article

  • Can An Android Application Be Launched Without a Main Activity?

    - by Androider
    I have verified that an app does not need a main activity, and in fact does not need any activities at all. Thanks for the responses on this. But here is another question: is there any way to launch an application without a main activity declared? If the answer is no, then I have a follow-up: can the MAIN action be removed from the application at runtime, after launch, so that the app no longer has a MAIN activity? Or can the activity itself be entirely removed from the application at runtime if it is no longer needed? Thanks.
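
    For reference, a minimal sketch of a manifest with no activities at all, only a broadcast receiver (hypothetical package and class names; such an app has no launcher entry and only runs when the matched broadcast fires):

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
                  package="com.example.noactivity">
            <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
            <application android:label="NoActivity">
                <!-- No <activity> elements, hence no MAIN/LAUNCHER intent filter. -->
                <receiver android:name=".BootReceiver">
                    <intent-filter>
                        <action android:name="android.intent.action.BOOT_COMPLETED" />
                    </intent-filter>
                </receiver>
            </application>
        </manifest>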

    Read the article

  • Custom base class for a workflow activity: how to declare a dependency property?

    - by Ajax2020
    Generally we derive a custom activity from the Activity or CompositeActivity class. I want to have my own custom base class for all activities, like:

        public class BusinessActivity : Activity { }

    My custom activities will then derive from BusinessActivity. The question is: if I want ErrorMessageProperty and RetryCountProperty as dependency properties, how do I declare the third parameter of the Register method in BusinessActivity, like:

        public static DependencyProperty RetryCountProperty =
            DependencyProperty.Register("RetryCount", typeof(int), typeof(BusinessActivity));

    What will happen if I derive from the BusinessActivity class?
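
    For illustration, the base class might look like this with the WF 3.x System.Workflow.ComponentModel API (a sketch with standard CLR wrappers; registering with ownerType typeof(BusinessActivity) is what lets every derived activity inherit the properties unchanged):

        public class BusinessActivity : Activity
        {
            public static readonly DependencyProperty RetryCountProperty =
                DependencyProperty.Register("RetryCount", typeof(int), typeof(BusinessActivity));

            public static readonly DependencyProperty ErrorMessageProperty =
                DependencyProperty.Register("ErrorMessage", typeof(string), typeof(BusinessActivity));

            // CLR wrappers so the dependency properties read like normal properties.
            public int RetryCount
            {
                get { return (int)GetValue(RetryCountProperty); }
                set { SetValue(RetryCountProperty, value); }
            }

            public string ErrorMessage
            {
                get { return (string)GetValue(ErrorMessageProperty); }
                set { SetValue(ErrorMessageProperty, value); }
            }
        }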

    Read the article

  • Next Fusion CRM Webinar for Partners (Monday March 19th, 3pm GMT): Fusion CRM User Interface, Activity Streams and Opportunity management

    - by Richard Lefebvre
    The next session of our weekly Fusion CRM webinar for EMEA partners will take place on Monday, March 19th at 3pm GMT / 4pm CET and will address the Fusion CRM User Interface, Activity Streams and Opportunity management. To check the complete agenda and find the login details, please visit our dedicated microsite.

    How to join the dedicated microsite:

        1. Click on http://isdportal.oracle.com/isd_html/sf.htm
        2. Enter your email address in the corresponding field
        3. Enter fusion_crm in the "Access URL/Page Token" field

    Agenda: the list of sessions is published and regularly updated in the microsite.
    Duration: each session lasts up to 60 minutes.
    Webex: the webinar link and session ID are published in the microsite.
    Audio: the audio call details (telephone numbers by country, call number and password) are indicated in the microsite.
    Slides: for your convenience, a PDF copy of each presentation will be stored in the microsite's document section.

    We hope that this series of webcasts will be instrumental to your Fusion CRM business success! For further information please contact me at [email protected]

    Read the article

  • Android - Force Close - Null Pointer on Canvas?

    - by user22241
    Please bear with me; I have a very odd problem. My app so far has three activities (a main splash screen, an 'options/menu' screen and the main app). If I follow the very specific steps outlined below, I get a NullPointerException in the second activity and the app force closes. Here are the steps:

    1. Start the app (a game based on SurfaceView) and tap through to the third activity so the game is running.
    2. Hit the home key, so the game is paused and put into the background.
    3. End the activity/app through DDMS in the SDK, then restart it on the device (all OK so far).
    4. Now hit the back key on the device twice in quick succession.

    All other sequences of events are fine, even pressing the back key, waiting for the previous activity to show, then hitting back again. Only when the back key is pressed twice in quick succession following all the above steps does the problem occur. I'm assuming that the canvas isn't ready, as it's showing as null when this happens, but I'm not sure why: surely it happens while I'm trying to go back to activity 1, yet LogCat shows the error in activity 2. If I stop the activity running my doDraw method (which references the canvas), then all is OK, so I can safely assume it is the canvas causing the problem. Also, if I skip my first activity (a very basic full-screen button which just displays a splash screen and waits for the user to tap it) and make my second activity the launch activity, again, it is OK. This is the part of the code that I think is probably relevant:

        @Override
        public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) {
            vheight = this.getHeight();
            vwidth = this.getWidth();
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            vheight = this.getHeight();
            vwidth = this.getWidth();
            this.viewWidth = vwidth;
            this.viewHeight = vheight;
            if (runthread == false) {
                if (preThread.getState() == Thread.State.TERMINATED) {
                    preThread = new OptionsThread(thisholder, thiscontext, thishandler);
                }
                preThread.setRunning(true);
                preThread.start();
            }
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            preThread.setRunning(false); // stop the loop
            boolean retry = true;        // stop the thread
            while (retry) {
                try {
                    preThread.join();
                    retry = false;
                } catch (InterruptedException e) {
                }
            }
        }

    Thank you all for any help you can offer.
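
    Since lockCanvas() can legitimately return null while a surface is being torn down or recreated, the draw loop usually needs a defensive guard. A sketch using the names from the code above (doDraw and the surrounding loop body are assumptions about the rest of the thread):

        Canvas c = null;
        try {
            c = thisholder.lockCanvas();
            if (c != null) { // the surface may not be ready yet; skip this frame
                doDraw(c);
            }
        } finally {
            if (c != null) {
                thisholder.unlockCanvasAndPost(c);
            }
        }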

    Read the article

  • Optimized workflow through a new geo data warehouse: a success story for LINZ AG, by CISS TDI and PRIMEBIRD

    - by A&C Redaktion
    Satisfied customers are the best marketing strategy. That is why we offer specialized partners the opportunity to have professional case studies written about their own successful Oracle projects. Here on the blog we present, in a loose series, reference reports that partners are already using successfully in their marketing. Today: the Oracle partners CISS TDI and PRIMEBIRD and their joint large-scale project for LINZ AG.

    As a utility company, Austria's LINZ AG is responsible for, among other things, electricity, gas and district heating, the water and sewer system, and local public transport. For years it has been using Oracle solutions to manage its steadily growing stock of geodata. In 2012, the Oracle partners CISS TDI and PRIMEBIRD expanded the existing Oracle solution into a "geo data warehouse" and optimized the data model for the online site-plan enquiry service. The new data warehouse presents LINZ AG's geographic data in a uniform structure and thereby enables a significant workflow optimization. The benefits: administrative overhead was reduced, the data collection process was standardized, and the necessary data exports, for example to construction firms or the municipality, run smoothly through the new web application. Details on the exact course of the project, the specific requirements around geodata, and the cooperation between LINZ AG, CISS TDI and PRIMEBIRD can be found in the LINZ AG case study.

    Every specialized partner who has completed a representative Oracle project can use this opportunity to present themselves and their work effectively. Experienced trade journalists interview both the partner and the end customer and produce a detailed, attractively prepared report, which is then published through various marketing channels. Partners can of course also use the case studies for their own marketing purposes, e.g. at events.

    Interested? Then please contact Ms Marion Aschenbrenner. We need a few key details from you: customer name, contact person and the Oracle products used, a description of the project in 3-4 sentences, and your contact person in-house. And then: let your good work speak for itself!

    Read the article
