Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • How Do I Log Into CertView?

    - by Brandye Barrington
    On November 15, 2012, Oracle Certification exam results became available directly from Oracle's certification portal, CertView. This change requires candidates to authenticate their CertView accounts before being able to access their exam results. The Oracle Certification team has developed a series of videos to help candidates through this new process. Check back often as we will be highlighting these videos over the coming weeks. You can find all the information you will need on this new process along with relevant questions and answers on our website. As always, you can contact Oracle Certification Support if you need assistance.

    YOUR QUESTIONS ANSWERED

    More Information:
    - FAQ: Receiving Exam Scores
    - FAQ: How To Get Exam Results
    - FAQ: Accessing Exam Results in CertView
    - FAQ: How Will I Know When My Exam Results Are Available?
    - FAQ: What If I Don't Get An Exam Results Email Alert?
    - FAQ: How To Download and Print Exam Score Reports
    - FAQ: What If I Think My Exam Results Are Wrong In CertView?
    - FAQ: Is Oracle Changing The Way That Exams Are Scored?

    Read the article

  • Adding Thunderbird-stable repository gives "can't find signing_key_fingerprint" error

    - by EBV2010
    I'm trying to install Thunderbird 11 on Kubuntu 10.04. I was able to do it on the machine I'm working on. To get a clean process that I can roll out to other clients, I re-installed the machine and repeated the process. This is what I did (I've left out the sudo for clarity):

        add-apt-repository ppa:ubuntu-mozilla-security/ppa
        apt-get update
        add-apt-repository ppa:mozilla-team/thunderbird-stable

    The last one resulted in this error:

        Error: can't find signing_key_fingerprint at https://launchpad.net/api/1.0/~mozilla-team/+archive/thunderbird-stable

    The machine as it was before re-installation gave no such message. It was built from the same sources. Bottom line: I got Thunderbird 11.0 to run on Kubuntu 10.04, but after re-installation, adding the repository gives an error and the PPA won't be added. Is there a way to solve the signing_key_fingerprint error?

    Read the article

  • How to remove the Ubuntu Gnome desktop after making the switch to KDE?

    - by codeLes
    This is the opposite of this question. Basically I've been using Ubuntu for a while but decided to give KDE a shot, so I went through the process of getting the latest KDE installed. I'm very impressed with KDE, and the Kwin window manager seems like a better WM than Compiz, which is what I was using for Gnome (sure, that's an opinion). This was an Ubuntu Jaunty install. So how do I go about removing the Gnome desktop? Is there an automated way similar to what my previous question covered?

    UPDATE: Are there any packages I should NOT remove in the process?

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining an approach that may best fit our needs for a big web application development. For now, I'm thinking of going with an ASP.NET architecture with a DAL using Entity Framework, a Repository layer so that the BLL does not access the DAL directly, and a BLL that calls the repository and performs every manipulation necessary to prepare data to push into a presentation layer (.aspx files). I don't plan to use ASP.NET controls and prefer to keep things simple and lightweight using plain HTML, jQuery UI controls, and doing most of the server calls with jQuery Ajax. Sometimes, when needed, I plan to use handlers (.ashx) to call BLL methods that will return JSON or HTML to the client for dynamic stuff. My solution also has a test project that mocks the Repository with in-memory data so that testing BLL methods does not depend on the database. It may be useful to add that we will build a big application over this architecture, with hundreds of tables and stored procedures and a lot of reading and writing to the database. My question is, having this architecture in mind: are there any evident advantages that I can obtain by using an MVC3 project instead of the described architecture based on WebForms? Do you see any problem in this architecture that may cause us problems during the next steps of development? I know the MVC pattern from using it in other projects with Django... but the Microsoft MVC implementation looks so much more complex and verbose than Django's MVC, and that's why I'm hesitating (or waiting for a little push?) right now before jumping into it... We are in a real project with deadlines and don't want to slow the development process without any real benefits.
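    For illustration, the layering being described collapses to something like the sketch below (Python for brevity, with invented names; the point is only that the BLL sees an abstract repository, so the test project can substitute an in-memory one for the EF-backed DAL):

        # Illustrative sketch of the DAL/Repository/BLL layering (made-up names).
        class ProductRepository:                      # abstraction over the DAL
            def get_by_name(self, name):
                raise NotImplementedError

        class InMemoryProductRepository(ProductRepository):
            """Test double: serves canned rows instead of hitting the database."""
            def __init__(self, products):
                self._products = products
            def get_by_name(self, name):
                return [p for p in self._products if p["name"] == name]

        class ProductService:                         # the BLL
            def __init__(self, repository):
                self._repository = repository
            def presentable_products(self, name):
                # shape the data for the presentation layer / JSON handlers
                return [{"id": p["id"], "label": p["name"].title()}
                        for p in self._repository.get_by_name(name)]

        service = ProductService(InMemoryProductRepository([{"id": 1, "name": "widget"}]))
        print(service.presentable_products("widget"))   # [{'id': 1, 'label': 'Widget'}]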

    Read the article

  • How do I convince my boss that it's OK to use an application to access an outside website?

    - by Cyberherbalist
    That is, if you agree that it's OK. We have a need to maintain an accurate internal record of bank routing numbers, and my boss wants me to set up a process where once a week someone goes to the Federal Reserve's website, clicks on the link to get the list of routing numbers (or the link giving the updates since a particular date), and then manually uploads the resultant text file to an application that will make the update to our data. I told him that a manual process was not at all necessary, and that I could write a routine in the application that keeps our data updated to fetch the Fed's routing numbers, and put it on whatever schedule was appropriate. But he is greatly opposed to doing this, and calls it "hacking the Federal Reserve website." I think he's afraid that the FED is going to get after us. I showed him the FED's robots.txt file, and the only thing it forbids is automated indexing of pages with extension .cf*:

        User-agent: *    # applies to all robots
        Disallow: CF     # disallow indexing of all CF* directories and pages

    This says nothing about accessing the same data automatically that you could access manually. Anyone have a good counterargument to the idea that we'd be "hacking" the FED?
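    For scale, the "automation" in question is a few lines. A sketch (the URL is a placeholder, not the Fed's actual endpoint, and apply_update stands in for the existing ingestion routine):

        # Fetch the routing-number text file and hand it to the existing updater.
        # FED_DIRECTORY_URL is a placeholder; substitute the real directory link.
        import urllib.request

        FED_DIRECTORY_URL = "https://example.frb.org/fedachdir.txt"  # placeholder

        def fetch_routing_directory(url=FED_DIRECTORY_URL):
            with urllib.request.urlopen(url, timeout=30) as response:
                return response.read().decode("ascii", errors="replace")

        def weekly_update(apply_update):
            """apply_update is the routine that already ingests the manual upload."""
            apply_update(fetch_routing_directory())

    Scheduled weekly (cron, Windows Task Scheduler), it does exactly what the manual process does, minus the human.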

    Read the article

  • Web based interface for open SSL client certificates

    - by Felix
    Hi there! We are currently developing an apache2-based web application and want to invite some beta testers to give it a try. To be on the safe side, access should be granted through individual browser certificates (.p12) which are issued using a (fake) CA. Our users should pass a complete register/login process, and some of them will be granted administrative privileges within the application. That's why a preceding simple web-based authentication won't be sufficient. At the moment, I am using a server-side shell script to generate the certificates each time. Do you know of a small, web-based tool to simplify the process of generating/revoking those certificates? Maybe an overview of the CA's index.txt, plus the option to revoke a cert and a link to download them directly?
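    Lacking a ready-made tool, a thin web wrapper around the CA directory is not much code. A minimal sketch (Python/Flask, assuming the standard openssl ca layout where index.txt is tab-separated and issued certs land in newcerts/<serial>.pem; paths and port are placeholders, and it must sit behind authentication):

        # Tiny CA browser: list index.txt entries, revoke by serial via openssl.
        import subprocess
        from flask import Flask, jsonify

        app = Flask(__name__)
        INDEX = "demoCA/index.txt"       # the CA database (placeholder path)
        CONFIG = "openssl.cnf"

        @app.route("/certs")
        def list_certs():
            rows = []
            with open(INDEX) as f:
                for line in f:
                    status, expires, revoked, serial, _, subject = \
                        line.rstrip("\n").split("\t")
                    rows.append({"serial": serial, "status": status,
                                 "expires": expires, "subject": subject})
            return jsonify(certs=rows)

        @app.route("/revoke/<serial>", methods=["POST"])
        def revoke(serial):
            pem = "demoCA/newcerts/%s.pem" % serial   # where openssl ca stores certs
            subprocess.run(["openssl", "ca", "-config", CONFIG, "-revoke", pem],
                           check=True)
            return "revoked %s\n" % serial

        if __name__ == "__main__":
            app.run(port=5000)   # keep this local / behind auth; it is a sketch

    Downloading the .p12 bundles could be added the same way with Flask's send_file.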

    Read the article

  • Financial institutions build predictive models using Oracle R Enterprise to speed model deployment

    - by Mark Hornick
    See the Oracle press release, Financial Institutions Leverage Metadata Driven Modeling Capability Built on the Oracle R Enterprise Platform to Accelerate Model Deployment and Streamline Governance, for a description where a "unified environment for analytics data management and model lifecycle management brings the power and flexibility of the open source R statistical platform, delivered via the in-database Oracle R Enterprise engine to support open standards compliance."

    Through its integration with Oracle R Enterprise, Oracle Financial Services Analytical Applications provides "productivity, management, and governance benefits to financial institutions, including the ability to:

    - Centrally manage and control models in a single, enterprise model repository, allowing for consistent management and application of security and IT governance policies across enterprise assets
    - Reuse models and rapidly integrate with applications by exposing models as services
    - Accelerate development with seeded models and common modeling and statistical techniques available out-of-the-box
    - Cut risk and speed model deployment by testing and tuning models with production data while working within a safe sandbox
    - Support compliance with regulatory requirements by carrying out comprehensive stress testing, which captures the effects of adverse risk events that are not estimated by standard statistical and business models. This approach supplements the modeling process and supports compliance with the Pillar I and the Internal Capital Adequacy Assessment Process stress testing requirements of the Basel II Accord
    - Improve performance by deploying and running models co-resident with data. Oracle R Enterprise engines run in database, virtually eliminating the need to move data to and from client machines, thereby reducing latency and improving security"

    Read the article

  • Why is System listening on port 8000?

    - by poke
    I noticed by accident today that I have some unknown webserver listening on port 8000. Opening http://localhost:8000 just returns 404, so I don’t get any hint what exactly is listening there. I’ve used netstat -ano to find out that the process with PID 4 is listening on that port. PID 4 is the System process. Why is my system listening on that port, without me actually starting a server? Or how can I find out what exactly is listening there? I’ve read the related questions about port 80 and port 443, but none of the services mentioned there were running on my system. And the other suggestions there didn’t work either.

    Edit: The HTTP response of the server lists Microsoft-HTTPAPI/2.0 as the server.

    Edit 2: As requested by Shadok, here are the entries of TCPView with 8000 as the port. But I doubt it’s useful at all…
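    A Microsoft-HTTPAPI/2.0 response means the kernel-mode http.sys driver owns the socket (which is why the listener shows up under PID 4), with some application registered for a URL on it. The registrations can be listed with netsh; a small sketch that filters the output for port 8000 (Windows, elevated prompt):

        # List http.sys URL registrations mentioning port 8000.
        # `netsh http show servicestate` shows registered URLs with owning PIDs.
        import subprocess

        state = subprocess.run(
            ["netsh", "http", "show", "servicestate"],
            capture_output=True, text=True, check=True).stdout

        for block in state.split("\n\n"):
            if ":8000" in block:
                print(block)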

    Read the article

  • StreamInsight 2.1, meet LINQ

    - by Roman Schindlauer
    Someone recently called LINQ “magic” in my hearing. I leapt to LINQ’s defense immediately. Turns out some people don’t realize “magic” can be a pejorative term. I thought LINQ needed demystification. Here’s your best demystification resource: http://blogs.msdn.com/b/mattwar/archive/2008/11/18/linq-links.aspx. I won’t repeat much of what Matt Warren says in his excellent series, but will talk about some core ideas and how they affect the 2.1 release of StreamInsight. Let’s tell the story of a LINQ query.

    Compile time

    It begins with some code:

        IQueryable<Product> products = ...;
        var query = from p in products
                    where p.Name == "Widget"
                    select p.ProductID;
        foreach (int id in query)
        {
            ...

    When the code is compiled, the C# compiler (among other things) de-sugars the query expression (see C# spec section 7.16):

        var query = products.Where(p => p.Name == "Widget").Select(p => p.ProductID);

    Overload resolution subsequently binds the Queryable.Where<Product> and Queryable.Select<Product, int> extension methods (see C# spec sections 7.5 and 7.6.5). After overload resolution, the compiler knows something interesting about the anonymous functions (lambda syntax) in the de-sugared code: they must be converted to expression trees, i.e., “an object structure that represents the structure of the anonymous function itself” (see C# spec section 6.5). The conversion is equivalent to the following rewrite:

        var prm1 = Expression.Parameter(typeof(Product), "p");
        var prm2 = Expression.Parameter(typeof(Product), "p");
        var query = Queryable.Select<Product, int>(
            Queryable.Where<Product>(
                products,
                Expression.Lambda<Func<Product, bool>>(
                    Expression.Equal(
                        Expression.Property(prm1, "Name"),
                        Expression.Constant("Widget")),
                    prm1)),
            Expression.Lambda<Func<Product, int>>(
                Expression.Property(prm2, "ProductID"),
                prm2));

    If the “products” expression had type IEnumerable<Product>, the compiler would have chosen the Enumerable.Where and Enumerable.Select extension methods instead, in which case the anonymous functions would have been converted to delegates. At this point, we’ve reduced the LINQ query to familiar code that will compile in C# 2.0. (Note that I’m using C# snippets to illustrate transformations that occur in the compiler, not to suggest a viable compiler design!)

    Runtime

    When the above program is executed, the Queryable.Where method is invoked. It takes two arguments. The first is an IQueryable<> instance that exposes an Expression property and a Provider property. The second is an expression tree. The Queryable.Where method implementation looks something like this:

        public static IQueryable<T> Where<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
        {
            return source.Provider.CreateQuery<T>(
                Expression.Call(/* this method */, source.Expression, Expression.Quote(predicate)));
        }

    Notice that the method is really just composing a new expression tree that calls itself with arguments derived from the source and predicate arguments. Also notice that the query object returned from the method is associated with the same provider as the source query. By invoking operator methods, we’re constructing an expression tree that describes a query. Interestingly, the compiler and operator methods are colluding to construct a query expression tree.

    The important takeaway is that expression trees are built in one of two ways: (1) by the compiler when it sees an anonymous function that needs to be converted to an expression tree, and (2) by a query operator method that constructs a new queryable object with an expression tree rooted in a call to the operator method (self-referential).

    Next we hit the foreach block. At this point, the power of LINQ queries becomes apparent. The provider is able to determine how the query expression tree is evaluated! The code that began our story was intentionally vague about the definition of the “products” collection. Maybe it is a queryable in-memory collection of products:

        var products = new[]
            { new Product { Name = "Widget", ProductID = 1 } }.AsQueryable();

    The in-memory LINQ provider works by rewriting Queryable method calls to Enumerable method calls in the query expression tree. It then compiles the expression tree and evaluates it. It should be mentioned that the provider does not blindly rewrite all Queryable calls. It only rewrites a call when its arguments have been rewritten in a way that introduces a type mismatch, e.g. the first argument to Queryable.Where<Product> being rewritten as an expression of type IEnumerable<Product> from IQueryable<Product>. The type mismatch is triggered initially by a “leaf” expression like the one associated with the AsQueryable query: when the provider recognizes one of its own leaf expressions, it replaces the expression with the original IEnumerable<> constant expression. I like to think of this rewrite process as “type irritation” because the rewritten leaf expression is like a foreign body that triggers an immune response (further rewrites) in the tree. The technique ensures that only those portions of the expression tree constructed by a particular provider are rewritten by that provider: no type irritation, no rewrite.

    Let’s consider the behavior of an alternative LINQ provider. If “products” is a collection created by a LINQ to SQL provider:

        var products = new NorthwindDataContext().Products;

    the provider rewrites the expression tree as a SQL query that is then evaluated by your favorite RDBMS. The predicate may ultimately be evaluated using an index! In this example, the expression associated with the Products property is the “leaf” expression.

    StreamInsight 2.1

    For the in-memory LINQ to Objects provider, a leaf is an in-memory collection. For LINQ to SQL, a leaf is a table or view. When defining a “process” in StreamInsight 2.1, what is a leaf? To StreamInsight a leaf is logic: an adapter, a sequence, or even a query targeting an entirely different LINQ provider! How do we represent the logic? Remember that a standing query may outlive the client that provisioned it. A reference to a sequence object in the client application is therefore not terribly useful. But if we instead represent the code constructing the sequence as an expression, we can host the sequence in the server:

        using (var server = Server.Connect(...))
        {
            var app = server.Applications["my application"];
            var source = app.DefineObservable(() => Observable.Range(0, 10, Scheduler.NewThread));
            var query = from i in source where i % 2 == 0 select i;
        }

    Example 1: defining a source and composing a query

    Let’s look in more detail at what’s happening in example 1. We first connect to the remote server and retrieve an existing app. Next, we define a simple Reactive sequence using the Observable.Range method. Notice that the call to the Range method is in the body of an anonymous function. This is important because it means the source sequence definition is in the form of an expression, rather than simply an opaque reference to an IObservable<int> object. The variation in Example 2 fails. Although it looks similar, the sequence is now a reference to an in-memory observable collection:

        var local = Observable.Range(0, 10, Scheduler.NewThread);
        var source = app.DefineObservable(() => local); // can’t serialize ‘local’!

    Example 2: error referencing unserializable local object

    The Define* methods support definitions of operator tree leaves that target the StreamInsight server. These methods all have the same basic structure. The definition argument is a lambda expression taking between 0 and 16 arguments and returning a source or sink. The method returns a proxy for the source or sink that can then be used for the usual style of LINQ query composition. The “define” methods exploit the compile-time C# feature that converts anonymous functions into translatable expression trees! Query composition exploits the runtime pattern that allows expression trees to be constructed by operators taking queryable and expression (Expression<>) arguments. The practical upshot: once you’ve Defined a source, you can compose LINQ queries in the familiar way using query expressions and operator combinators. Notably, queries can be composed using pull sequences (LINQ to Objects IQueryable<> inputs), push sequences (Reactive IQbservable<> inputs), and temporal sequences (StreamInsight IQStreamable<> inputs). You can even construct processes that span these three domains using “bridge” method overloads (ToEnumerable, ToObservable and To*Streamable).

    Finally, the targeted rewrite via type irritation pattern is used to ensure that StreamInsight computations can leverage other LINQ providers as well. Consider the following example (this example depends on Interactive Extensions):

        var source = app.DefineEnumerable((int id) =>
            EnumerableEx.Using(
                () => new NorthwindDataContext(),
                context =>
                    from p in context.Products
                    where p.ProductID == id
                    select p.ProductName));

    Within the definition, StreamInsight has no reason to suspect that it ‘owns’ the Queryable.Where and Queryable.Select calls, and it can therefore defer to LINQ to SQL! Let’s use this source in the context of a StreamInsight process:

        var sink = app.DefineObserver(() => Observer.Create<string>(Console.WriteLine));

        var query = from name in source(1).ToObservable()
                    where name == "Widget"
                    select name;

        using (query.Bind(sink).Run("process"))
        {
            ...
        }

    When we run the binding, the source portion which filters on product ID and projects the product name is evaluated by SQL Server. Outside of the definition, responsibility for evaluation shifts to the StreamInsight server, where we create a bridge to the Reactive Framework (using ToObservable) and evaluate an additional predicate. It’s incredibly easy to define computations that span multiple domains using these new features in StreamInsight 2.1!

    Regards,
    The StreamInsight Team

    Read the article

  • How can rotating release managers improve a project's velocity and stability?

    - by Yannis Rizos
    The Wikipedia article on Parrot VM includes this unreferenced claim:

    "Core committers take turns producing releases in a revolving schedule, where no single committer is responsible for multiple releases in a row. This practice has improved the project's velocity and stability."

    Parrot's Release Manager role documentation doesn't offer any further insight into the process, and I couldn't find any reference for the claim. My first thoughts were that rotating release managers seems like a good idea, sharing the responsibility between as many people as possible, and having a certain degree of polyphony in releases. Is it, though? Rotating release managers has been proposed for Launchpad, and there were some interesting counterarguments:

    - Release management is something that requires a good understanding of all parts of the code and the authority to make calls under pressure if issues come up during the release itself
    - The less change we can have to the release process the better from an operational perspective
    - Don't really want an engineer to have to learn all this stuff on the job as well as have other things to take care of (regular development responsibilities)
    - Any change of timezones of the releases would need to be approved with the SAs

    and:

    "I think this would be a great idea (mainly because of my lust for power), but I also think that there should be some way of making sure that a release manager doesn't get overwhelmed if something disastrous happens during release week, maybe by having a deputy release manager at the same time (maybe just falling back to Francis or Kiko would be sufficient)."

    The practice doesn't appear to be very common, and the counterarguments seem reasonable and convincing. I'm quite confused about how it would improve a project's velocity and stability. Is there something I'm missing, or is this just a bad edit on the Wikipedia article? Worth noting that the top voted answer in the related "Is rotating the lead developer a good or bad idea?" question boldly notes: Don't rotate.

    Read the article

  • Automatic LaTex document generation from Excel spreadsheet

    - by Bowler
    I have some data in an Excel file from which I have to generate a report. I repeat this task fairly regularly and am looking to automate it. I have a LaTeX project into which I usually just copy data by hand, export the necessary worksheets as PDFs, add them to my LaTeX project, and compile with pdflatex. It has occurred to me that there must be a way to automate this process. Is there an efficient way to export the data from Excel into a LaTeX project? Possibly a VBA script in Excel could run the process? Also, it doesn't have to be LaTeX. I'm not all that experienced with MS Office's more advanced features; is there some way, akin to a mail merge, that I could achieve this with? In some ways this might be better in case I have to pass the work on to someone who doesn't know LaTeX. Thanks.
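    One route that skips VBA entirely: a small Python script that reads the workbook, writes a tabular file for the LaTeX project to \input{}, and runs pdflatex. A sketch (assumes the openpyxl package; file names are placeholders, and escaping of LaTeX specials like & and % is omitted):

        # Export the active worksheet to table.tex, then rebuild the report.
        import subprocess
        from openpyxl import load_workbook

        ws = load_workbook("data.xlsx", data_only=True).active

        rows = [" & ".join("" if cell is None else str(cell) for cell in row)
                for row in ws.iter_rows(values_only=True)]

        with open("table.tex", "w") as f:
            f.write("\\begin{tabular}{%s}\n" % ("l" * ws.max_column))
            f.write(" \\\\\n".join(rows) + " \\\\\n")
            f.write("\\end{tabular}\n")

        # report.tex contains \input{table.tex}
        subprocess.run(["pdflatex", "report.tex"], check=True)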

    Read the article

  • How to recover a MySQL InnoDB table?

    - by Kau-Boy
    When I try to launch the Plesk administration page of the server I get the following error:

        ERROR: PleskMainDBException
        MySQL query failed: MySQL server has gone away

    The MySQL server itself is working well, although it seems that the Plesk database is somehow corrupt: any action on this database results in a restart of the mysql process, so even queries to other databases on the same MySQL server will be lost. If I try to connect to the plesk database using phpMyAdmin, I can only see the number of tables the database had originally, but I am not able to open the tables listing. As soon as I try it, the mysql process crashes again. Connecting to the database using ssh works; I can even run a SELECT statement against any table and get a result. I don't know if it is a Plesk error or an error of the psa database or even the MySQL server. Can you give me any tips on how to recover the Plesk system? Should I try to repair the Plesk installation first? And if so, how can I do it, and will all my settings get lost doing it?

    EDIT: Trying to dump the database, I get the following error:

        mysqldump: Got error: 2013: Lost connection to MySQL server during query when using LOCK TABLES

    EDIT: I could find out that the table 'data_bases' is responsible for the crash of the MySQL server process. But trying to repair it using a REPAIR TABLE statement doesn't work.

    EDIT: I have now dropped the whole database and restored it from a dump. But when I try to recover the data_bases table I get the following error:

        ERROR 1005 (HY000) at line 24: Can't create table './psa/data_bases.frm' (errno: 121)

    I am not able to create the table again; somewhere in the MySQL system there is still some information about this table. I tested the same thing locally: if I just delete the table files and then try to create the table again, I get the same error. If I drop the table through MySQL, I can create it again afterwards. But on the server, trying to drop the table through MySQL crashes the whole MySQL system. Is there any way to solve this issue?

    Read the article

  • How to automatically resume php-fpm?

    - by alfish
    I am using nginx+php-fpm on Debian Squeeze for a busy server and have had great difficulty dealing with the maximum number of connections being reached. The problem is that PHP processes sometimes just die randomly under high load and leave the server with no PHP process, and then I need to manually restart the php5-fpm service to bring the server back to life. I am wondering how to avoid this happening, or at least how to treat the symptoms by restarting php5-fpm automatically whenever there is no PHP process left to listen to incoming requests. My relevant configs are:

        pm = dynamic
        pm.max_children = 1400
        pm.start_servers = 10
        pm.max_spare_servers = 20
        pm.process_idle_timeout = 1s;  #not sure it will be useful when pm=dynamic
        pm.max_requests = 100000
        request_terminate_timeout = 30

    I appreciate your suggestions for coping with this nasty problem.
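    Until the root cause of the dying workers is found, the symptom can be treated with a watchdog cron job. A crude sketch (assumes the Debian Squeeze init script name; monit or a similar supervisor is the sturdier answer):

        # Crude watchdog: restart php5-fpm if no worker process is alive.
        # Run from cron every minute, e.g.:  * * * * * /usr/local/sbin/fpm-watchdog.py
        import subprocess

        alive = subprocess.call(["pgrep", "-x", "php5-fpm"]) == 0
        if not alive:
            subprocess.call(["/etc/init.d/php5-fpm", "restart"])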

    Read the article

  • Implications of renaming sql server 2000 instance

    - by peg_leg
    I have a SQL Server 2000 instance whose data I want to replicate to a SQL Server 2008 instance. However, the outputs of select @@servername and select serverproperty('servername') on the 2000 server are different, and that prevents replication. There is a process to resolve this at http://support.microsoft.com/kb/818334. Has anyone done this? What implications are there of following this process? Any 'gotchas' that need to be guarded against? My 2000 server is a lone production server...very scary. Please advise.
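    For reference, the KB article's fix amounts to re-registering the local server name and restarting SQL Server. Sketched here through pyodbc (the two EXEC statements are the KB's; the connection string and names are placeholders, and on a lone production box this deserves a rehearsal on a restored backup first):

        # The KB 818334 repair: drop the stale name, re-add the real one as
        # 'local', then restart the SQL Server service so @@SERVERNAME updates.
        # Placeholder connection string and server names.
        import pyodbc

        cnxn = pyodbc.connect("DRIVER={SQL Server};SERVER=myserver;"
                              "DATABASE=master;Trusted_Connection=yes",
                              autocommit=True)
        cur = cnxn.cursor()
        cur.execute("EXEC sp_dropserver 'OLD_NAME'")
        cur.execute("EXEC sp_addserver 'REAL_NAME', 'local'")
        # Now restart the MSSQLSERVER service for the change to take effect.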

    Read the article

  • Good set of web hosting permissions?

    - by Jorge Israel Peña
    Hey guys, I just got a linode and I'm in the process of configuring it. It's running nginx with php-fpm and passenger. nginx was compiled and is running as user nginx. php-fpm (PHP with the FastCGI process manager) is running as www-data (in group www-data). My sites are currently in /var/www, so for example /var/www/test.com. I'm just wondering what the general 'flow' of things is. So for example, /var/www is owned by root; should I chown /var/www/test.com to nginx or www-data? Or should I put nginx in the www-data group? How should site uploading work? Do I just transfer files to the /var/www/test.com directory as root (sudo) and then chown -R www-data:www-data .? Thanks. I'm capable of figuring things out on my own, I'm just wondering what the typical/general way of handling users/groups/permissions/site-files is on Linux with a webserver.

    Read the article

  • SSAS Compare version 1.0 released

    - by Red Gate Software BI Tools Team
    We’re pleased to announce that SSAS Compare version 1.0 has been released as a free tool. Version 1.0 includes:

    - Comparisons of live databases and XMLA or Analysis Services Project files
    - MDX syntax diffs and highlighting
    - Server comparisons
    - Deployment wizard with summaries of scripted actions
    - Bug fixes and engine and UI refinements

    We’ve tested it on as many cube configurations as we could find (not just good old AdventureWorks!), but we can’t provide support for free tools — so if you’re reliant on SSAS Compare for your cube deployment, use it at your own risk. See the user license agreement in the installer for more details.

    SSAS Compare’s come a long way from its humble beginnings as an internal tool first developed for Red Gate’s own BI developers. Today’s SSAS Compare is now much more stable — not to mention much easier to use — and something the team is proud to have released with Red Gate’s name on.

    Next: Deployment Manager

    We’re working on integrating SSAS Compare cube deployment with our new Deployment Manager tool, so you’ll be able to create cube deployment scripts and automate the deployment process, too. We’re documenting the process in a white paper we’ll publish online in the next week.

    Thank you!

    Thanks to all the SSAS Compare users out there. Without your feedback, we could never have produced such a stable product so quickly. We hope you continue to find it useful. See you in Deployment Manager!

    Read the article

  • Command+tab, expose and many other functions stop working after screen sharing

    - by moshen
    When I'm away from my office mac I typically log in via screen sharing or VNC from home/another mac. Lately, after using screen sharing this way, my office mac will have several problems:

    - Command+tab and Command+` no longer work
    - Expose and other F-key functions no longer work
    - ctrl+space to launch Google QSB no longer works

    Things I have tried to remedy this:

    - Restarting the Finder process
    - Restarting the Dock process
    - Disabling the screensaver (unfortunately, the screensaver still runs... a connected issue?)
    - Deleting preference files for Dock/Finder/screensaver etc.

    The only thing that seems to work is a restart. I usually try and avoid that.

    System details: Macbook pro 13", OS v10.5.8

    Read the article

  • Media center consumes all available memory when attempting to play music off of a server

    - by RCIX
    I have Windows 7 Ultimate, and recently, when I try to play a song off of my Twonky Media Server/Windows Media Connect (based on an HP WHS with an Atom), it plays choppily. When I open Resource Monitor, it shows that after ordering the music to play, memory usage rapidly spikes to consume most, if not all, of the available memory on my system (excluding a couple hundred megabytes in standby). Why does it do this, and is there anything I can do to stop it?

    Edit: It happens when I attempt to browse the server's music, not just when I play music.

    Edit 2: The "ehshell" process is what consumes the memory, which appears to be something specific to Media Center. Moreover, the ehshell process doesn't die in this case.

    Edit 3: It only happens when browsing my Twonky library, and not my Windows Media Connect library.

    Read the article

  • Issue with Toshiba Satellite C50 A-1JM

    - by Nathan Hawkes
    I am having some odd problems with my Toshiba laptop. For the last two days, it has taken hours to load past the boot screen, if it loads at all. When I force the laptop off and switch it back on, it goes to the Preparing Automatic Repair process but will not complete the process and go to the desktop. However, this only happens when the battery is plugged in. When I remove the battery and run the laptop on mains power, it works without issue. Would this be a battery issue (given it works without the battery) or a hard drive issue (given it won't get past the boot screen)? If neither, what would you suggest as a solution?

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Kathryn Perry
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state:

    "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools."

    (Read more: http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html)

    Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP.

    The post goes on to quote a Forrester analyst stating the following:

    "There's room for any process-driven application to run more efficiently, especially if they're socially enabled," said Rob Koplowitz, VP and principal analyst at Forrester Research. "It takes the human part of the process not generally captured today to provide better access to content, information and collective actions."

    Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform. With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps.

    Continue reading the full article here.

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers.

    The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation, and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython.

    Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating for heterogeneous data without replicating and moving data. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • Modifying the initrd.img to run additional binaries in a PXE booted RHEL 6

    - by Charles Long
    I am trying to add additional automation to our existing RHEL 6 (or Oracle's implementation thereof) PXE install process by running a script in the %pre section of my kickstart config that calls hpacucli, HP's RAID device configuration binary. My approach has been to modify the PXE-served initrd.img. I've unpacked the initrd.img and copied in the required libraries, binaries, and scripts, but when the system boots using the modified initrd.img, the modified /lib (and /lib64) are moved aside to /lib_old and /lib is linked to /mnt/runtime/lib. How can I change this configuration so that /lib is not moved (unlikely), or so that the required libraries are available in the runtime /mnt/runtime/lib?

    To test and confirm this, I've been able to get the install process to move to the 6th virtual console, which allows me to see errors, and then open a shell (a useful debugging mechanism):

        %pre
        exec < /dev/tty6 > /dev/tty6 2> /dev/tty6
        chvt 6
        /bin/sh

    Read the article

  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are directly accessed, that is, there is no API. For example: Assume the following:

        typedef struct
        {
            int x;
            int y;
            struct
            {
                int a;
                int b;
            } z;
        } myStruct;

        myStruct s;

    Then a process might access this structure as:

        myStruct *s = mapGlobalMem();

    And use it as:

        int tmpX = s->x;

    The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access the database. The problem is, these references are sprinkled throughout the code. I need a way to parse the code and identify global structure references that will need to be refactored. I've looked into using ANTLR to create a parser that will identify references to a small set of structures and enter them into a custom symbol table. I could then use this symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? Of course, I'm looking for a programmatic approach. There are far too many source files to examine each one visually. This is all ordinary ANSI C. Nothing else.
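    As an alternative to writing a grammar from scratch, a compiler-grade front end can do the finding. A sketch with libclang's Python bindings ("myStruct" comes from the question; the file name and everything else are illustrative):

        # List every member access whose base expression has type myStruct
        # (or pointer to it), with file and line, using libclang.
        import clang.cindex
        from clang.cindex import CursorKind

        def struct_refs(path, type_name="myStruct"):
            tu = clang.cindex.Index.create().parse(path)
            for node in tu.cursor.walk_preorder():
                if node.kind == CursorKind.MEMBER_REF_EXPR:
                    base = next(node.get_children(), None)
                    if base is not None and type_name in base.type.spelling:
                        yield (node.location.file.name, node.location.line,
                               node.spelling)

        for filename, line, member in struct_refs("shared_mem_user.c"):
            print("%s:%d accesses member %r" % (filename, line, member))

    Run over every translation unit, the output is effectively the symbol table the ANTLR approach would have produced.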

    Read the article

  • failure daemon and changing pid number

    - by Alessandra Bilardi
    The proftpd, sshd and apache processes run from their /etc/init.d scripts on a Linux distro. I was monitoring ports 21, 22 and 80 with a farm monitoring service: every 5 minutes the service checks each port and notifies only on failure. There were 5-6 failures per 24h; it seemed that someone was kicking the switch sometimes. I added monit and collectd monitoring, so ports 21, 22 and 80 are now checked every 1 minute. I do not receive notifications from the farm monitoring service any more; I receive only monit notifications about failures and/or the succeeding/changing PID number of the proftpd, sshd or apache process. The failures are still 5-6 times per 24h. The collectd monitoring of CPU, load average and each process is regular, and there are no peaks. There is nothing kicking the switch, but there is something causing these monitoring failures. Is it simple interference, or is it indicative of some abnormality? What could cause these failures?

    Read the article

  • Windows 8 mail cause event 529 when connect to exchange

    - by holian
    I set up my company Exchange mailbox in Windows 8.1 Mail (outside the domain). Everything works fine, but after I start Windows 8.1 Mail I continuously get events with ID 529 in the server's security log:

        Reason: Unknown user name or bad password
        User Name: [email protected]
        Domain:
        Logon Type: 8
        Logon Process: Advapi
        Authentication Package: Negotiate
        Workstation Name: SERVERNAME
        Caller User Name: SERVERNAME$
        Caller Domain: BAR NUL
        Caller Logon ID: (0x0, 0x3E7)
        Caller Process ID: 4384
        Transited Services: -
        Source Network Address: 56.43.213.122
        Source Port: 55698

    If I close Windows Mail, the events stop flooding the security log on the server.

    Connection parameters in Windows 8:

        email: [email protected]
        password
        domain: company.local
        username: myemail
        server: mydomain.dyndns.org
        SSL: yes

    Any idea what the problem is? I can check my mail with the same settings on my Android phone without any problem. Thank you.

    Read the article
