Search Results

Search found 34162 results on 1367 pages for 'oracle products distributed document capture'.


  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
    The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-a-Service (DaaS) platform to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step? Augmenting Social Data with other Public Data for More Advanced Analytics. When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPIs) to achieve and optimize business value, and in some cases, to predict future performance so you can make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze for this is very nuanced: it can vary across structured, semi-structured and unstructured data; it can span content, profile, and communities-of-profiles data; and it is increasingly public, curated and user generated. The key is not just getting the data, but making it value-added data and using it to help discover the insights that connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next-generation data and insights service, or DaaS. Some quick points on these requirements: Public Data, in this context, is about Common Business Entities such as Customers, Suppliers, Partners and Competitors (all organizations); Contacts, Consumers and Employees (all people); and Products and Brands. This data can be broadly categorized, incrementally, as Base Utility data (address, industry classification), Public Master Reference data (trade style, hierarchy), Social/Web data (news, feeds, graph), and Transactional Data generated by enterprise processes, workflows, etc. This data has traits of high volume, variety and velocity, and the technology needed to efficiently integrate it for your needs includes change management of Public Reference Data across all categories, Applied Big Data to extract statistics as well as real-time insights, and Knowledge Diagnostics and Data Mining. As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. 
In addition, they are requesting a service that is: Agile and Easy to Use: Applications integrated with the service can obtain data on-demand, quickly and simply Cost-effective: Pre-integrated into applications so customers don’t have to Has High Data Quality: Single point access to reference data for data quality and linkages to transactional, curated and social data Supports Data Governance: Becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place Data-as-a-Service (DaaS) Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT from their infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast the rate of commoditization of certain data types will occur. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location based social use cases (marketing, sales etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights. 
The reference data allows for easy sorting, export and integration into a set of CRM use cases for analytics, sales and marketing. In phase 2, you create more data relationships (e.g. competitors, contacts, other brands) to build broader and deeper references (social profiles, social meta-data) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions or perspectives. In the meantime, we will continue to share insights as we can. Photo: Erik Araujo, stock.xchng

    Read the article

  • Master-slave vs. peer-to-peer architecture: benefits and problems

    - by Ashok_Ora
    Almost two decades ago, I was a member of a database development team that introduced adaptive locking. Locking, the most popular concurrency control technique in database systems, is pessimistic. Locking ensures that two or more conflicting operations on the same data item don’t “trample” on each other’s toes, resulting in data corruption. In a nutshell, here’s the issue we were trying to address. In everyday life, traffic lights serve the same purpose. They ensure that traffic flows smoothly and when everyone follows the rules, there are no accidents at intersections. As I mentioned earlier, the problem with typical locking protocols is that they are pessimistic. Regardless of whether there is another conflicting operation in the system or not, you have to hold a lock! Acquiring and releasing locks can be quite expensive, depending on how many objects the transaction touches. Every transaction has to pay this penalty. To use the earlier traffic light analogy, if you have ever waited at a red light in the middle of nowhere with no one on the road, wondering why you need to wait when there’s clearly no danger of a collision, you know what I mean. The adaptive locking scheme that we invented was able to minimize the number of locks that a transaction held, by detecting whether there were one or more transactions that needed conflicting access; if there weren’t, you could get by without holding any lock at all. In many “well-behaved” workloads, there are few conflicts, so this optimization is a huge win. If, on the other hand, there are many concurrent, conflicting requests, the algorithm gracefully degrades to the “normal” behavior with minimal cost. We were able to reduce the number of lock requests per TPC-B transaction from 178 requests down to 2! Wow! This is a dramatic improvement in concurrency as well as transaction latency. The lesson from this exercise was that if you can identify the common scenario and optimize for that case so that only the uncommon scenarios are more expensive, you can make dramatic improvements in performance without sacrificing correctness. So how does this relate to the architecture and design of some of the modern NoSQL systems? NoSQL systems can be broadly classified as master-slave sharded, or peer-to-peer sharded systems. NoSQL systems with a peer-to-peer architecture have an interesting way of handling changes. Whenever an item is changed, the client (or an intermediary) propagates the changes synchronously or asynchronously to multiple copies (for availability) of the data. Since the change can be propagated asynchronously, during some interval in time, it will be the case that some copies have received the update, and others haven’t. What happens if someone tries to read the item during this interval? The client in a peer-to-peer system will fetch the same item from multiple copies and compare them to each other. If they’re all the same, then every copy that was queried has the same (and up-to-date) value of the data item, so all’s good. If not, then the system provides a mechanism to reconcile the discrepancy and to update stale copies. So what’s the problem with this? There are two major issues: First, IT’S HORRIBLY PESSIMISTIC because, in the common case, it is unlikely that the same data item will be updated and read from different locations at around the same time! For every read operation, you have to read from multiple copies. 
That’s pretty expensive, especially if the data are stored in multiple geographically separate locations and network latencies are high. Second, if the copies are not all the same, the application has to reconcile the differences and propagate the correct value to the out-dated copies. This means that the application program has to handle discrepancies in the different versions of the data item and resolve the issue (which can further add to cost and operation latency). Resolving discrepancies is only one part of the problem. What if the same data item was updated independently on two different nodes (copies)? In that case, due to the asynchronous nature of change propagation, you might land up with different versions of the data item in different copies. In this case, the application program also has to resolve conflicts and then propagate the correct value to the copies that are out-dated or have incorrect versions. This can get really complicated. My hunch is that there are many peer-to-peer-based applications that don’t handle this correctly, and worse, don’t even know it. Imagine having 100s of millions of records in your database – how can you tell whether a particular data item is incorrect or out of date? And what price are you willing to pay for ensuring that the data can be trusted? Multiple network messages per read request? Discrepancy and conflict resolution logic in the application, and potentially, additional messages? All this overhead, when all you were trying to do was to read a data item. Wouldn’t it be simpler to avoid this problem in the first place? Master-slave architectures like the Oracle NoSQL Database handle this very elegantly. A change to a data item is always sent to the master copy. Consequently, the master copy always has the most current and authoritative version of the data item. The master is also responsible for propagating the change to the other copies (for availability and read scalability). Client drivers are aware of master copies and replicas, and client drivers are also aware of the “currency” of a replica. In other words, each NoSQL Database client knows how stale a replica is. This vastly simplifies the job of the application developer. If the application needs the most current version of the data item, the client driver will automatically route the request to the master copy. If the application is willing to tolerate some staleness of data (e.g. a version that is no more than 1 second out of date), the client can easily determine which replica (or set of replicas) can satisfy the request, and route the request to the most efficient copy. This results in a dramatic simplification in application logic and also minimizes network requests (the driver will only send the request to exactly the right replica, not many). So, back to my original point. A well designed and well architected system minimizes or eliminates unnecessary overhead and avoids pessimistic algorithms wherever possible in order to deliver a highly efficient and high performance system. If you’ve ever programmed an Oracle NoSQL Database application, you’ll know the difference!
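    To make the read-routing idea concrete, here is a minimal, self-contained Java sketch of how a client driver might choose between the master and a replica based on a staleness budget. The class and method names are illustrative only; this is not the actual Oracle NoSQL Database client API, just the shape of the logic described above.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch: a master plus replicas whose lag is known to the driver.
    class Replica {
        final String name;
        final boolean isMaster;
        final long lagMillis;          // how far behind the master this copy is known to be
        Replica(String name, boolean isMaster, long lagMillis) {
            this.name = name; this.isMaster = isMaster; this.lagMillis = lagMillis;
        }
    }

    public class ReadRouter {
        private final List<Replica> nodes = new ArrayList<>();

        void register(Replica r) { nodes.add(r); }

        // Route a read: absolute consistency goes to the master, otherwise any
        // replica whose known lag fits inside the caller's staleness budget.
        Replica routeRead(long maxStalenessMillis) {
            for (Replica r : nodes) {
                if (maxStalenessMillis == 0 && r.isMaster) return r;
                if (maxStalenessMillis > 0 && r.lagMillis <= maxStalenessMillis) return r;
            }
            for (Replica r : nodes) if (r.isMaster) return r;   // master is always current
            throw new IllegalStateException("no master registered");
        }

        public static void main(String[] args) {
            ReadRouter router = new ReadRouter();
            router.register(new Replica("replica-1", false, 800));   // ~0.8 s behind
            router.register(new Replica("replica-2", false, 5000));  // ~5 s behind
            router.register(new Replica("master", true, 0));

            System.out.println(router.routeRead(0).name);     // needs latest value -> master
            System.out.println(router.routeRead(1000).name);  // tolerates 1 s -> replica-1
        }
    }

    A real driver would keep the lag figures fresh from the replication stream; the point is simply that each read goes to one well-chosen copy instead of being fanned out to many.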

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes. This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in details regarding why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration. So, Why Good? Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn’t matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That’s a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out significant part of your data center, fire that melts your assets, water leakage from the cooling system, human errors such as accidental deletion of online redo log files - it doesn’t matter - you can have that “Om - peace” look on your face and then you can failover to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not so imaginary conversation: IT Manager: Well, what’s the status? You: John is doing the trace analysis on the storage array. IT Manager: So? How long is that gonna take? You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>. IT Manager: So, no root cause yet? You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take. IT Manager: Darn it - the site is down! You: Not really … IT Manager: What do you mean? You: John is stuck, but Sreeni has already done a failover to the Data Guard standby. IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know? You: We didn’t lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just failover. No data loss, and we are up and running in minutes. The Business guys don’t need to know. IT Manager: Wow! Are we great or what!! You: I guess … Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, “TANSTAAFL”, or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA. How Far Can We Go? Someone may say at this point - well, I can’t use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? 
Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don’t like to hear - “It depends!” It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and Oracle Database 11.2 High Availability Best Practices Manual talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact less than 5% for a busy OLTP application. Even if you can’t deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located 1000-s of miles away, here’s another nifty way to boost your HA. Have a local standby, configured SYNC. How local is “local”? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds. Wait - did I say “seconds”? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho’ that this time you won’t have to wait for another year for this.
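    For readers who want to see what a “configured SYNC” redo transport destination looks like on the primary, here is a hedged sketch; the service name and DB_UNIQUE_NAME are placeholders for your own standby, and NET_TIMEOUT should be tuned to your network rather than copied as-is.

    -- Sketch only: point redo transport at a standby in synchronous mode.
    -- 'standby_svc' and 'standby_db' are placeholder names.
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_svc SYNC AFFIRM NET_TIMEOUT=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db' SCOPE=BOTH;
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = ENABLE SCOPE=BOTH;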

    Read the article

  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
    Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story and how I think big data could actually fix this (and potentially prevent some of this from happening). In the end this story should illustrate how big data can benefit me (get me what I want without causing grief) and the company I am trying to buy something from. Note: Lots of details left out, I have no intention of being the annoyed blogger moaning about a specific company. What did I want to get? We watch TV, we have internet and we do have a land line. The land line is from a different vendor then the TV and the internet. I have decided that this makes no sense and I was going to get a bundle (no need to infer who this is, I just picked the generic bundle word as this is what I want to get) of all three services as this seems to save me money. I also want to not talk to people, I just want to click on a website when I feel like it and get it all sorted. I do think that is reality. I want to just do my shopping at 9.30pm while watching silly reruns on TV. Problem 1 - Bad links So, I'm an existing customer of the company I want to buy my bundle from. I go to the website, I click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work. Big data solutions: 1) Do not show an existing customer the offers for new customers unless they are the same => This is only partially doable without login, but if a customer logs in the application should always know that this is an existing customer. But in general, imagine I do this from my home going through the internet service of this vendor to their domain... an instant filter should move me into the "existing customer route". 2) Flag dead or incorrect links => I've clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly make a list of potentially incorrect links. No need for realtime fixing, just the fact that this link can be pro-actively fixed across my entire web domain is a good thing. Preventative maintenance! Problem 2 - Purchase cannot be completed Apart from the fact that the browsing pattern to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart and when I want to move on the application either crashes (with me going to an error page) or hangs or goes into something like chat. So I try again, and again and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked on the feedback button and, frustrated as I was, tried to explain this did not work... Big Data Solutions: 1) This web site does shopping cart analysis. I got an email next day stating I have things in my shopping cart, just click here to complete my purchase. After the above experience, this just added insult to my pain... 2) What should have happened, is a Hadoop job going over all logged in customers that are on the buy flow. 
It should flag anyone who is trying (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants and the feedback provided (note: always own your own website feedback, never just farm this out!!) and in a short turnaround time (30 minutes to 2 hours or so) email me with a link to complete my purchase. Not with a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted... Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But, now I am ranting... Why is this interesting? It is because of $$$$. After trying really hard, I mean I did this all in the evening, and again in the morning before going to work, I kept on failing, but I really wanted this to work... So an email that said: sorry, we noticed you tried to get a bundle (the log knows what I wanted, where I failed, so easy to generate), here is the link to click and complete your purchase, and here are two movies on us as an apology. That would have kept me as a customer, and got the additional $$$$ per month for the next couple of years. It would also lead to upsell on my phone package etc. Instead, I went to a completely different company and bought service from them. Lost money for company A, negative sentiment for company A and me telling this story at the water cooler so I'm influencing more people to think negatively about company A. All in all, a loss of easy money, a ding in sentiment and image where a relatively simple solution exists and can be in place on the software I describe routinely in this blog... For those who are coming to Openworld and maybe see value in solving the above, or are thinking of how to solve this, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
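    As a rough illustration of the “flag anyone who is trying” idea, here is a small Python sketch that scans clickstream records for logged-in users who hit the same buy-flow step repeatedly without completing a purchase. The field names and threshold are invented for the example; a production version would run as a Hadoop or event-processing job over the real logs rather than an in-memory list.

    from collections import defaultdict

    # Each record: (user_id, step, outcome) - field names are illustrative.
    clicks = [
        ("u42", "bundle_checkout", "error"),
        ("u42", "bundle_checkout", "error"),
        ("u42", "bundle_checkout", "error"),
        ("u7",  "bundle_checkout", "purchased"),
    ]

    FAILURE_THRESHOLD = 3  # invented threshold: 3+ failed attempts triggers a follow-up

    def users_to_follow_up(records):
        failures = defaultdict(int)
        completed = set()
        for user, step, outcome in records:
            if step != "bundle_checkout":
                continue
            if outcome == "purchased":
                completed.add(user)
            else:
                failures[user] += 1
        return [u for u, n in failures.items() if n >= FAILURE_THRESHOLD and u not in completed]

    if __name__ == "__main__":
        # Would feed an email job with a "complete your purchase" link, not just a cart reminder.
        print(users_to_follow_up(clicks))   # -> ['u42']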

    Read the article

  • What is the best shopping cart or implementation for unlimited users posting unlimited products? [closed]

    - by Matt
    I've been working with x-cart a lot lately, and I was thinking about using it for a much larger site, but I don't know if it can handle what I'm looking for. I need a platform or strategy that can allow for as many users as possible where each can post multiple products (hopefully up to a hundred, but that's less important), but in their own private catalogs. So what am I looking for? With x-cart, I'm used to customizing it with jQuery, Smarty, and PHP, so I can handle that much.

    Read the article

  • A couple of links to our products and 10 pages of crack/keygen/torrent/etc.

    - by devdept
    If you try searching for our company and product name you'll get two useful links and 10 pages of hacker sites where eventually you can download the cracked version of our products. How can we clean up the hacker links and leave only useful links to our product pages? We already checked the Google URL Removal Tool, but within the 'Removal Type' options there is nothing meaningful to specify in this case. Should we proceed with it all the same? Thanks.

    Read the article

  • iPhone: How to Display Text from UIWebView HTML Document in a UITextView

    - by ArgMan
    I have an RSS feed that gets arranged in a UITableView which lets the user select a story that loads in a UIWebView. However, I'd like to stop using the UIWebView and just use a UITextView or UILabel. This png is what I am trying to do (just display the various text aspects of a news story): I have tried using: NSString *myText = [webView stringByEvaluatingJavaScriptFromString:@"document.documentElement.textContent"]; and assigning the string to a UILabel, but it doesn't work from where I am implementing it in webViewDidFinishLoad (is that not the proper place?). I get a blank textView and normal webView. If I overlay a UITextView on top of a UIWebView on its own (that is, a webView that just loads one page), the code posted above works and displays the text fine. The problem arises when I try to process the RSS feed. I've been stuck wondering why this doesn't work as it should for a few days now. If you have a better, more efficient way of doing it than placing the code in webViewDidFinishLoad, please let me know! Does it go in my didSelectRowAtIndexPath? Thank you very much in advance!
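    For reference, here is a minimal sketch of the approach the question describes: pull the page text in the delegate callback and push it into a label. It assumes self.webView and self.storyLabel are existing outlets in the view controller; it is an illustration of the pattern, not a fix for the RSS case.

    // Sketch only: UIWebView's JavaScript bridge returns a plain string, so the
    // whole document text can be copied into a UILabel or UITextView.
    - (void)webViewDidFinishLoad:(UIWebView *)webView
    {
        NSString *storyText = [webView stringByEvaluatingJavaScriptFromString:
                               @"document.documentElement.textContent"];
        if ([storyText length] > 0) {
            self.storyLabel.text = storyText;   // or assign to a UITextView's text
        }
    }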

    Read the article

  • jquery load and tinymce gives me a "g.win.document is null" on subsequent loads

    - by Patrice Peyre
    So... very excited, as this is my first post on Stack Overflow. I have this page where clicking a button will call editIEPProgram(). editIEPProgram counts how many specific divs exist, adds another div, loads it with HTML from '/content/mentoring/iep_program.php' and then calls $("textarea.tinymce").tinymce to add a rich text editor to the newly added text boxes. It finally adds a tab which handles the new stuff and Bob's your uncle. It ALMOST works. Clicking the button the first time adds everything and everything works; on the second and subsequent clicks, everything is added but the tinymce fails to initialize (I guess) and gives me "g.win.document is null". I have been on it for some time now and am losing my sanity. Please help me.
    function editIEPProgram(){
        var index = $("div[id^=iep_program]").size();
        var target_div = "iep_program_"+index;
        $("#program_container")
            .append(
                $("<div></div>")
                    .attr("id", target_div)
                    .addClass("no_overflow_control")
            );
        $("#"+target_div).load(
            "/content/mentoring/iep_program.php",
            {index:index},
            function() {
                $("textarea.tinymce").tinymce({
                    script_url: "/scripts/tiny_mce.js",
                    height: "60",
                    theme: "advanced",
                    skin: "o2k7",
                    skin_variant: "silver",
                    plugins: "spellchecker,advlist,preview,pagebreak,style,layer,save,advhr,advimage,advlink,searchreplace,paste,noneditable,visualchars,nonbreaking,xhtmlxtras",
                    theme_advanced_buttons1: "spellchecker,|,bold,italic,underline,strikethrough,|,formatselect,|,forecolor,backcolor,|,bullist,numlist,|,outdent,indent,|,undo,redo,|,link,unlink,image,cleanup,preview",
                    theme_advanced_buttons2: "",
                    theme_advanced_buttons3: "",
                    theme_advanced_toolbar_location: "top",
                    theme_advanced_toolbar_align: "left",
                    content_css: "/css/main.css"
                });
            });
        $("#tab_container")
            .tabs("add", "#"+target_div, "New program")
            .tabs("select", $("#tab_container").tabs("length")-1);
    }

    Read the article

  • Castle ActiveRecord "Could not compile the mapping document: (string)"

    - by Nick
    Hi, I am getting an exception when trying to initialize ActiveRecord and I cannot figure out what I am missing. I am trying to convince the company I work for to use Castle ActiveRecord and it won't look good if I can't demonstrate how it works. I have worked on projects with Castle ActiveRecord before and never experienced this problem. Thanks for your help. The exception that I get is:
    Stack Trace:
    at Castle.ActiveRecord.ActiveRecordStarter.AddXmlString(Configuration config, String xml, ActiveRecordModel model)
    at Castle.ActiveRecord.ActiveRecordStarter.AddXmlToNHibernateCfg(ISessionFactoryHolder holder, ActiveRecordModelCollection models)
    at Castle.ActiveRecord.ActiveRecordStarter.RegisterTypes(ISessionFactoryHolder holder, IConfigurationSource source, IEnumerable`1 types, Boolean ignoreProblematicTypes)
    at Castle.ActiveRecord.ActiveRecordStarter.Initialize(IConfigurationSource source, Type[] types)
    at ConsoleApplication1.Program.Main(String[] args) in C:\Projects\CastleDemo\ConsoleApplication1\Program.cs:line 20
    at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
    at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
    at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()
    Inner Exception: {"Could not compile the mapping document: (string)"}
    Below is my configuration file:
    <add key="connection.driver_class" value="NHibernate.Driver.SqlClientDriver" />
    <add key="dialect" value="NHibernate.Dialect.MsSql2000Dialect" />
    <add key="connection.provider" value="NHibernate.Connection.DriverConnectionProvider" />
    <add key="connection.connection_string" value="Data Source=SPIROS\SQLX;Initial Catalog=CastleDemo;Integrated Security=SSPI" />
    <add key="proxyfactory.factory_class" value="NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle" />
    and this is the main method that runs the initialization:
    static void Main(string[] args)
    {
        // Configure ActiveRecord source
        XmlConfigurationSource source = new XmlConfigurationSource("../../config.xml");
        // Initialize ActiveRecord
        ActiveRecordStarter.Initialize(source, typeof(Product));
        // Create Schema
        ActiveRecordStarter.CreateSchema();
    }

    Read the article

  • SSIS package randomly hangs during execution

    - by Adam MacLeod
    Hi Guys, I am having an ongoing and painful problem with an SSIS package. The package runs every 5 minutes as an SQL Agent Job and every 2-10 days the package will start running and never stop (thus preventing further executions). If I stop the hung job manually it will begin working perfectly again in the next 5 minute interval. The SSIS package is for moving data from an Oracle database to a MSSQL 2005 database. It has 7 steps: Step 1 calls an Oracle Stored Procedure to prepare the temporary tables inside ORACLE Steps 2-6 process the data from the ORACLE tables to the MSSQL tables ORACLE - MSSQL Step 7 calls an Oracle Stored Procedure to clear the ORACLE temporary tables I suspect that the issue is caused by a communications error between the MSSQL server and the ORACLE server. Both the MSSQL database and Agent/package run on one machine with the ORACLE database running over the network. I have enabled logging of the SQL package and after more than 2GB of log file I have captured the instant where the package stops responding: OnPreValidate,ADV-SRV5,NT AUTHORITY\SYSTEM,CallistaIntegrationToMonashCRM_delta,{F88F6C45-CFA2-4801-A2F2-DDF03D458A48},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,(null) OnPreValidate,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,(null) OnInformation,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,1074016266,0x,Validation phase is beginning. OnProgress,ADV-SRV5,NT AUTHORITY\SYSTEM,Address,{c5907799-f918-43da-818a-d4bd7f188367},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,Validating Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_pre: The object is ready to make the following external request: 'IDataInitialize::GetDataSource'. Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_post: 'IDataInitialize::GetDataSource succeeded'. The external request has completed. Diagnostic,ADV-SRV5,NT AUTHORITY\SYSTEM,Callista,{cb5d6fe3-3ea4-4453-8e5a-965818021df7},{3A1FB1E3-B76D-444D-876B-D1FBBB9BA246},6/06/2010 10:15:01 AM,6/06/2010 10:15:01 AM,0,0x,ExternalRequest_pre: The object is ready to make the following external request: 'IDBInitialize::Initialize'. These messages show the entire log generated for the failed run, for a successful run the output is typically ~2500 lines. I can see that the package is hanging during the initialize operation on the Callista connection (ORACLE database). I have not been able to work out a way to either fix this issue or have the package die gracefully (an error to the log would be A-OK with me). Any help or advice would be greatly appreciated.

    Read the article

  • Add KO "data-bind" attribute on $(document).ready

    - by M.Babcock
    Preface: I've rarely ever been a JS developer and this is my first attempt at doing something with Knockout.js. The question to follow likely illustrates both points. Background: I have a fairly complex MVC3 application that I'm trying to get to work with KO (v2.0.0.0). My MVC app is designed to generically control which fields appear in the view (and how they are added to the view). It makes use of partial views to decide what to draw in the view based on the user's permissions (if the user is in group A then show control A, if the user is in group B then show control B, or possibly if the user is in group A don't include the control at all). Also, my model is very flat, so I'm not sure the built-in ability to apply my ViewModel to a specific portion of the view will help. My solution to this problem is to provide an action in my controller that responds with a JSON object containing the jQuery selector and the content to assign to the "data-bind" attribute, and to bind the ViewModel to the View in the $(document).ready event using the values provided. Failed proof-of-concept: My first attempt at proving that this works doesn't actually seem to work, and by "doesn't work" I mean it just doesn't bind the values at all (as can be seen in this jsfiddle). I've tried it with the applyBindings inside of the ready event and not, but it doesn't seem to make any difference. Question: What am I doing wrong? Or is this just not something that can work with KO (though I've seen at least one example online doing the same thing and it supposedly works)? Like I said in the preface, I've only ever pretended to be a JS developer (though I've generally gotten it to work in the past) so I'm at a loss where to start trying to figure out what I'm doing wrong. Hopefully this isn't a real noob question.
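    For reference, here is a minimal sketch of the pattern the question describes: set the data-bind attributes from data returned by the server, then call ko.applyBindings once they are in place. The selector/binding pairs and the view model are invented for the example and stand in for whatever the MVC action actually returns.

    // Sketch only: "bindings" stands in for the JSON the controller action would return.
    var bindings = [
        { selector: "#firstName", binding: "value: firstName" },
        { selector: "#lastName",  binding: "value: lastName"  }
    ];

    var viewModel = {
        firstName: ko.observable("Ada"),
        lastName:  ko.observable("Lovelace")
    };

    $(document).ready(function () {
        // The attributes must exist before applyBindings runs, so set them first.
        $.each(bindings, function (i, b) {
            $(b.selector).attr("data-bind", b.binding);
        });
        ko.applyBindings(viewModel);
    });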

    Read the article

  • How do I align ReSharper's "cleanup code" with Visual Studio's "format document"

    - by Thomas Jespersen
    I'm a big fan of ReSharper's "cleanup code" feature, especially the solution-wide cleanup. But when I use Visual Studio's Ctrl+K+D (Format Document), it formats the code slightly differently than ReSharper does. I'm on a quest to align ReSharper with Visual Studio (not the other way around, because you cannot share Visual Studio settings in the solution/source control system). So I'm after something like this:
    <Configuration>
      <CodeStyleSettings>
        <Sharing>SOLUTION</Sharing>
        <CSharp>
          <FormatSettings>
            <SPACE_AROUND_MULTIPLICATIVE_OP>True</SPACE_AROUND_MULTIPLICATIVE_OP>
            <SPACE_BEFORE_TYPEOF_PARENTHESES>False</SPACE_BEFORE_TYPEOF_PARENTHESES>
          </FormatSettings>
        </CSharp>
      </CodeStyleSettings>
    </Configuration>
    Which other settings will help ReSharper format code like Visual Studio?

    Read the article

  • XML document being parsed as single element instead of sequence of nodes

    - by Rob Carr
    Given xml that looks like this: <Store> <foo> <book> <isbn>123456</isbn> </book> <title>XYZ</title> <checkout>no</checkout> </foo> <bar> <book> <isbn>7890</isbn> </book> <title>XYZ2</title> <checkout>yes</checkout> </bar> </Store> I am getting this as my parsed xmldoc: >>> from xml.dom import minidom >>> xmldoc = minidom.parse('bar.xml') >>> xmldoc.toxml() u'<?xml version="1.0" ?><Store>\n<foo>\n<book>\n<isbn>123456</isbn>\n</book>\n<t itle>XYZ</title>\n<checkout>no</checkout>\n</foo>\n<bar>\n<book>\n<isbn>7890</is bn>\n</book>\n<title>XYZ2</title>\n<checkout>yes</checkout>\n</bar>\n</Store>' Is there an easy way to pre-process this document so that when it is parsed, it isn't parsed as a single xml element?
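    One thing worth noting about the session above: minidom does build a full tree from that document, and toxml() simply serializes it back to one string. As a hedged illustration (element names are taken from the sample, the file name and printed fields are assumed), the children of <Store> can be walked directly:

    from xml.dom import minidom

    xmldoc = minidom.parse('bar.xml')   # the sample document from the question

    # Walk each entry under <Store> and pull out the fields shown in the sample.
    for entry in xmldoc.documentElement.childNodes:
        if entry.nodeType != entry.ELEMENT_NODE:
            continue                     # skip the whitespace-only text nodes
        isbn = entry.getElementsByTagName('isbn')[0].firstChild.data
        title = entry.getElementsByTagName('title')[0].firstChild.data
        print(entry.tagName, isbn, title)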

    Read the article

  • XmlSlurper/NekoHTML document fragment parsing - No HTML or BODY tags wanted

    - by Misha Koshelev
    Dear All, I am trying to parse the following HTML fragment, and I would like to get the same fragment as output (without HTML and BODY tags). Is this possible? If so, how? Thank you Misha p.s. I am reading here: http://nekohtml.sourceforge.net/faq.html#fragments and I believe I have added the correct options below. However, the output is still incorrect :( Thank you Misha import groovy.xml.MarkupBuilder import groovy.xml.StreamingMarkupBuilder import groovy.util.XmlNodePrinter import groovy.util.slurpersupport.NodeChild def text=""" <div><h2>Test</h2> <div>Hi</div> </div> """ // Parse def config=new org.cyberneko.html.HTMLConfiguration() config.setFeature("http://cyberneko.org/html/features/balance-tags/document-fragment",true) def html=new XmlSlurper(new org.cyberneko.html.parsers.SAXParser()).parseText(text) // Output def printNode(NodeChild node) { def writer = new StringWriter() writer << new StreamingMarkupBuilder().bind { mkp.declareNamespace('':node[0].namespaceURI()) mkp.yield node } new XmlNodePrinter().print(new XmlParser().parseText(writer.toString())) } printNode(html) Output: <HTML> <tag0:HEAD xmlns:tag0="http://www.w3.org/1999/xhtml"/> <BODY> <DIV> <H2> Test </H2> <DIV> Hi </DIV> </DIV> </BODY> </HTML>

    Read the article

  • Explaining verity index and document search limits

    - by Ahmad
    At present we have a CF8 Standard Edition server, which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (limits are for all collections registered to Verity Server):
    - 10,000 documents for ColdFusion Developer Edition
    - 125,000 documents for ColdFusion Standard Edition
    - 250,000 documents for ColdFusion Enterprise Edition
    We have now reached a stage where the server-wide number of documents indexed exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow). Only one collection is ever searched at a time. In my understanding, this means that I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents that were indexed across all collections prior to reaching the limit are actually searchable? We are considering moving to CF9 Standard as a solution to this, and to use the Solr solution which has no restrictions. The coldfusionjedi highlights some differences between Verity and Solr. However, I am trying to gain a clearer understanding of this before we commit to an upgrade. Can someone provide a clear explanation as to what this means and how it actually affects Verity searching and indexing?

    Read the article

  • What electronic scrum/kanban board do you use and recommend for distributed teams?

    - by Derick Bailey
    I have a coworker on a team that is fairly distributed, fairly large (for our company) and wants to take advantage of visual management tools like scrum / kanban boards. Since they are a somewhat distributed team, though, all of the issue management / work management must be done via an electronic tool (we currently use Trac). What issue / work management tools, with a visualization of a scrum / kanban board, do you use for your distributed scrum / kanban teams? would you recommend it, and if so, why?

    Read the article

  • Exporting data from php page to word document

    - by udaya
    Hi, I exported data from a PHP page to a Word document, but the problem is that the header is not available on all pages. I had the same problem when exporting the data to PDF, but I got it working there by using the FPDF library. In the PDF I got results like this: page 1 shows slno, name: 1 udaya, 2 sankar; page 2 shows slno, name: 3 chendu, 4 Akila. I want the same kind of result in Word. How do I get that? This is the function I used:
    function changeDetails()
    {
        $bType = $this->input->post('textvalue');
        if ($bType == "word") {
            $this->load->library('table');
            $data['countrytoword'] = $this->AddEditmodel1->export();
            $this->table->set_heading('Name', 'Country', 'State', 'Town');
            $out = $this->table->generate($data['countrytoword']);
            header("Content-Type: application/vnd.ms-word");
            header("Expires: 0");
            header("Cache-Control: must-revalidate, post-check=0, pre-check=0");
            header("Content-disposition: attachment; filename=$cur_date.doc");
            echo '';
            echo 'CountryList';
            print_r($out);
        }
    }
    Name Country State Town

    Read the article

  • decompressing .gZ file from Document directory?

    - by senthilmuthu
    hi, i am having .gZ (zip file) in document directory.i want unZip it.but i am using libz.dylib framework .will it decompress and save all data to that file path?how can i get that extracted data?any has experienced in doing this?any help?when i use the method,but when i put break point, it returns data error(used in NSLog)--Z_DATA_ERROR-- - (id)initWithGzippedData: (NSData *)gzippedData; { [gzippedData retain]; if ([gzippedData length] == 0) return nil; unsigned full_length = [gzippedData length]; unsigned half_length = [gzippedData length] / 2; NSMutableData *decompressed = [[NSMutableData alloc] initWithLength:(full_length + half_length)]; BOOL done = NO; int status; z_stream strm; strm.next_in = (Bytef *)[gzippedData bytes]; strm.avail_in = [gzippedData length]; strm.total_out = 0; strm.zalloc = Z_NULL; strm.zfree = Z_NULL; if (inflateInit2(&strm, (15+32)) != Z_OK) { [gzippedData release]; [decompressed release]; return nil; } while (!done) { // Make sure we have enough room and reset the lengths. if (strm.total_out >= [decompressed length]) [decompressed increaseLengthBy: half_length]; strm.next_out = [decompressed mutableBytes] + strm.total_out; strm.avail_out = [decompressed length] - strm.total_out; // Inflate another chunk. status = inflate (&strm, Z_SYNC_FLUSH); if(status == Z_DATA_ERROR) { NSLog(@"data error"); } if (status == Z_STREAM_END) done = YES; else if (status != Z_OK) break; } if (inflateEnd (&strm) != Z_OK) { [decompressed release]; return nil; } // Set real length. [decompressed setLength: strm.total_out]; id newObject = [self initWithBytes:[decompressed bytes] length:[decompressed length]]; [decompressed release]; [gzippedData release]; return newObject; }

    Read the article

  • Linq to XML Document Traversal

    - by Perpetualcoder
    I have an XML document like this:
    <?xml version="1.0" encoding="utf-8" ?>
    <demographics>
      <country id="1" value="USA">
        <state id="1" value="California">
          <city>Long Beach</city>
          <city>Los Angeles</city>
          <city>San Diego</city>
        </state>
        <state id="2" value="Arizona">
          <city>Tucson</city>
          <city>Phoenix</city>
          <city>Tempe</city>
        </state>
      </country>
      <country id="2" value="Mexico">
        <state id="1" value="Baja California">
          <city>Tijuana</city>
          <city>Rosarito</city>
        </state>
      </country>
    </demographics>
    How do I set up LINQ queries for doing things like: 1. Get All Countries 2. Get All States in a Country 3. Get All Cities inside a state of a particular country? I gave it a try and I am kind of confused when to use Elements["NodeName"] and Descendants etc. I know I am not the brightest XML guy around. Is the format of the XML file even correct for simple traversal?
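    As a hedged sketch against the sample document above (assuming it is saved as demographics.xml; the file name and the hard-coded country/state values are just for illustration), the three queries can be written with Elements and Attribute calls:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class DemographicsQueries
    {
        static void Main()
        {
            XDocument doc = XDocument.Load("demographics.xml");   // the sample file above

            // 1. All countries
            var countries = doc.Root.Elements("country")
                                    .Select(c => (string)c.Attribute("value"));

            // 2. All states in a given country
            var statesInUsa = doc.Root.Elements("country")
                                      .Where(c => (string)c.Attribute("value") == "USA")
                                      .Elements("state")
                                      .Select(s => (string)s.Attribute("value"));

            // 3. All cities in a given state of a given country
            var citiesInArizona = doc.Root.Elements("country")
                                          .Where(c => (string)c.Attribute("value") == "USA")
                                          .Elements("state")
                                          .Where(s => (string)s.Attribute("value") == "Arizona")
                                          .Elements("city")
                                          .Select(ct => ct.Value);

            Console.WriteLine(string.Join(", ", countries));       // USA, Mexico
            Console.WriteLine(string.Join(", ", statesInUsa));     // California, Arizona
            Console.WriteLine(string.Join(", ", citiesInArizona)); // Tucson, Phoenix, Tempe
        }
    }

    Elements("name") walks one level at a time, which keeps the country/state/city hierarchy explicit; Descendants("city") would instead return every city in the document regardless of its parent.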

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be
    <Lists>
      <List name="foo">
        <Note title="note 1" .../>
        <Note title="note 2" .../>
      </List>
      <List name="bar">
        <Note title="note 3" .../>
      </List>
    </Lists>
    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
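    For illustration, here is a minimal sketch of the cancel-by-exception approach described in the question (using DefaultHandler rather than the older HandlerBase); the handler name, flag and exception type are invented for the example.

    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.DefaultHandler;

    // Sketch of the cancel-by-exception idea: the UI thread flips "cancelled",
    // and the worker thread's parse() unwinds with a SAXException we can recognise.
    public class CancellableNoteHandler extends DefaultHandler {

        static class ParseCancelledException extends SAXException {
            ParseCancelledException() { super("parse cancelled by user"); }
        }

        private volatile boolean cancelled = false;

        public void cancel() { cancelled = true; }   // call from the UI thread

        @Override
        public void startElement(String uri, String localName, String qName,
                                 Attributes attributes) throws SAXException {
            if (cancelled) {
                throw new ParseCancelledException();  // aborts the parse mid-document
            }
            if ("Note".equals(localName) || "Note".equals(qName)) {
                // handle a <Note .../> element here
            }
        }
    }

    The worker thread then wraps parser.parse(...) in a try/catch, treats ParseCancelledException as a normal cancellation, and lets any other SAXException surface as a real error; there is no need to kill the thread itself.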

    Read the article

  • Is there something like "if not exist create sequence ..." in Oracle SQL?

    - by Timo
    Very probably a noob question: For my application that uses an Oracle 8 DB, I am providing an SQL script to setup stuff like triggers, sequences etc., which can be copied and pasted into SQL*Plus. I would like the script to not stop with an error if a sequence that I am trying to create already exists. For a Trigger this can easily be done using "create or replace trigger ...", but for a sequence this does not work. Is there some alternative, like "if not exists mysequence then create sequence ..." (I tried this but it did not work :) ) Alternatively, if this is not possible, is there a way to do a "drop sequence mysequence" without SQL*Plus aborting the script if mysequence does not exist? Thanks. PS: I wish Oracle just had a simple Autoinc field type ... sigh.
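    For illustration (this is a common workaround, not part of the original question), the CREATE can be wrapped in a small PL/SQL block that first checks the data dictionary; MYSEQUENCE is a placeholder, and note that EXECUTE IMMEDIATE requires Oracle 8i or later, so older 8.0 releases would need DBMS_SQL instead.

    -- Sketch: create the sequence only if it is not already there.
    -- Data dictionary names are stored in upper case.
    DECLARE
      seq_count NUMBER;
    BEGIN
      SELECT COUNT(*) INTO seq_count
        FROM user_sequences
       WHERE sequence_name = 'MYSEQUENCE';
      IF seq_count = 0 THEN
        EXECUTE IMMEDIATE 'CREATE SEQUENCE mysequence START WITH 1 INCREMENT BY 1';
      END IF;
    END;
    /

    An equivalent block around DROP SEQUENCE, swallowing the "sequence does not exist" error, covers the drop-before-create case.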

    Read the article

  • What electronic scrum/kanban board do you use and recommend for distributed teams?

    - by Derick Bailey
    I have a coworker on a team that is fairly distributed, fairly large (for our company) and wants to take advantage of visual management tools like scrum / kanban boards. Since they are a somewhat distributed team, though, all of the issue management / work management must be done via an electronic tool (we currently use Trac). What issue / work management tools, with a visualization of a scrum / kanban board, do you use for your distributed scrum / kanban teams? would you recommend it, and if so, why? Thanks.

    Read the article

  • using jQuery to load the body of another HTML document between entries

    - by justin hall
    I'm trying to use jQuery to load the body of another HTML document, which contains our AdSense banner. It should display under each blog entry when in list view. I'm able to get it to load text, even images, but not the banner; the banner is a small script. Here is the website we are working with: http://neverknowtech.com/data/ (that's a test page, and the 'entry' displayed there contains the intended body content). As you can see here the body code does include an ad, and the word 'Ad'. The word 'Ad' is shown correctly below the post (as it is in list view) but the script for the Google ad doesn't seem to make it. Here is what we have in the footer currently (replace [ with < and ] with >):
    [script type="text/javascript"]
        var classSelector = ":nth-child(1n)";
        if ($(".list-journal-entry-wrapper .journal-entry-wrapper").length ] 0) {
            $('.list-journal-entry-wrapper .journal-entry-wrapper' + classSelector).after('[div class="journal-list-ad-insert"][/div]');
            $('.list-journal-entry-wrapper .journal-list-ad-insert').load("/data/journal-list-ad-insert.html .body");
        }
    [/script]
    note: the nth-child is there for when they decide how frequently to place the ads.

    Read the article

  • Send document to printer from web page

    - by FiveTools
    I have a webpage that activates a print job on a printer. This works in the localhost environment but does not work when the application is deployed to the webserver. I'm using the PrintDocument class from the .net System.Drawing.Print namespace. I'm now assuming the printer has to be available to the application on the remote server? Any suggestions on how I would get this to work? PrintDocument pd = new PrintDocument(); PaperSource ps = new PaperSource(); pd.DefaultPageSettings.PaperSize = new System.Drawing.Printing.PaperSize("Custom", 1180, 850); pd.PrintPage += new PrintPageEventHandler (this.pd_PrintPage); // Set your printer's name. Obtain from // System's Printer Dialog Box. pd.PrinterSettings.PrinterName = "Okidata ML 321 Turbo/D (IBM)"; //PrintPreviewDialog dlgPrintPvw = new PrintPreviewDialog(); //dlgPrintPvw.Document = pd; //dlgPrintPvw.Focus(); //dlgPrintPvw.ShowDialog(); pd.Print();

    Read the article

  • How to force oracle to use index range scan?

    - by wsb3383
    Hi, all. I have a series of extremely similar queries that I run against a table of 1.4 billion records (with indexes); the only problem is that at least 10% of those queries take 100x more time to execute than the others. I ran an explain plan and noticed that for the fast queries (roughly 90%) Oracle is using an index range scan (on my created), while on the slow ones it's using a full index scan. Is there a way to force Oracle to do an index range scan? Thanks!
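    For illustration (not from the original question), the usual lever for this in Oracle is an optimizer hint that names the index; the table alias, index name and predicate below are placeholders.

    -- Sketch: ask the optimizer to use a specific index for the range predicate.
    -- "t", "my_table", "my_table_created_ix" and the date range are placeholders.
    SELECT /*+ INDEX(t my_table_created_ix) */ *
      FROM my_table t
     WHERE t.created_at BETWEEN DATE '2010-01-01' AND DATE '2010-01-31';

    Gathering fresh statistics on the table and index is usually worth trying before pinning plans with hints.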

    Read the article
