Search Results

Search found 6104 results on 245 pages for 'fast esp'.

Page 63/245 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today's fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post.
    If we look at the solution architecture, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and provide by far the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10x boost when they move their ETL to ODI on Exadata; for some companies the performance gain is even higher. For example, a large insurance company ran a proof of concept comparing ODI with a traditional ETL tool (one of the market leaders) on Exadata. The same process that took 5 hours and 11 minutes to complete with the competing ETL product took 7 minutes and 20 seconds with ODI: Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you get the most out of your Exadata investment with a truly optimized solution.
    GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Oracle Data Integrator for hybrid use cases that also demand non-invasive capture and high-speed, real-time replication. Oracle GoldenGate captures real-time data feeds from heterogeneous sources non-invasively and delivers them to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine's power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single, intelligent data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers:
    - Non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source
    - No mid-tier; set-based transformations use the database's power
    - Mini-batches throughout the day or bulk processing nightly, which means maximum availability for the DW
    - An integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse
    In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom's fourth-largest food retailer, has seen the power of this solution for its new BI platform and shared its story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications, with the goal of achieving a single view into operations for improved customer service. The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real time and replicate the data into reporting structures within the data warehouse, extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds rather than minutes and improved staff productivity and agility. You can read more about Morrisons' success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • Looking for a real-world example illustrating that composition can be superior to inheritance

    - by Job
    I watched a bunch of lectures on Clojure and functional programming by Rich Hickey as well as some of the SICP lectures, and I am sold on many concepts of functional programming. I incorporated some of them into my C# code at a previous job, and luckily it was easy to write C# code in a more functional style. At my new job we use Python, and multiple inheritance is all the rage. My co-workers are very smart, but they have to produce code fast given the nature of the company. I am learning both the tools and the codebase, but the architecture itself slows me down as well. I did not write the existing class hierarchy (nor would I be able to remember everything about it), and so, when I started adding a fairly small feature, I realized that I had to read a lot of code in the process. On the surface the code is neatly organized, split into small functions/methods, and not copy-paste-repetitive, but the flip side of not being repetitive is that there is some magic functionality hidden somewhere in the hierarchy chain that glues things together and does work on my behalf, and it is very hard to find and follow. I had to fire up a profiler, run it through several examples and plot the execution graph, step through a debugger a few times, search the code for substrings, and just read pages at a time. I am pretty sure that once I am done, my resulting code will be short and neatly organized, and yet not very readable. What I write feels declarative, as if I were writing an XML file that drives some other magic engine, except that there is no clear documentation on what the XML should look like or what the engine does, apart from the existing examples that I can read as well as the source code for the 'engine'. There has got to be a better way. IMO using composition over inheritance can help quite a bit. That way the computation will be linear rather than jumping all over the hierarchy tree. Whenever the functionality does not quite fit into an inheritance model, it has to be mangled to fit in, or the entire inheritance hierarchy has to be refactored/rebalanced, sort of like an unbalanced binary tree needs reshuffling from time to time in order to improve the average seek time. As I mentioned before, my co-workers are very smart; they have just been doing things a certain way and probably have an ability to hold a lot of unrelated crap in their heads at once. I want to convince them to give composition, and a functional rather than OOP approach, a try; a minimal sketch of what I mean is shown below. To do that, I need to find some very good material. I do not think that a SICP lecture or one by Rich Hickey will do - I am afraid it would be flagged as too academic. And simple examples of Dog, Frog, and AddressBook classes do not really convince one way or the other - they show how inheritance can be converted to composition, but not why it is truly and objectively better. What I am looking for is a real-world example of code that was written with a lot of inheritance, then hit a wall and was re-written in a different style that uses composition. Perhaps there is a blog post or a book chapter. I am looking for something that can summarize and illustrate the sort of pain that I am going through. I have already been throwing the phrase "composition over inheritance" around, but it was not received as enthusiastically as I had hoped. I do not want to be perceived as the new guy who likes to complain and bash existing code while looking for a perfect approach and not contributing fast enough.
    At the same time, my gut is convinced that inheritance is often an instrument of evil, and I want to show a better way in the near future. Have you stumbled upon any great resources that can help me?
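    To make the kind of refactoring I mean concrete, here is a minimal, hypothetical sketch (the class names are invented for illustration, not taken from any real codebase): in the inheritance version the behaviour lives somewhere up the MRO and a reader has to go hunting for it, while in the composition version the collaborator is an explicit attribute you can see, swap, and test in isolation.
    import json

    # Inheritance: to_json() is inherited from a mixin, so a reader has to
    # walk the class hierarchy to find out where the behaviour comes from.
    class JsonSerializableMixin:
        def to_json(self):
            return json.dumps(self.payload())

    class Report(JsonSerializableMixin):
        def __init__(self, rows):
            self.rows = rows

        def payload(self):
            return {"rows": self.rows}

    # Composition: the serializer is a named collaborator passed in (or defaulted),
    # so the data flow is linear and the dependency is visible at the call site.
    class JsonSerializer:
        def serialize(self, payload):
            return json.dumps(payload)

    class ComposedReport:
        def __init__(self, rows, serializer=None):
            self.rows = rows
            self.serializer = serializer or JsonSerializer()

        def to_json(self):
            return self.serializer.serialize({"rows": self.rows})

    if __name__ == "__main__":
        print(Report([1, 2]).to_json())          # behaviour found via the hierarchy
        print(ComposedReport([1, 2]).to_json())  # behaviour found on the object itself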

    Read the article

  • Faster Trip to Innovation with Simplified Data Integration: Sabre Holdings Case Study

    - by Tanu Sood
    Author: Irem Radzik, Director of Product Marketing, Data Integration, Oracle
    In today's fast-paced, competitive environment, IT teams are under pressure to deliver technology solutions for many critical business initiatives as fast as possible. When the focus is on speed, it can be easy to keep using old-style, point-to-point custom scripts that grow organically to the point where they are unmanageable and too costly to maintain. As data volumes, data sources, and end users grow, uncoordinated data integration efforts create significant inefficiencies for both IT and business users. In addition to the IT productivity lost to maintaining a spaghetti architecture, data integrity becomes a concern as well. Errors caused by inconsistent data and manual data entry can prove very costly for companies and disrupt business activities. Many industry leaders now recognize that data should be moved in an automated and reliable manner across all platforms to have one version of the truth. By simplifying their data integration architecture and standardizing on a centralized approach, IT teams accelerate time to market. In particular, a centralized, shared-service approach brings agility, increases IT productivity, and frees up resources for innovation.
    One such industry leader that simplified its data integration architecture is Sabre Holdings. Sabre Holdings provides distribution and technology solutions for the travel industry, and is a winner of the Oracle Excellence Awards for Fusion Middleware in 2011 in the data integration category. I had the pleasure of hosting Sabre Holdings on a public webcast to discuss their data integration best practices for data warehousing. In this webcast, Sabre's Amjad Saeed presented how the company reduced complexity by consolidating systems and standardizing development on Oracle Data Integrator and Oracle GoldenGate for its global data warehouse development team. With Oracle's complete real-time data integration solution, Sabre also streamlined support and maintenance operations, achieved a real-time view of the execution of its integration processes, and can manage data warehouse and business intelligence performance on demand. By reducing complexity and leveraging timely market insights, the company was able to decrease time to market by 40%.
    You can now listen to the webcast on demand: Sabre Holdings Case Study: Accelerating Innovation using Oracle Data Integration. I invite you to hear directly from Sabre how to use advanced data integration capabilities to enable accelerated innovation. To learn more about Oracle's data integration offering you can download our free resources.

    Read the article

  • How Does a 724% Return on Your Salesforce Automation Investment Sound?

    - by Brian Dayton
    Oracle Sales Cloud and Marketing Cloud customer Apex IT gained just that, a 724% return on investment (ROI), when they implemented these Oracle Cloud solutions in their fast-moving, rapidly growing business.
    Congratulations Apex IT! Apex IT was just announced as a winner of the Nucleus Research 11th annual Technology ROI Awards. The award, given by the analyst firm, highlights organizations that have successfully leveraged IT deployments to maximize value per dollar spent.
    Fast Facts:
    - Return on investment: 724%
    - Payback: 2 months
    - Average annual benefit: $91,534
    - Cost-to-benefit ratio: 1:48
    Business Benefits: In addition to the ROI and cost metrics, the award calls out improvements in Apex IT's business operations, across both Sales and Marketing teams:
    - Improved ability to identify new opportunities and focus sales resources on higher-probability deals
    - Reduced administration and manual lead tracking, resulting in more time selling and a net new client increase of 46%
    - Increased campaign productivity for both Marketing and Sales, including Oracle Marketing Cloud's automation of campaign tracking and nurture programs
    - Improved margins with more structured and disciplined sales processes, resulting in more effective deal negotiations
    Please join us in congratulating Apex IT on this award and their business achievements. Want More Details? Don't take our word for it. Read the full Apex IT ROI Case Study and learn more about Apex IT's business, including their work with Oracle Sales and Marketing Cloud on behalf of their clients in leading Sales organizations.
    Learn More About Oracle Sales Cloud: www.oracle.com/salescloud | www.facebook.com/oraclesalescloud | www.youtube.com/oraclesalescloud
    Oracle Customer Experience and Complementary Sales Solutions: Oracle Configure, Price and Quote (CPQ) Cloud, Oracle Marketing Cloud, Oracle Customer Experience

    Read the article

  • C++: C++ Primer (Stanley Lippman) or The C++ Programming Language (Special Edition)

    - by Kim
    I have a Computer Science degree (from a long time ago). I know Java OOP, but I am now trying to pick up C++. I have done C and, of course, data structures using C or Pascal. I have started reading Bjarne Stroustrup's book (The C++ Programming Language - Special Edition) but find it extremely difficult, esp. some sections where I have no exposure, such as the recursive descent parser (chapter 6). In terms of the language itself I don't foresee a problem, but I do have the problem mentioned, because those topics are usually covered in a Master's degree program, such as compiler construction. I just bought a book called C++ Primer (Stanley Lippman), which I hear is a very good book for C++. The only setback is that it is of course no match for the amount of information from the original C++ creator. Please advise. Thanks.

    Read the article

  • WinForms ReportViewer: slow initial rendering

    - by Bryan Roth
    UPDATE 2.4.2010: Yeah, this is an old question, but I thought I would give an update. I'm working with the ReportViewer again and it's still rendering slowly on the initial load. The only difference is that the SQL database is on the reporting server.
    UPDATE 3.16.2009: I have done profiling and it's not the SQL that is making the ReportViewer render slowly on the first call. On the first call, the ReportViewer control locks up the UI thread and makes the program unresponsive. After about 5 seconds the ReportViewer unlocks the UI thread, displays "Report is being generated", and then finally shows the report. I know 5 seconds is not much, but this shouldn't be happening. My coworker does the same thing in a program of his and the ReportViewer immediately displays "Report is being generated" upon any request. The only difference is that the reporting server is on one server and the data is on another server. However, when I am developing the reports within SSRS, there is no delay.
    UPDATE: I have noticed that only the first load of the ReportViewer takes a long time; each subsequent load of the same or different reports loads fast.
    I have a WinForms ReportViewer that I'm using in Remote processing mode that can take up to 30 seconds to render when the ReportViewer.RefreshReport() method is called. However, the report itself runs fast. This is the code to set up my ReportViewer:
    rvReport.ProcessingMode = ProcessingMode.Remote
    rvReport.ShowParameterPrompts = False
    rvReport.ServerReport.ReportServerUrl = New Uri(_reportServerURL)
    rvReport.ServerReport.ReportPath = _reportPath
    This is where the ReportViewer can take up to 30 seconds to render:
    rvReport.RefreshReport()

    Read the article

  • c# WinForms ReportViewer Performance issue using RefreshReport() and ServerReport.SetParameters()

    - by mdk
    Hi all, I am currently writing a C# client application that uses the WinForms ReportViewer control to display reports from a remote server. I am having performance trouble with the ReportViewer control, specifically with the two methods reportViewer.ServerReport.SetParameters() and reportViewer.RefreshReport(): they both take a really long time to complete, and not just on the very first call, but on each subsequent call as well. SetParameters() takes 20 to 40 seconds (the times vary greatly; some calls execute reasonably fast) and RefreshReport() is a bit faster but still takes ages. I don't think the server is the culprit, as the same report viewed using the browser renders pretty fast, about a second tops. The report in question doesn't matter either. When I break into the process and take a look at the call stack, I see a call to Socket.DoConnect. So I thought that was a good reason to start using Fiddler; I installed it, disabled caching, and fired up the app again to see which call takes that long to connect, but the performance issue was gone. By using a proxy I get the same performance as the web browser. FYI: I am using NTLM authentication in the following way: reportViewer.ServerReport.ReportServerCredentials.NetworkCredentials = new NetworkCredentials() { Username = ... } I don't have a strong web background, so I guess my question is: what should this tell me / what should I be looking into? (Btw: adding Fiddler to my installation package is not the solution I am looking for :)) I am grateful for any pointers. Take care, -Martin

    Read the article

  • jquery mouseover/mouseout

    - by Hulk
    In the following code, once the mouseout has run, the mouseover no longer works. What is the workaround for this?
    <!DOCTYPE html>
    <html>
    <head>
    <style>
    /* div { background:#def3ca; margin:3px; width:80px; display:none; float:left; text-align:center; } */
    </style>
    <script src="http://code.jquery.com/jquery-latest.min.js"></script>
    </head>
    <body>
    <div id="playControls" style="position:absolute; bottom:0px; left:0px; right:0px; color:blue;">
    Mouse over me
    </div>
    <script>
    $(document).ready(function() {
        $("#playControls").mouseover(function () {
            alert('here');
            $("div:eq(0)").show("fast", function () {
                /* use callee so we don't have to name the function */
                $(this).next("div").show("fast", arguments.callee);
            });
        });
        $("#playControls").mouseout(function () {
            alert('here');
            $("div").hide(2000);
        });
    });
    </script>
    </body>
    </html>

    Read the article

  • Any Other Ideas for prototyping..

    - by davehamptonusa
    I've used Douglas Crockford's Object.beget, but augmented it slightly to: Object.spawn = function (o, spec) { var F = function () {}, that = {}, node = {}; F.prototype = o; that = new F(); for (node in spec) { if (spec.hasOwnProperty(node)) { that[node] = spec[node]; } } return that; }; This way you can "beget" and augment in one fell swoop. var fop = Object.spawn(bar, { a: 'fast', b: 'prototyping' }); In English that means, "Make me a new object called 'fop' with 'bar' as its prototype, but change or add the members 'a' and 'b'." You can even nest the spec to prototype deeper elements, should you choose. var fop = Object.spawn(bar, { a: 'fast', b: Object.spawn(quux, { farple: 'deep' }), c: 'prototyping' }); This can help avoid hopping into an object's prototype unintentionally with a long object name like: foo.bar.quux.peanut = 'farple'; If quux is part of the prototype and not foo's own object, your change to 'peanut' will actually change the prototype, affecting all objects prototyped by foo's prototype object. But I digress... My question is this. Because your spec can itself be another object, and that object could itself pass properties from its prototype into your new object - and you may want those properties (at least you should be aware of them before you decide to use it as a spec) - I want to be able to grab all of the elements from all of the spec's prototype chain, except for the prototype object itself... This would flatten them into the new object. Should I use: Object.spawn = function (o, spec) { var F = function () {}, that = {}, node = {}; F.prototype = o; that = new F(); for (node in spec) { that[node] = spec[node]; } that.prototype = o; return that; }; I would love thoughts and suggestions...

    Read the article

  • Wpf: Why is WriteableBitmap getting slower?

    - by fritz
    There is a simple MSDN example about WriteableBitmap. It shows how to draw a freehand line with the cursor by updating just one pixel when the mouse is pressed and moving over a WPF Image control. writeableBitmap.Lock(); (...set the writeableBitmap.BackBuffer's pixel value...) writeableBitmap.AddDirtyRect(new Int32Rect(column, row, 1, 1)); writeableBitmap.Unlock(); Now I'm trying to understand the following behaviour when moving the mouse pointer very fast: if the image/bitmap size is relatively small, e.g. 800x600 pixels, then the last drawn pixel is always "synchronized" with the mouse pointer's position, i.e. there is no delay and the reaction to mouse movements is very fast. But if the bitmap gets larger, e.g. 1300x1050 pixels, you can notice a delay; the last drawn pixel always appears a bit behind the moving mouse pointer. So, since in both cases only one pixel gets updated with AddDirtyRect, the reaction speed should be independent of the bitmap size!? But it seems that WriteableBitmap gets slower as its size gets larger. Or does the whole bitmap somehow get transferred to the graphics device on every writeableBitmap.Unlock() call, and not only the rectangle area specified in the AddDirtyRect method? fritz

    Read the article

  • Slow query with unexpected index scan

    - by zerkms
    Hello, I have this query:
    SELECT *
    FROM sample
    INNER JOIN test ON sample.sample_number = test.sample_number
    INNER JOIN result ON test.test_number = result.test_number
    WHERE sampled_date BETWEEN '2010-03-17 09:00' AND '2010-03-17 12:00'
    The biggest table here is RESULT, which contains 11.1M records. The other two tables hold about 1M each. This query runs slowly (more than 10 minutes) and returns about 800 records. The execution plan shows a clustered index scan (over its PRIMARY KEY (result.result_number), which doesn't actually take part in the query) over all 11M records. RESULT.TEST_NUMBER is a clustered primary key. If I change 2010-03-17 09:00 to 2010-03-17 10:00, I get about 40 records, it executes in 300ms, and the plan shows an index seek (over the result.test_number index). If I replace * in the SELECT clause with result.test_number (covered by the index), then everything becomes fast in the first case too. This points to disk I/O issues, but it doesn't explain the change of plan. So, any ideas?
    UPDATE: sampled_date is in table sample and is covered by an index. The other fields from this query, test.sample_number and result.test_number, are covered by indexes too.
    UPDATE 2: Obviously SQL Server for some reason doesn't want to use the index. I did a small experiment: I removed the INNER JOIN with result, selected all the test.test_number values, and after that did SELECT * FROM RESULT WHERE TEST_NUMBER IN (...). This, of course, works fast. But I cannot see what the difference is and why the query optimizer chooses such an inappropriate way to select the data in the first case.

    Read the article

  • jQuery dropdown is really jumpy

    - by Matthew Westrik
    Hi, I'm making a dropdown menu with jQuery (some of the code is borrowed from a tutorial by someone, although I forget who...). When I use the dropdown, it slides up and down really fast, and I can't figure it out. What do you think?
    HTML
    <div id="nav">
      <ul class="topnav">
        <li><a class="selected" href="#" title="home">home</a></li>
        <li><a href="events/" title="events calendar">events</a></li>
        <li><a href="photos/" title="photo gallery">photos</a></li>
        <li><a href="staff/" title="faculty">staff</a>
          <ul class="subnav">
            <li><a href="#">Luke</a></li>
            <li><a href="#">Darth Vader</a></li>
            <li><a href="#">Princess Leia</a></li>
            <li><a href="#">Jabba the Hutt</a></li>
          </ul>
        </li>
        <li><a href="contact/" title="contact">contact</a></li>
    jQuery
    $(document).ready(function(){
      $("ul.subnav").parent()
        .hover(function() {
          $(this).parent().find("ul.subnav").slideDown('fast').show(); //Drop down the subnav on hover...
          $(this).parent()
            .hover(function() {
            }, function(){
              $(this).parent().find("ul.subnav").slideUp('slow');
            });
          $(this).parent().find("ul.subnav")
            .hover(function() {
            }, function(){
              $(this).parent().find("ul.subnav").slideUp('slow');
            });
        }).hover(function() {
          $(this).addClass("subhover");
        }, function(){
          $(this).removeClass("subhover");
        });

    Read the article

  • How to speed up WPF programs?

    - by Sam
    I love programming with and for Windows Presentation Foundation. Mostly I write browser-like apps using WPF and XAML. But what really annoys me is the slowness of WPF. A simple page with only a few controls loads fast enough, but as soon as a page is a teeny weeny bit more complex, like containing a lot of data entry fields, one or two tab controls, and so on, it gets painful. Loading such a page can take more than one second. Seconds, indeed; especially on not-so-fast computers (read: the customers' computers) it can take ages. The same goes for changing values on the page. Everything about the WPF UI is somehow sluggish. This is so mean! They give me this beautiful framework, but make it so excruciatingly slow that I have to apologize to our customers all the time! My questions: How do you speed up WPF? How do you profile bottlenecks? How do you deal with the slowness? Since this seems to be a universal problem with WPF, I'm looking for general advice, useful for many situations and problems. Some other related questions: What tools do you use for WPF development? Tools to develop WPF or Silverlight applications.

    Read the article

  • Jquery datepicker mindate, maxdate

    - by Cesar Lopez
    I have the following two datepicker objects. The first is to restrict the dates to future dates. What I want: the dates restricted from the current date to 30 years from now. What I get: the dates restricted from the current date to only 10 years from now.
    $(".datepickerFuture").datepicker({ showOn: "button", buttonImage: 'calendar.gif', buttonText: 'Click to select a date', duration: "fast", changeMonth: true, changeYear: true, dateFormat: 'dd/mm/yy', constrainInput: true, minDate: 0, maxDate: '+30Y', buttonImageOnly: true });
    The second is to restrict selection to past dates only. What I want: the dates restricted from the current date back to 120 years ago. What I get: the dates restricted from the current date back to 120 years ago, but when I select a year the maximum year resets to the selected year (e.g. what I get when the page loads fresh is 1890-2010, but if I select 2000 the year select box resets to 1880-2000).
    $(".datepickerPast").datepicker({ showOn: "button", buttonImage: 'calendar.gif', buttonText: 'Click to select a date', duration: "fast", changeMonth: true, changeYear: true, dateFormat: 'dd/mm/yy', constrainInput: true, yearRange: '-120:0', maxDate: 0, buttonImageOnly: true });
    I need help with both datepicker objects; any help would be very much appreciated. Thanks in advance. Cesar.

    Read the article

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast. The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a 1000 millisecond delay when serializing the hash and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds. One idea is to spawn a new Ruby process to hold this hash and provide an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there is no other way to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it? Here is the code I'm using to generate a hash similar to the one I'm working with:
    @a = []
    0.upto(500) do |r|
      @a[r] = []
      0.upto(10_000) do |c|
        if rand(10) == 0
          @a[r][c] = 1 # 10% chance of being 1
        else
          @a[r][c] = 0
        end
      end
    end
    @c = Marshal.dump(@a) # 1000 milliseconds
    Marshal.load(@c) # 400 milliseconds
    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I had hoped. Presently I'm considering two options: (1) create a Sinatra application to store this hash, with an API to modify/access it; (2) create a C application to do the same as #1, but a lot faster. The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best answer credit.

    Read the article

  • Multiple asynchronous method calls to method while in a loop

    - by ranabra
    I have spent a whole day trying various ways using 'AddOnPreRenderCompleteAsync' and 'RegisterAsyncTask', but no success so far. I succeeded in making the call to the DB asynchronous using 'BeginExecuteReader' and 'EndExecuteReader', but that is missing the point. The async handling should not be the call to the DB, which in my case is fast; it should be afterwards, during the 'while' loop, while calling an external web service. I think the simplified pseudo code will explain best (note: the connection string uses 'MultipleActiveResultSets'):
    "Select ID, UserName from MyTable"
    'Open connection to DB
    ExecuteReader();
    if (DR.HasRows)
    {
        while (DR.Read())
        {
            'Call external web-service
            'and get current Temperature of each UserName - DR["UserName"].ToString()
            'Update my local DB
            Update MyTable set Temperature = ValueFromWebService where UserName = DR["UserName"]
            CmdUpdate.ExecuteNonQuery();
        }
        'Close connection etc
    }
    Accessing the DB is fast. Getting the returned result from the external web service is slow, and that at least should be handled asynchronously. If each call to the web service takes just 1 second, then assuming I have only 100 users it will take a minimum of 100 seconds for the DB update to complete, which obviously is not an option. There should eventually be thousands of users (currently only 2). Currently everything works, just very synchronously :) Thoughts to myself: maybe my way of approaching this is wrong? Maybe the entire process should be called asynchronously? Many thanks.
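    The underlying pattern being asked about, independent of ASP.NET's AddOnPreRenderCompleteAsync mechanics, is to overlap the slow web-service calls instead of making them one at a time inside the read loop. A minimal, language-agnostic sketch of that idea (shown in Python with a thread pool purely for illustration; fetch_temperature, update_local_db, and the user list are hypothetical stand-ins for the web service, the UPDATE statement, and the rows read from MyTable):
    from concurrent.futures import ThreadPoolExecutor
    import time

    def fetch_temperature(user_name):
        # Hypothetical stand-in for the slow external web-service call (~1 s each).
        time.sleep(1)
        return 20.0

    def update_local_db(user_name, temperature):
        # Hypothetical stand-in for the UPDATE against the local DB.
        print(f"UPDATE MyTable SET Temperature = {temperature} WHERE UserName = '{user_name}'")

    users = [f"user{i}" for i in range(100)]  # stand-in for the rows read from MyTable

    # Overlap the slow calls: with 20 workers, 100 one-second calls finish in roughly
    # 5 seconds instead of roughly 100 seconds when made sequentially inside the loop.
    with ThreadPoolExecutor(max_workers=20) as pool:
        for user, temp in zip(users, pool.map(fetch_temperature, users)):
            update_local_db(user, temp)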

    Read the article

  • Problem with multiple event handling in JQuery

    - by Greg
    Hi everyone, I have a strange jQuery problem with multiple event handlers. What I'm trying to achieve is this:
    1. The user selects some text on the page.
    2. If the selection is not empty, show a context menu.
    3. If the user clicks somewhere else, the context menu should disappear.
    I'm having trouble with the above, i.e. sometimes the context menu appears correctly, and sometimes it appears and disappears straight after the user makes a selection. Also, when the user selects a paragraph or a word by double-clicking, the context menu appears and quickly disappears again. Please help. See the relevant parts of my code below.
    var ContextMenu = {
      ...
      show: function(e) {
        var z = this;
        if (!this.shown) {
          if (this.contextMenu) {
            this.contextMenu.css({ left: e.pageX, top: e.pageY }).slideDown('fast');
            this.shown = true;
          }
          var hideHandler = function() { z.hide(this); };
          $(document.body).bind("click", hideHandler);
        }
      },
      hide: function(hideHandler) {
        if (this.contextMenu && this.shown) {
          this.contextMenu.slideUp('fast');
          this.shown = false;
          $(document.body).unbind("click", hideHandler);
        }
      }
    };
    // Context menu display logic
    $(document.body).bind("mousedown mouseup", function(e) {
      if ((window.getSelection().toString() != "") && (!ContextMenu.shown)) {
        ContextMenu.show(e);
      }
    });

    Read the article

  • Roadblocks in creating a custom operating system

    - by Dinah
    It seems to me that the most common overly ambitious project that programmers (esp. Comp. Sci. grads) try to tackle is building their own operating system. (Trying to create your own programming language plus compiler is probably even more common, but not nearly as ambitious.) For those (like myself) foolish enough to try: aside from the sheer size, what are the biggest gotchas or unexpected roadblocks you've encountered in trying to create your own OS from the ground up? Edit: A great OS question: http://stackoverflow.com/questions/43180/how-to-get-started-in-operating-system-development/

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short question summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.
    Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple, since the early Jurassic era. An example of a simple XML file is:
    <root>
      <rec><f1>v1</f1><f2>v2</f2></rec>
      <rec><f1>v1b</f1><f2>v2b</f2></rec>
      <rec><f1>v1c</f1><f2>v2c</f2></rec>
    </root>
    And rough example code is:
    sub process_record {
        my ($obj, $record_hash) = @_;
        # do_stuff
    }
    my $records = XML::Simple->XMLin(@args)->{root};
    foreach my $record (@$records) { $obj->process_record($record) };
    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog, due to being a DOM parser and needing to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, rewriting the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like too big a task to be worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, hashrefs that can be passed to $obj->process_record($record) and are 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is, e.g. whether I need to call next_record() or give it a callback coderef accepting a record.

    Read the article

  • Is there any killer application for Ontology/semantics/OWL/RDF yet?

    - by narnirajesh
    Hi guys, I got interested in semantic technologies after reading a lot of books, blogs, and articles on the net saying that they would make data machine-understandable, allow intelligent agents to perform powerful reasoning, enable automated and dynamic service composition, etc. I have been reading the same stuff for 2 years now. The number of articles/blogs/semantic conferences has increased considerably. But I am still unable to see any killer application. Why is that? Or is there some application/product (commercial or open-source) already in existence which actually does all that is being boasted of? To put it more precisely: is there any product that leverages semantic technologies (esp. RDF/OWL/SPARQL) and delivers functionality, performance, or maintainability which would not have been possible with existing (non-semantic) technologies? Some product that is completely dependent on semantic technologies, really adds value for customers, and generates revenue?

    Read the article

  • how to see my managed objects on the stack?

    - by smwikipedia
    I use SOS.dll in Visual Studio to debug my C# program. The program is shown below, and the debug command is !DumpStackObjects.
    class Program
    {
        static void Main()
        {
            Int32 result = f(1);
        }
        static Int32 f(Int32 i)
        {
            Int32 j = i + 1;
            return j; // <=========== breakpoint is here
        }
    }
    After I enter the "!dso" command in the Immediate window of Visual Studio, the result is as below:
    OS Thread Id: 0xf6c (3948) ESP/REG Object Name
    Why is there nothing? I thought the argument i and the local variable j should be listed. Thanks for answering my naive questions...

    Read the article

  • iText PDFReader Extremely Slow To Open

    - by Wbmstrmjb
    I have some code that combines a few pages of AcroForms (with AcroFields intact) and then, at the end, writes some JavaScript to the entire document. It is the PdfReader in the function adding the JS that takes extremely long to instantiate (about 12 seconds for a 1MB file). Here is the code (pretty simple):
    public static byte[] AddJavascript(byte[] document, string js)
    {
        PdfReader reader = new PdfReader(new RandomAccessFileOrArray(document), null);
        MemoryStream msOutput = new MemoryStream();
        PdfStamper stamper = new PdfStamper(reader, msOutput);
        PdfWriter writer = stamper.Writer;
        writer.AddJavaScript(js);
        stamper.Close();
        reader.Close();
        byte[] withJS = msOutput.GetBuffer();
        return withJS;
    }
    I have benchmarked the above, and the line that is slow is the first one. I have tried reading the document from a file instead of memory and tried using a MemoryStream instead of the RandomAccessFileOrArray. Nothing makes it any faster. If I add JS to a single-page document, it is very fast. So my thought is that the code that combines the pages is somehow making the PDF slow for the PdfReader to read. Here is the combine code:
    public static byte[] CombineFiles(List<byte[]> sourceFiles)
    {
        MemoryStream output = new MemoryStream();
        PdfCopyFields copier = new PdfCopyFields(output);
        try
        {
            output.Position = 0;
            foreach (var fileBytes in sourceFiles)
            {
                PdfReader fileReader = new PdfReader(fileBytes);
                copier.AddDocument(fileReader);
            }
        }
        catch (Exception exception)
        {
            //throw
        }
        finally
        {
            copier.Close();
        }
        byte[] concat = output.GetBuffer();
        return concat;
    }
    I am using PdfCopyFields because I need to preserve the form fields and so cannot use PdfCopy or PdfSmartCopy. This combine code is very fast (a few ms) and produces working documents. The AddJS code above is called after it, and the PdfReader open is the slow piece. Any ideas?

    Read the article

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection. The question "Is double hashing a password less secure than just hashing it once?" suggests that hashing multiple times may be a good idea, and "How to implement password protection for individual files?" suggests using a salt. I'm using PHP. I want a safe and fast password hashing system. Hashing a password a million times may be safer, but also slower. How do I achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters. Requirements:
    - The hashing mechanism must be available in PHP
    - It must be safe
    - It can use a salt (in this case, are all salts equally good? Is there any way to generate good salts?)
    Also, should I store two fields in the database (one using MD5 and another one using SHA, for example)? Would that make it safer or less safe? In case I wasn't clear enough: I want to know which hashing function(s) to use and how to pick a good salt in order to have a safe and fast password protection mechanism.
    EDIT: The website shouldn't contain anything too sensitive, but I still want it to be secure.
    EDIT 2: Thank you all for your replies; I'm using hash("sha256", $salt.":".$password.":".$id).
    Questions that didn't help: What's the difference between SHA and MD5 in PHP; Simple Password Encryption; Secure methods of storing keys, passwords for asp.net; How would you implement salted passwords in Tomcat 5.5
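    To make the salt-plus-stretching idea concrete, here is a minimal sketch in Python rather than PHP (the function names, 16-byte salt length, and 100,000-iteration count are illustrative assumptions, not vetted guidance): a per-user random salt from os.urandom is combined with an iterated hash via hashlib.pbkdf2_hmac, and the resulting digest always has the same length, which also answers the constant-length wish above.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None, iterations=100_000):
        """Return (salt, digest) for storage; the salt is random per password."""
        if salt is None:
            salt = os.urandom(16)  # per-user random salt
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, digest  # digest is always 32 bytes for sha256

    def verify_password(password, salt, stored_digest, iterations=100_000):
        _, digest = hash_password(password, salt, iterations)
        return hmac.compare_digest(digest, stored_digest)  # constant-time comparison

    if __name__ == "__main__":
        salt, digest = hash_password("s3cret")
        print(verify_password("s3cret", salt, digest))  # True
        print(verify_password("wrong", salt, digest))   # False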

    Read the article

  • Project Euler considered harmful

    - by xxxxxxx
    Hi, I've done some Project Euler problems and I was able to solve them very fast because I already knew all the theory behind them. I learned that "accidentally", because I also had to learn it for university. I used to solve olympiad problems as well; I wasn't very good, but I was solving some of them. I've reached the conclusion that Project Euler problems are taken out of their context (and olympiad problems as well). That's why they are hard. Mathematics and its theory are taught in order to make the problems easy. However, Project Euler apparently makes an invitation to make them hard again. Why? I honestly think this is a complete waste of time. Mathematicians had centuries at their disposal to solve math problems, and they developed theories to explain properly why certain things happen. I think olympiad problems and Project Euler problems are really useless. What's your take on Project Euler? Do you get something out of it, or do you just find formulas on some websites, implement the code fast, get the result, and solve the problem?

    Read the article

  • database datatype size

    - by yeeen
    Just to clarify: does specifying something like VARCHAR(45) mean it can take up to a maximum of 45 characters? I remember hearing from someone a few years ago that the number in the parentheses doesn't refer to the number of characters; the person then tried to explain something quite complicated which I didn't understand and have since forgotten. And what is the difference between CHAR and VARCHAR? I did search around a bit and saw that CHAR gives you the maximum size of the column, and that it is better to use it if your data has a fixed size and to use VARCHAR if your data size varies. But if it gives you the maximum size of the column for all the data in this column, isn't it better to use it when your data size varies? Esp. if you don't know how big your data is going to be. With VARCHAR you need to specify the size (CHAR doesn't really need it, right?); isn't that more troublesome?

    Read the article
