Search Results

Search found 21063 results on 843 pages for 'stochastic process'.


  • Form recognition using OCR and return image of the value

    - by Jonathan
    I'm on a project that processes hundreds of forms. The forms have a consistent format but are filled out by hand by different people. I need a way to quickly convert all of this data into electronic form. OCR for typed documents seems mature, but OCR for handwriting is still very lacking. With that in mind, consider a form with several fields like this: Field_1: Value1 (for example, Name: John, where Name is the field and John is the value). Since the form itself is structured and typed, OCR should be able to recognize the fields. The values, however, are handwritten, and OCR will perform very poorly on them. So is there a way to recognize the fields on the image and then return an image chunk containing the value? Thanks.
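
    One way to approach this (a hedged sketch, not a drop-in solution): run word-level OCR over the scanned form to find each typed field label, then crop the region next to the label and return that image chunk for manual keying or a handwriting-specific engine. The label list and crop width below are illustrative assumptions:

        # Hedged sketch: locate typed field labels with OCR, crop the handwritten values.
        # Assumes pytesseract + OpenCV; labels and value_width are illustrative.
        import cv2
        import pytesseract

        def extract_value_chunks(image_path, labels=("Name:", "Date:"), value_width=400):
            img = cv2.imread(image_path)
            # word-level OCR with bounding boxes for every recognized word
            data = pytesseract.image_to_data(img, output_type=pytesseract.Output.DICT)
            chunks = {}
            for i, word in enumerate(data["text"]):
                if word.strip() in labels:
                    x, y = data["left"][i], data["top"][i]
                    w, h = data["width"][i], data["height"][i]
                    # crop a strip to the right of the label, where the value was written
                    chunks[word.strip()] = img[y:y + h, x + w:x + w + value_width]
            return chunks

        # usage: value_images = extract_value_chunks("form_scan.png")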

    Read the article

  • Graduate expectations versus reality

    - by Bobby Tables
    When choosing what we want to study and do with our careers and lives, we all have some expectations of what it is going to be like. Now that I've been in the industry for almost a decade, I've been reflecting a bit on what I thought working life as a programmer was going to be like (back when I was studying Computer Science), and how it's actually turning out. My two biggest shocks (or should I say, broken expectations) by far are the sheer amount of maintenance work involved in software, and the overall lack of professionalism:

    Maintenance: At uni, we were all told that the majority of software work is maintenance of existing systems, so I knew to expect this in the abstract. But I never imagined exactly how overwhelming it would turn out to be. Perhaps it's something I mentally glazed over, hoping I'd be building cool new stuff from scratch a lot more. But it really is the case that most jobs are overwhelmingly maintenance-, bug-fixing-, and support-oriented.

    Lack of professionalism: At uni, I always had the impression that commercial software work is very process-oriented and stringently engineered. I had images of ISO processes, reams of technical documentation, every feature and bug strictly documented, and a generally professional environment. It came as a huge shock to realise that most software companies operate no differently to a team of students working on a large semester-long project. And I've worked in both the small agile hack shop and the medium-sized corporate enterprise. While I wouldn't say it has always been outright "unprofessional", it definitely feels like the software industry (on the whole) is far from the strong engineering discipline I expected it to be.

    Has anyone else had similar experiences? In what ways did your expectations of the profession differ from the reality?

    Read the article

  • Can I make Apache drop a connection when matching a URL?

    - by PP
    Using mod_rewrite I can construct a rule to respond with a clean error code (e.g. 404 Not Found, 410 Gone, or 403 Forbidden) when a page is requested that I don't want to serve. But I frequently get completely erroneous requests from hackers scanning my website for vulnerabilities, or possible cross-site scripting attempts. For these clients I do not want to return a clean error - I'd rather do something else, like immediately drop the connection with no response or, alternatively, hold the connection open for a lengthy period of time to frustrate the automated process. Any ideas how to accomplish this with Apache? I've read that nginx can immediately terminate a connection when a particular pattern is matched.
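
    A couple of hedged config sketches (the URL patterns are illustrative, not from the question): stock Apache cannot drop a connection from mod_rewrite alone, but ModSecurity's drop action closes the TCP connection without sending a response; in nginx, the non-standard status code 444 does the same.

        # Apache + ModSecurity (v2): close the connection with no response
        SecRule REQUEST_URI "@contains /phpmyadmin" "id:1001,phase:1,drop"

        # nginx: return 444 closes the connection without sending anything
        location ~* /(phpmyadmin|w00tw00t) {
            return 444;
        }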

    Read the article

  • QA & Testing with UPK

    - by dan.gallo(at)oracle.com
    Most customers know that UPK produces both the Word- and Excel-based test scripts for UAT. Did you know that you can also use UPK for QA review and bug tracking? To use UPK for QA, create content and assign it to authorized reviewers. Then have them open the Developer, use customized views to quickly find the content assigned to them, and check out the topics. They can then use the Topic Editor to review the content and provide comments right in the bubbles, or use explanation frames. QA-ing content this way is easier than publishing and sending out .tpcs or docs for people to review. How about UPK for bug tracking? The hardest part of fixing bugs in software is reproducing the error! When you use UPK for bug tracking, it captures the exact steps the user took that produced the error. Development can then easily walk through the process in a simulated environment to see what might have caused it; they have a documented procedure for what generated the error, and they are able to better communicate with the LOB. They can also update or attach the simulation/documentation to any defect-management software, like Bugzilla or something similar - all thanks to UPK.

    Read the article

  • Determinism in multiplayer simulation with Box2D, and single computer

    - by Jake
    I wrote a small test car-driving multiplayer game with Box2D using TCP server-client communications. I ran 1 instance of server.exe and 2 instances of client.exe on the same machine where I code and compile the executables. I type inputs (WASD for simple car movement) into one of the 2 clients and both clients update the simulation. There are 2 cars in the simulation. As long as the cars do not collide, I get identical output on both client.exe instances; I can drive the car(s) around for as long as I like and they still update the same. However, once I start colliding the cars, they very quickly go out of sync. My tools: Windows 7, C++, MSVS 2010, Box2D, freeGLUT. My pseudocode:

        // client.exe
        void timer(int value) {
            tcpServer.send(my_inputs);
            foreach(i = player including myself)
                inputs[i] = tcpServer.receive();
            foreach(i = player including myself)
                players[i].process(inputs[i]);
            // Box2D world step simulation (note: b2World::Step takes the time
            // step in seconds, so 33 ms would normally be passed as 33/1000.0f)
            myb2World.Step(33 / 1000.0f, 8, 6);
            foreach(i = player including myself)
                renderer.render(players[i]);
            glutTimerFunc(33, timer, 0);
        }

        // server.exe
        void serviceloop() {
            while(all clients alive) {
                foreach(c = clients)
                    tcpClients[c].receive(&inputs[c]);
                // relay the input of each client to all clients
                foreach(source = clients) {
                    foreach(dest = clients) {
                        tcpClients[dest].send(inputs[source]);
                    }
                }
            }
        }

    I have read all over the internet and SE the following claims (paraphrased): Box2D is deterministic as long as the floating-point architecture/implementation is the same; and (for any deterministic engine) determinism is guaranteed if playback of recorded inputs happens on the same machine, with an exe compiled by the same compiler on the same machine. Additionally, my server.exe and client.exe game loops are single-threaded with blocking socket calls and a fixed time step. Question: can anyone explain what I did wrong to get different Box2D output?

    Read the article

  • How to get partition information from non-booting server?

    - by gravyface
    I need to manually rebuild a mirrored array on a server and am in the process of reinstalling SBS 2003 on it. However, it's a Dell server, and I know that there's the Dell FAT32 diagnostics partition, a system partition, and a data partition, but I do not know the size of each. I'm planning on reinstalling SBS 2003 and all applications on the server, then doing a System State restore, but I figured that not having the correct partitions would cause some grief: am I right? I'm almost thinking that the sizes of the partitions shouldn't matter, but I'm not positive. Question: should I care about the size of the partitions? If so, how can I get this partition information from a non-booting drive? We have an Acronis image of the one working disk and the partitions are mounted/viewable in Explorer on a workstation, but I'm not sure where the Logical Disk Manager/Disk Management data is stored and/or whether there's a way to retrieve it without a working Windows installation.
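
    If you can get at the disk (or a raw, sector-for-sector image of it - an Acronis .tib would need to be restored or mounted as raw first), the partition layout is recoverable straight from the MBR. A minimal sketch, assuming an MBR-partitioned disk:

        # Hedged sketch: read the four primary partition entries from an MBR.
        # Offsets follow the standard MBR layout (partition table at byte 446).
        import struct

        def read_mbr_partitions(path):
            with open(path, "rb") as f:        # e.g. "disk.img" or r"\\.\PhysicalDrive1" (admin)
                mbr = f.read(512)
            parts = []
            for i in range(4):
                entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
                ptype = entry[4]
                lba_start, num_sectors = struct.unpack("<II", entry[8:16])
                if ptype != 0:
                    parts.append({
                        "type": hex(ptype),    # e.g. 0x0b/0x0c = FAT32, 0x07 = NTFS
                        "start_lba": lba_start,
                        "size_mb": num_sectors * 512 // (1024 * 1024),
                    })
            return parts

        # usage: print(read_mbr_partitions("disk.img"))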

    Read the article

  • ArrayIndexOutOfBounds exception in CoyoteAdapter.normalize()

    - by Alex
    I'm working with an application that uses Tomcat 5.0.28 for sending and receiving AS2 messages. At times, it's throwing the following exception on receiving an MDN receipt for a transmission:

        An exception or error occurred in the container during the request processing
        java.lang.ArrayIndexOutOfBoundsException: 0
            at org.apache.coyote.tomcat5.CoyoteAdapter.normalize(CoyoteAdapter.java:483)
            at org.apache.coyote.tomcat5.CoyoteAdapter.postParseRequest(CoyoteAdapter.java:239)
            at org.apache.coyote.tomcat5.CoyoteAdapter.service(CoyoteAdapter.java:158)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:799)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:705)
            at org.apache.tomcat.util.net.TcpWorkerThread.runIt(PoolTcpEndpoint.java:577)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:683)
            at java.lang.Thread.run(Unknown Source)

    I've found a report of this issue regarding v. 5.0.25 (here), with a followup note that it was resolved in 5.0.27. However, as above, the version number used in this app is 5.0.28. Any suggestions for how to find out what might be triggering this error?

    Read the article

  • smtp sasl authentication failure

    - by cromestant
    Hello, I have configured and fixed almost all of the problems with my postfix + courier + mysql setup for virtual mailboxes. I can now receive mail and send it from webmail (SquirrelMail). BUT, what I can't do is authenticate from an outside client. Since my ISP blocks port 25, I set postfix up to use port 1025 for SMTP and enabled verbose logging. Here is the verbose log of a failed authentication process: LOG. Authentication for IMAP and POP3 seems to be working, but this one is not. Here is the postconf -n output. Also, through MySQL I can verify that it is trying to validate through the system, running a query that returns the encrypted password stored in the database. I can't seem to find the cause of this. Thank you in advance.

    Read the article

  • Configuring WS-Security with PeopleSoft Web Services

    - by Dave Bain
    I was speaking with a customer a few days ago about PeopleSoft Web Services. The customer created a web service, but when they went to deploy it, they had so many problems configuring WS-Security that they pulled the service. They spent several days trying to get it working but never did, so they've put it on hold until they have time to work through the issues. Having gone through the process of configuring WS-Security myself, I understand the complexity. There is no magic 'easy' button to push. If you are not familiar with all the moving parts, like policies, certificates, public and private keys, credential stores, and so on, it can be a daunting task. The PeopleBooks documentation is good but does not offer a step-by-step example to follow. Fear not: for those who want more help, there is a place to go. PeopleSoft released a Mobile Inventory Management application over a year ago. It is a mobile app built with the Oracle Fusion Application Development Framework (ADF) that accesses PeopleSoft content through standard web services. Part of the installation of this app is configuring WS-Security for the web services used in the application. Appendix A of the PeopleSoft FSCM91 Mobile Inventory Management Installation Guide is called Configuring WS-Security for Mobile Inventory Management. It is a step-by-step guide to configuring WS-Security between a server running Oracle Web Services Manager (OWSM) and PeopleSoft Integration Broker. Your environment might be different, but the steps will be similar, and on the PeopleSoft side, Integration Broker will remain a constant. You can find the installation guide on Oracle Support: sign in to https://support.us.oracle.com and search for document 1290972.1. Read through Appendix A for more details about how to set up WS-Security with PeopleSoft web services.

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post will show how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine has a need to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations. But this home-grown application was not able to do this fast enough, plus it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR - a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly. Rather, IRR is a calculation where approximation techniques need to be used. In this blog post, we will discuss calculating the "money weighted rate of return", but in the actual customer proof of concept we used R to calculate both money weighted and time weighted rates of return. You can learn more about money weighted rates of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps - Calculating IRR in R: We will start by calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you have balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel.

    [Figure: part of the spreadsheet of sample data, showing balances and cash flows for a sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value).]

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.

    Writing the IRR function: At this point, it was time to write the money weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance textbook.
    In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the specific variations and business rules of the customer into the calculation.)

    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an NPV function [the original code figure is not reproduced here; a sketch follows below] where bmv is the beginning market value, cf is a vector of cash flows, t is a vector of times (relative to the beginning), emv is the ending market value, and tend is the ending time.

    Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum, searching between scalars low and high that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high; we set low and high to some reasonable defaults. In the post's test [output figure not reproduced], one sample account had a negative 2.2% money weighted rate of return.
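
    The original post embedded its R code as images, which did not survive in this copy. A minimal sketch of what the npv function and the optimize() call described above might look like (the function and argument names follow the post's prose; the exact bodies are assumptions):

        # Hedged reconstruction of the NPV function described in the text:
        # NPV(r) = bmv + sum(cf/(1+r)^t) - emv/(1+r)^tend, which is zero at the MWRR.
        npv <- function(r, bmv, cf, t, emv, tend) {
            bmv + sum(cf / (1 + r)^t) - emv / (1 + r)^tend
        }

        # Wrap npv in abs() and ask optimize() for the minimum, i.e. the r closest
        # to NPV = 0, searching between the scalars low and high.
        find_mwrr <- function(bmv, cf, t, emv, tend, low = -0.99, high = 0.99) {
            optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)),
                     lower = low, upper = high)$minimum
        }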
    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range, and then, if we did not find an answer, call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm.

    At this point, we can write a simplified version of our MWRR function. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In the actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.) To simplify code deployment, we then created a new package containing our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package by calling R's package.skeleton() [exact call not reproduced here]. To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory; in those files, you need to at least provide a value for the "title" section. Once that is done, you can change directory to the IRR directory and build it from the command line [command not reproduced; typically R CMD build followed by R CMD INSTALL]. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from here: myIRR package

    Testing the myIRR package: [the original example of testing the IRR function once it was converted to an installable package is not reproduced here.]

    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is: how do you calculate IRR for all of the accounts? This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.) The original post's "by" example calculated the money weighted rate of return for each account in our sample data set; a sketch appears at the end of this post.

    Recap and Next Steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:

    - R could easily work with the spreadsheets of sample data we were given.
    - R's optimize() function provided a nice way to solve for IRR: it was both fast and allowed us to avoid having to code our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:

    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor.
    - The actual data needs to be kept secured, another reason not to move it from the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
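
    As promised above, a hedged sketch of the "by" call from the "Calculating IRR for All the Accounts" section (the original code figure is lost; the account-ID and input column names are illustrative assumptions, with the find_mwrr sketch above standing in for the post's SimpleMWRR):

        # Split the sample data by account and apply the MWRR function to each group.
        results <- by(SimpleMWRRData, SimpleMWRRData$ACCOUNT_ID, function(acct) {
            find_mwrr(bmv  = acct$BMV[1],
                      cf   = acct$FLOW,
                      t    = acct$T,
                      emv  = tail(acct$EMV, 1),
                      tend = max(acct$T))
        })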

    Read the article

  • Building uEFI bootable ISO and USB for Windows 7 deployment

    - by Darragh
    I have been trying to build a Windows 7 and 2k8 EFI deployment ISO or USB, but I'm struggling to get an ISO to boot even under VMware Workstation's EFI implementation. The problem is that there is no clear statement of what the EFI boot loader looks for (e.g. which .efi boot file); even ISOs and USBs that are bootable don't find the required .efi file. I'd like to know the process the EFI boot loader follows to locate and boot the EFI file. E.g., on an EFI Windows system it's C:\Windows\Boot\EFI\bootmgfw.efi; from DVD it's F:\efi\microsoft\boot\cdboot.efi; and from what people tell me, on USB it's G:\efi\boot\bootx64.efi (bootmgfw.efi renamed). I've been testing on an HP notebook with EFI 2.0 and on VMware Workstation 8.0 with firmware = "efi" in the .vmx file.
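
    For the ISO half, a hedged sketch of the oscdimg invocation as I recall it from the Windows AIK docs (paths are illustrative; efisys.bin is the El Torito boot image that points UEFI firmware at \efi\boot\bootx64.efi, while etfsboot.com covers BIOS boot):

        rem Build a dual BIOS+UEFI bootable ISO with oscdimg (Windows AIK/ADK)
        oscdimg -m -o -u2 -udfver102 ^
          -bootdata:2#p0,e,bC:\winpe_amd64\etfsboot.com#pEF,e,bC:\winpe_amd64\efisys.bin ^
          C:\winpe_amd64\ISO C:\winpe_amd64\winpe_uefi.iso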

    Read the article

  • Free, Linux-based rescue CD for Windows machines

    - by Adam Matan
    Hi, Too often I'm called to help a friend who has wrecked a Windows machine by some creative method. The usual remedy is backing up the hard drive contents and reinstalling. Right now, this is done by moving the defective hard drive to my machine. I figured that using a rescue disk running some version of Linux might ease the process. I'm looking for: NTFS access; partition tools; a large variety of drivers (network, hard drives, etc.); and a GUI plus some rescue wizards would be a great plus. Any ideas? Adam

    Read the article

  • Reformat a WindowsXP drive: Quick, regular, or leave the current file-system

    - by Julie
    When reformatting, Windows XP asks me to choose from these formatting methods (implying that ALL of them are "formatting methods"... even #3): 1. Reformat using NTFS (quick); 2. Reformat using NTFS; 3. Leave the current file-system intact (no changes). What does choice #3 really mean? Does it mean: A. Leave the current file-system (whatever file-system is already in use) and reformat to match that - i.e., if you currently have NTFS, reformat to that again; if you currently have FAT32, reformat to that again; in other words, reformat without changing to a different file-system. Or... B. Do absolutely nothing: don't format, don't delete any of my files, abort the formatting process entirely.

    Read the article

  • Event-Driven Library XNA C#

    - by SchautDollar
    Language: C# w/ XNA Framework. Relevant and hopefully helpful background info: I am making a library using the XNA Framework for games I make with XNA. The library has a folder (namespace) dedicated to the GUI. The GUI controls inherit a base class hooked up with the appropriate interfaces. After a control is made, the programmer can hook the control up to a "Frame" or "Module" that will tell the controls when to update and draw via an event. To make a "Frame" or "Module", you inherit from a class with the details coded in (kind of how WinForms does it). My reason for doing this is to simplify the process of creating menus with interactive controls. The only way I could think of to make the events work for all the controls without being class-specific was to typecast a control to an object and typecast it back. (As I have read, this can be terribly inefficient.) Problem: Unfortunately, after I implemented interfaces in the base classes and changed public delegate void ClickedHandler(BaseControl cntrl); to public delegate void ClickedHandler(Object cntrl, EventArgs e); my game has decreased in performance. This could be down to how I am firing the events: one menu will start fine, but then slowly but surely freezes up. Every other frame works just fine; I just think it has something to do with the events, and that is why I am asking about them. Question: Is there a better, more industry-standard way of dealing with GUI libraries other than using and implementing events? Goal: To create a reusable, feature-rich XNA control library implementing performance-enhancing standards and so on. Thank you very much for taking your time to read this. I also hope this will help others possibly facing what I am facing right now.
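
    For what it's worth, the standard .NET pattern avoids the object round-trip by using EventHandler<TEventArgs> with a typed EventArgs subclass, so handlers receive the concrete control without casting. A hedged sketch (names are illustrative, not from the original library):

        // Hedged sketch: typed event args avoid casting controls to object and back.
        using System;

        public class ClickedEventArgs : EventArgs
        {
            public BaseControl Control { get; private set; }
            public ClickedEventArgs(BaseControl control) { Control = control; }
        }

        public class BaseControl
        {
            // Subscribers get a strongly typed payload instead of a bare object.
            public event EventHandler<ClickedEventArgs> Clicked;

            protected virtual void OnClicked()
            {
                var handler = Clicked;   // copy to guard against unsubscription races
                if (handler != null)
                    handler(this, new ClickedEventArgs(this));
            }
        }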

    Read the article

  • Very Cool – Miami 311 System for tracking citizen service requests (Windows Azure, Silverlight and Bing Maps)

    - by Jim Duffy
    Having grown up in South Florida, this short but very enlightening video explaining how the City of Miami has implemented a 311 citizen service request system using Windows Azure, Silverlight, and Bing Maps definitely caught my attention. The Miami311 system is a Windows Azure/Silverlight-based solution which enables City of Miami citizens to report and track issues reported to city management. The system uses Bing Maps to plot the location and relevant information about each issue reported. Citizens now have the ability to easily see the status of an issue without having to call the city office. What I found interesting were a couple of benefits that a metropolitan area such as Miami can take advantage of in a Windows Azure cloud-based solution. For the City of Miami, both benefits center around the weather. The threat of a hurricane is a real issue in South Florida, and what better way to make sure your site stays up during a hurricane than to have it hosted far away from the eye of the storm. Using a Windows Azure cloud-based architecture, the City of Miami is able to host the application within the Microsoft data centers, safely away from any hurricane passing through South Florida. The second benefit is the inherent scalability of a Windows Azure based solution. During a severe weather event like thunderstorms or, even worse, a hurricane, downed trees and power lines are a commonly reported problem. Being able to quickly scale up the computing resources required to handle the spike in citizens reporting these types of problems on the site is a huge benefit. Once the weather event has passed and downed-tree reports begin to subside, they can quickly reverse the process and scale the system back down to pre-storm levels. It's day-to-day kind of stuff, but very cool stuff nonetheless. Have a day. :-|

    Read the article

  • Scanline filling of polygons that share edges and vertices

    - by Belgin
    In this picture (a perspective projection of an icosahedron), the scanline (red) intersects the vertex at the top. In an icosahedron each edge belongs to two triangles. Of edge a, only one triangle is visible; the other one is in the back. Same for edge d. Also, in order to determine what color the current pixel should be, each polygon has a flag which can be either 'in' or 'out', depending upon where on the scanline we currently are. Flags are flipped according to the intersection of the scanline with the edges. Now, as we go from a to d (because all of those edges intersect the scanline at that vertex), this happens: the triangle behind triangle 1 and triangle 1 itself are set 'in'; then 2 is set 'in' and 1 is set 'out'; then 3 is set 'in' and 2 is set 'out'; and finally 3 is set 'out' and the one behind it is set 'in'. This is not the desired behavior, because we only need the triangles facing us to be set 'in'; the rest should be 'out'. How do I process the edges in the Active Edge List (a list of edges currently intersected by the scanline) so the right polys are set 'in'? I should also mention that the edges are unique, i.e. the icosahedron's data structure holds an array of edges which are pointed to by edge pointers in each of the triangles.
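
    One convention that may fix the double-flipping at shared vertices (a hedged sketch; your AEL bookkeeping may differ): treat every edge as half-open in y, contributing an intersection only while ymin <= y < ymax. An edge then counts at its lower endpoint but not its upper one, so a shared vertex flips the flags zero, one, or two times, consistently with whether the scanline actually enters or leaves each triangle:

        # Hedged sketch: half-open edge rule for scanline parity.
        # An edge contributes only while ymin <= y < ymax; horizontal edges
        # (ymin == ymax) are skipped entirely, which is the usual convention.
        def edge_crossings(edges, y):
            """edges: iterable of ((x0, y0), (x1, y1)); returns sorted x-intersections."""
            xs = []
            for (x0, y0), (x1, y1) in edges:
                ymin, ymax = sorted((y0, y1))
                if ymin <= y < ymax:              # exclude the top endpoint
                    t = (y - y0) / (y1 - y0)      # safe: y1 != y0 here
                    xs.append(x0 + t * (x1 - x0))
            return sorted(xs)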

    Read the article

  • apt-get doesn't download files from NFS location

    - by Pravesh
    I switched to Unix 3 months ago and am trying to understand the install process and, in particular, apt-get. I am able to successfully download and install packages when I configure my repository on an http location in /etc/apt/sources.list, e.g.: deb http://web.myspqce.com/u/eng/rose/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free With this entry, apt-get install will both download the package (to /var/cache/apt/archives) and install it. When I change the source location to a file: location instead of http (an NFS mount point), the package gets installed but NOT downloaded into /var/cache/apt/archives: deb file:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free Please let me know if there is any configuration or setting that will make apt-get both download and install the package when I use file:/ (NFS) instead of http:/ in sources.list. To achieve this, I can use apt-get --download-only and then apt-get install in two separate calls, but I want to know why the package is not getting downloaded, only installed, when using file:/ in sources.list.
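
    One detail that may explain this (hedged, but it matches my reading of the sources.list(5) man page): with file: URIs, APT treats the repository as locally available and uses the .debs in place rather than copying them into /var/cache/apt/archives. The copy: scheme behaves like file: except that packages are copied into the cache, e.g.:

        # Same entry as above, but with the copy: scheme so .debs land in the cache
        deb copy:/deb_repository/debian-mirror-squeeze-amd64/mirror/ftp.us.debian.org/debian/ squeeze main contrib non-free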

    Read the article

  • Finding day of week in batch file? (Windows Server 2008)

    - by Daniel Magliola
    I have a process I run from a batch file, and I only want to run it on a certain day of the week. Is it possible to get the day of week? All the examples I found somehow rely on "date /t" returning "Friday, 12/11/2009"; however, on my machine "date /t" returns "12/11/2009" - no weekday there. I've already checked the "regional settings" for my machine, and the long date format does include the weekday. The short date format doesn't, but I'd really rather not change that, since it would affect a bunch of other stuff I do. Any ideas? Thanks! Daniel
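
    One locale-independent approach that should work on Server 2008 (a hedged sketch; myprocess.bat is a placeholder): query WMI's Win32_LocalTime class, whose DayOfWeek property is a number from 0 (Sunday) to 6 (Saturday), instead of parsing the locale-dependent date output:

        @echo off
        rem Get the day of week via WMI rather than parsing "date /t" output.
        for /f %%A in ('wmic path win32_localtime get dayofweek ^| findstr /r "[0-9]"') do set /a DOW=%%A
        rem set /a also strips the stray carriage return that wmic output carries
        rem Run the task only on Fridays (DayOfWeek = 5)
        if "%DOW%"=="5" call myprocess.bat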

    Read the article

  • MapReduce

    - by kaleidoscope
    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system.

    Example: a process to count the appearances of each different word in a set of documents:

        void map(String name, String document):
            // name: document name
            // document: document contents
            for each word w in document:
                EmitIntermediate(w, 1);

        void reduce(String word, Iterator partialCounts):
            // word: a word
            // partialCounts: a list of aggregated partial counts
            int result = 0;
            for each pc in partialCounts:
                result += ParseInt(pc);
            Emit(result);

    Here, each document is split into words, and each word is counted initially with a "1" value by the map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to reduce, so this function just needs to sum all of its input values to find the total appearances of that word.

    Sarang, K

    Read the article

  • MySQL on a laptop for remote workers - MyISAM keeps corrupting

    - by Jonathon
    We have an application that is used by remote, mobile workers. It installs WAMP (Server2Go) on a laptop and uses MySQL to store data locally. All tables are MyISAM. Once a day, the workers sync the database to our central server via HTTP scripts that query the data and post it to our site. The problem is that many of these laptop database tables are continually becoming corrupted. It appears that MySQL acts as if it saves the information (I don't get any query errors), but the table ends up corrupt. I have to repair the tables constantly (which removes several rows of data in the process). Does anyone have any ideas about how to work around this problem? Would it be wise to switch to InnoDB on the laptops? How about a different database system altogether? I have looked at MySQL Embedded, but it appears to be the same engine as regular MySQL.
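
    If you do try InnoDB (its transaction log lets it recover automatically after crashes and power loss, which on laptops that get suspended or abruptly shut down is a common source of MyISAM corruption), converting is a single statement per table; the table name here is illustrative:

        -- Hedged example: convert one table from MyISAM to InnoDB
        ALTER TABLE sync_queue ENGINE=InnoDB;

        -- Confirm which engine each table is using
        SHOW TABLE STATUS;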

    Read the article

  • The Endeca UI Design Pattern Library Returns

    - by Joe Lamantia
    I'm happy to announce that the Endeca UI Design Pattern Library - now titled the Endeca Discovery Pattern Library - is once again providing guidance and good practices on the design of discovery experiences.  Launched publicly in 2010 following several years of internal development and usage, the Endeca Pattern Library is a unique and valued source of industry-leading perspective on discovery - something I've come to appreciate directly through  fielding the consistent stream of inquiries about the library's status, and requests for its rapid return to public availability. Restoring the library as a public resource is only the first step!  For the next stage of the library's evolution, we plan to increase the scope of the guidance it offers beyond user interface design to the broader topic of discovery.  This could include patterns for architecture at the systems, user experience, and business levels; information and process models; analytical method and activity patterns for conducting discovery; and organizational and resource patterns for provisioning discovery capability in different settings.  We'd like guidance from the community on the kinds of patterns that are most valuable - so make sure to let us know. And we're also considering ways to increase the number of patterns the library offers, possibly by expanding the set of contributors and the authoring mechanisms. If you'd like to contribute, please get in touch. Here's the new address of the library: http://www.oracle.com/goto/EndecaDiscoveryPatterns And I should say 'Many thanks' to the UXDirect team and all the others within the Oracle family who helped - literally - keep the library alive, and restore it as a public resource.

    Read the article

  • Booting Ubuntu 12.04 in Unity, the window theme & icon theme reverted to, and are now locked to, the Ubuntu default

    - by Antonio
    Last year I set Faience-Ocre as my icon theme and Adwaita Cupertino L Unity as my window theme (the GTK+ theme was left at Adwaita (default)). It worked perfectly well until 3 days ago, when on starting my PC I saw the Ubuntu defaults showing up for the window and icon themes. I noticed that at start-up the disk-access LED is not lit continuously as before but at moments stops reading for a few seconds (up to 15 s) and then completes the disk-reading process; when all was working well, this LED would stay lit continuously. Another thing is that GNOME applications are likewise not working as before: Nautilus and Gedit now don't use the global menu in the system bar but a local window menu. [Screenshots: Nautilus/Nemo before the incident, and now.] I opened dconf to check the desktop settings in org-gnome-desktop-wm-preferences and everything looks good. When I change the settings in the Advanced Settings app in the Theme folder, I see the respective value changing in dconf; however, there is no change on my desktop. It looks like it's crippled and GNOME-related. [Update 1]: I have the same defect as referenced @ ubuntu theme suddenly changed to default and it's not coming back! instead of my GTK theme I get a classic, Windows-95-like grayish theme ... However, one of the solutions mentioned (http://www.webupd8.org/2011/06/fix-ubuntu-linux-mint-theme-changing-to.html) is not working at all, even with a 20 s to 60 s delay.
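
    As a diagnostic (hedged; the theme names are the ones from the question), on 12.04 the themes can be set directly with gsettings, which also shows whether the settings daemon is actually applying the keys that dconf displays:

        # Window (titlebar) theme: the same key inspected in dconf above
        gsettings set org.gnome.desktop.wm.preferences theme "Adwaita Cupertino L Unity"
        # Icon theme
        gsettings set org.gnome.desktop.interface icon-theme "Faience-Ocre"
        # Read the values back to confirm they stuck
        gsettings get org.gnome.desktop.wm.preferences theme
        gsettings get org.gnome.desktop.interface icon-theme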

    Read the article

  • Outlook 2010 Exchange setup prompts for the Active Directory username rather than the email address

    - by Force Flow
    We use a hosted Exchange service. When users want to set up Outlook 2010 to access their account, they open Outlook and run through the configuration steps. Autodiscover is enabled, and in the user's Active Directory profile, their email address is in the email field. However, when the configuration process reaches the point where they are prompted for their email account's username and password, their Active Directory username is filled in by default instead of their email address. Is there a way to fix that? Users get confused and try to enter their email password over and over and wonder why it doesn't work (completely missing/ignoring the "use another account" button even though they have instructions right in front of them). I'm also using the Office 2010 ADMs in Group Policy, but I haven't yet seen an option to specify what gets auto-populated in that Windows Security prompt.

    Read the article

  • Disable creation of appointment when typing into Outlook calendar

    - by Alexander L. Belikoff
    Outlook (both 2010 and 2007) has a "feature" that creates an appointment if I type some text while the calendar window has focus. This keeps biting me every now and then when I erroneously have focus on the calendar window and start typing. To me, this feature is doubly annoying: 1. There is no easy way to escape out of it - I end up using the mouse to select the newly created bogus event and then delete it. 2. Sometimes, if the focus is on an already existing event, such spurious typing changes its text without an easy way to undo. Question: is there a way to make Outlook stop creating/modifying events upon typing, forcing me instead to double-click an event or press Ctrl-N in order to process my input?

    Read the article

  • Checking for orphaned snapshots - ESXi5

    - by Tim Alexander
    We had some issues with our passive mail node over the weekend while doing vmtools updates, and to resolve a problem we had to revert to a snapshot and then reseed all the databases across. All in all everything seemed fine: the server works and CCR copy status is running fine. I used the "Delete All" option this morning to remove the snapshot, and according to vCenter the process completed with no errors and no "Needs Consolidation" flag. This all seemed fine until I checked the datastore that holds the VM on our SAN, where I can clearly see snapshot files that are pretty big [see attached image]. These do not seem to be changing size, and their modified dates are around the time the work was started for the vmtools update. Does this mean that at some stage, possibly during reversion or hard-resetting of the VM, they became orphaned? Are there any methods to check the orphaned status of snapshots? We are running ESXi 5.0 Update 1 with storage provided by an EMC SAN; Enterprise Plus is the license level.
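
    A hedged PowerCLI sketch for auditing this (the cmdlets and the ConsolidationNeeded property should be available in PowerCLI against vSphere 5; adjust to your environment). Any -delta.vmdk on the datastore that no snapshot in vCenter accounts for is a candidate orphan:

        # Every snapshot vCenter still knows about
        Get-VM | Get-Snapshot | Select-Object VM, Name, SizeMB, Created

        # VMs the API believes still need consolidation (vSphere 5+)
        Get-VM | Where-Object { $_.ExtensionData.Runtime.ConsolidationNeeded }

        # Delta disks actually present on the datastores, for comparison
        Get-ChildItem vmstore:\ -Recurse -Include *-delta.vmdk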

    Read the article
