Search Results

Search found 24201 results on 969 pages for 'andrew case'.

Page 452/969 | < Previous Page | 448 449 450 451 452 453 454 455 456 457 458 459  | Next Page >

  • Solving Big Problems with Oracle R Enterprise, Part I

    - by dbayard
    Abstract: This blog post shows how we used Oracle R Enterprise to tackle a customer's big calculation problem across a big data set.

    Overview: Databases are great for managing large amounts of data in a central place with rigorous enterprise-level controls. R is great for doing advanced computations. Sometimes you need to do advanced computations on large amounts of data, subject to rigorous enterprise-level concerns. This blog post shows how Oracle R Enterprise (R plus the Oracle Database) enabled us to do some pretty sophisticated calculations across 1 million accounts (each with many detailed records) in minutes.

    The problem: A financial services customer of mine needs to calculate the historical internal rate of return (IRR) for its customers' portfolios. This information is needed for customer statements and the online web application. In the past, they had solved this with a home-grown application that pulled trade and account data out of their data warehouse and ran the calculations, but that application was not fast enough, and it was a challenge for them to write and maintain the code that did the IRR calculation.

    IRR – a problem that R is good at solving: Internal rate of return is an interesting calculation in that in most real-world scenarios it is impractical to calculate exactly; rather, approximation techniques need to be used. In this blog post we will discuss calculating the "money-weighted rate of return", but in the actual customer proof of concept we used R to calculate both money-weighted and time-weighted rates of return. You can learn more about the money-weighted rate of return here: http://www.wikinvest.com/wiki/Money-weighted_return

    First Steps – Calculating IRR in R: We will start with calculating the IRR in standalone/desktop R. In our second post, we will show how to take this desktop R function, deploy it to an Oracle Database, and make it work at real-world scale. The first step was to get some sample data. For a historical IRR calculation, you need balances and cash flows. In our case, the customer provided us with several accounts' worth of sample data in Microsoft Excel. The sample spreadsheet provides balances and cash flows for each sample account (BMV = beginning market value, FLOW = cash flow in/out of the account, EMV = ending market value).

    Once we had the sample spreadsheet, the next step was to read the Excel data into R. This is something that R does well; R offers multiple ways to work with spreadsheet data. For instance, one could save the spreadsheet as a .csv file. In our case, the customer provided a spreadsheet file containing multiple sheets, where each sheet provided data for a different sample account. To handle this easily, we took advantage of the RODBC package, which allowed us to read the Excel data sheet-by-sheet without having to create individual .csv files. We wrote ourselves a little helper function called getsheet() around the RODBC package, then loaded all of the sample accounts into a data.frame called SimpleMWRRData.
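    The original post shows getsheet() only as a screenshot, so the exact code is not reproduced here. The following is a minimal sketch of what such a helper might look like, assuming RODBC's Excel connector; the function body and the way the sheets are combined are illustrative guesses rather than the customer's actual code.

    # Hypothetical sketch of a getsheet()-style helper built on RODBC.
    # Note: odbcConnectExcel() requires a 32-bit R/ODBC setup on Windows.
    library(RODBC)

    getsheet <- function(xlsfile, sheetname) {
      ch <- odbcConnectExcel(xlsfile)   # open the workbook through the Excel ODBC driver
      on.exit(odbcClose(ch))            # release the connection even if the fetch fails
      sqlFetch(ch, sheetname)           # pull one worksheet into a data.frame
    }

    # Illustrative use: stack the per-account sheets into one data.frame.
    # SimpleMWRRData <- do.call(rbind, lapply(c("Account1", "Account2"),
    #                                         function(s) getsheet("sample.xls", s)))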
    Writing the IRR function: At this point, it was time to write the money-weighted rate of return (MWRR) function itself. The definition of MWRR is easily found on the internet or, if you are old school, in an investment performance text book. In the customer proof, we based our calculations on the ones defined in The Handbook of Investment Performance: A User's Guide by David Spaulding, since this is the reference book used by the customer. (One of the nice things we found during the course of this proof of concept is that by using R to write our IRR functions we could easily incorporate the customer's specific variations and business rules into the calculation.)

    The key thing with calculating IRR is the need to solve a complex equation with a numerical approximation technique. For IRR, you need to find the value of the rate of return (r) that sets the net present value of all the flows in and out of the account to zero. With R, we solve this by defining an npv function whose arguments are bmv (the beginning market value), cf (a vector of cash flows), t (a vector of times relative to the beginning), emv (the ending market value), and tend (the ending time).

    Since solving for r is a one-dimensional optimization problem, we decided to take advantage of R's optimize method (http://stat.ethz.ch/R-manual/R-patched/library/stats/html/optimize.html). The optimize method can be used to find a minimum or maximum; to find the value of r where our npv function is closest to zero, we wrapped our npv function inside the abs function and asked optimize to find the minimum, passing low and high scalars that indicate the range to search for an answer. To test this out, we need to set values for bmv, cf, t, emv, tend, low, and high; we set low and high to some reasonable defaults. For example, one of the sample accounts had a negative 2.2% money-weighted rate of return.

    Enhancing and Packaging the IRR function: With numerical approximation methods like optimize, sometimes you will not be able to find an answer with your initial set of inputs. To account for this, our approach was to first try to find an answer for r within a narrow range and then, if we did not find an answer, call optimize() again with a broader range. See the R help page on optimize() for more details about the search range and its algorithm. At this point, we can write a simplified version of our MWRR function, SimpleMWRR. (Our real-world version is more sophisticated in that it calculates rates of return for 5 different time periods [since inception, last quarter, year-to-date, last year, year before last year] in a single invocation. In our actual customer proof, we also defined time-weighted rate of return calculations. The beauty of R is that it was very easy to add these enhancements and additional calculations to our IRR package.)

    To simplify code deployment, we then created a new package of our IRR functions and sample data. For this blog post, we only need to include our SimpleMWRR function and our SimpleMWRRData sample data. We created the shell of the package with a single R call (see the sketch below). To turn this package skeleton into something usable, at a minimum you need to edit the SimpleMWRR.Rd and SimpleMWRRData.Rd files in the \man subdirectory; in those files, you need to at least provide a value for the "title" section.
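    The original post shows the npv and SimpleMWRR functions, the optimize() call, and the package-creation step as screenshots, so here is a consolidated, hedged sketch of those pieces instead. The NPV formulation, the default search ranges, and the retry logic are assumptions; only the variable names (bmv, cf, t, emv, tend, low, high) come from the post.

    # Sketch only: one common present-value formulation of NPV, not necessarily
    # the exact formula used in the customer proof of concept.
    npv <- function(r, bmv, cf, t, emv, tend) {
      bmv + sum(cf / (1 + r)^t) - emv / (1 + r)^tend
    }

    # Simplified MWRR: wrap npv() in abs() and let optimize() find the r that
    # drives it closest to zero, retrying with a broader range if needed.
    SimpleMWRR <- function(bmv, cf, t, emv, tend, low = -0.99, high = 1) {
      solve_r <- function(lo, hi) {
        optimize(function(r) abs(npv(r, bmv, cf, t, emv, tend)), lower = lo, upper = hi)
      }
      fit <- solve_r(low, high)
      # Illustrative retry: if the answer is pinned near a boundary, widen the range.
      if (fit$minimum > high - 0.01 || fit$minimum < low + 0.01) {
        fit <- solve_r(-0.9999, high * 10)
      }
      fit$minimum
    }

    # Create the shell of the package (run once from the desired parent directory):
    # package.skeleton(name = "IRR", list = c("SimpleMWRR", "SimpleMWRRData"))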
    Once that is done, you can change directory to the IRR directory and build and install the package from the command line. The myIRR package for this blog post (which has both the SimpleMWRR source and the SimpleMWRRData sample data) is downloadable from the original post.

    Testing the myIRR package: Once the function was converted to an installable package, we loaded the package and tested the IRR function against the sample data.

    Calculating IRR for All the Accounts: So far, we have shown how to calculate IRR for a single account. The real-world issue is how you calculate IRR for all of the accounts. This is the kind of situation where we can leverage the "Split-Apply-Combine" approach (see http://www.cscs.umich.edu/~crshalizi/weblog/815.html). Given that our sample data can fit in memory, one easy approach is to use R's "by" function to calculate the money-weighted rate of return for each account in our sample data set; a sketch appears after the recap below. (Other approaches to Split-Apply-Combine, such as plyr, can also be used; see http://4dpiecharts.com/2011/12/16/a-quick-primer-on-split-apply-combine-problems/.)

    Recap and Next Steps: At this point, you've seen the power of R being used to calculate IRR. There were several good things:
    - R could easily work with the spreadsheets of sample data we were given.
    - R's optimize() function provided a nice way to solve for IRR; it was both fast and allowed us to avoid having to code our own iterative approximation algorithm.
    - R was a convenient language to express the customer-specific variations, business rules, and exceptions that often occur in real-world calculations; these could be easily added to our IRR functions.
    - The Split-Apply-Combine technique can be used to perform calculations of IRR for multiple accounts at once.

    However, there are several challenges yet to be conquered at this point in our story:
    - The actual data that needs to be used lives in a database, not in a spreadsheet.
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network.
    - The overall process needs to run fast, much faster than a single processor could manage.
    - The actual data needs to be kept secure, which is another reason not to move it from the database and across the network.
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes.

    In our next blog post in this series, we will show you how Oracle R Enterprise solved these challenges.
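    As referenced above, here is a hedged sketch of the Split-Apply-Combine step using by(). The original post shows this step as a screenshot; the account identifier and column names below (ACCOUNT, BMV, FLOW, EMV, T) are assumptions about the sample data layout, not the actual code.

    # Illustrative only: compute one money-weighted rate of return per account.
    mwrr_by_account <- by(SimpleMWRRData, SimpleMWRRData$ACCOUNT, function(acct) {
      SimpleMWRR(bmv  = acct$BMV[1],             # opening balance for the account
                 cf   = acct$FLOW,               # all cash flows for the account
                 t    = acct$T,                  # flow times, relative to inception
                 emv  = acct$EMV[nrow(acct)],    # closing balance
                 tend = max(acct$T))             # ending time
    })
    mwrr_by_account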

    Read the article

  • Password manager solution: Symbian based phone and a Linux machine (Windows is not important, but wo

    - by Kent
    Hi, I currently use KeePassX to manage my passwords on my Linux (Xubuntu) machine. It's nice to have all the passwords encrypted, but sometimes I'd like to be able to tell a password when I'm on the run. Therefore I'm looking for a solution which I can synchronize with my phone. I have a Nokia N82 which is a Symbian OS v9.2 based phone for the S60 3rd Edition platform with Feature Pack 1. I like an open source solution if it's possible. In case it isn't I wouldn't mind paying for a good solution. If Windows may be added to the synchronization mix it's nice, but it's absolutely not a primary requirement (I don't even have any computer running Windows).

    Read the article

  • Adding an extra fan to my setup; please advise on proper exhaust or intake fan placement.

    - by Pennf0lio
    Hi, I modified my computer case so that I can add more fans to keep my computer cool this summer. I would like your thoughts on how I can improve my idea, and I'm open to all of your advice. Status: fan positions (the blue circles are the fans' locations). At the side I have intakes blowing fresh air at the motherboard, and at the top and back I have exhaust fans blowing out hot air. The fans will be powered by a recycled 12 V adapter so that my power supply won't be overloaded. I need practical advice; I don't want to spend too much on this. Thanks.

    Read the article

  • "Sent on behalf" not appearing when delegates sending mails

    - by New Steve
    Ringo is a delegate of Paul's mailbox in Exchange, but when Ringo sends mail from Paul's mailbox, the recipient sees "Paul" in the sender field, rather than "Paul Sent On Behalf Of Ringo" Paul has set "Editor" permissions for Ringo to his mailbox, and Ringo has been granted "Send on behalf of" permissions in Exchange. Ringo did at one time have "Send As" permissions for Paul's mailbox in Exchange, but this has since been removed. This is also the case for all other delegates to Paul's mailbox. How do I make it so that emails sent by Paul's delegates show the "Sent On Behalf Of" information in the Sender field? Using Exchange Server 2007 and Microsoft Office Outlook 2007

    Read the article

  • Getting an OBB out of another OBB?

    - by Milo
    I'm working on collision resolution for my game. I just need a good way to get an object out of another object if it gets stuck. In this case a car. Here is a typical scenario: the red car is in the green object. How do I correctly get it out so the car can slide along the edge of the object, as it should? I tried:

    if(buildings.size() > 0) {
        Entity e = buildings.get(0);
        Vector2D vel = new Vector2D();
        vel.x = vehicle.getVelocity().x;
        vel.y = vehicle.getVelocity().y;
        vel.normalize();
        while(vehicle.getRect().overlaps(e.getRect())) {
            vehicle.setCenter(vehicle.getCenterX() - vel.x * 0.1f,
                              vehicle.getCenterY() - vel.y * 0.1f);
        }
        colided = true;
    }

    But that does not work too well. Is there some sort of vector I could calculate to use as the vector to move the car away from the object? Thanks

    Read the article

  • Split DNS clarification

    - by RidableCthulu
    I need some clarification on whether I understood this correctly. I've been reading about Active Directory and naming my domain, and the reason Microsoft didn't suggest using your external public domain was split DNS. If I understood correctly (and please correct me if I'm wrong), in this case I would have two DNS servers, both doing the same job, but one of them is internal (i.e. inside my company) and the other is a public one. Did I misunderstand this, and if I did, could somebody explain it to me? I hope this question is not too broad for this site! Cheers.

    Read the article

  • Project Gantt chart using ADF BC

    - by shantala.sankeshwar
    This article describes a simple example of using a project Gantt chart with ADF Business Components.

    Use case description: Let us create a simple project Gantt chart using ADF Business Components and try to get the details of the selected tasks.

    Implementation steps: A project Gantt chart is used for project management. The chart lists tasks vertically and shows the duration of each task as a bar on a horizontal time line. To create a basic project Gantt chart, we first need to define 2 tables as below: 1) task_table with taskid, task_type, start_date & end_date; 2) subtask_table with subtaskid, subtask_type, start_date, end_date & taskid. Now we can create Business Components for the above 2 tables. Then we will create a new jspx page, projectGantt.jspx, and drop TaskView1 as Gantt->Project, selecting all required columns under the tasks & subtasks tabs of the 'Create Project Gantt Chart' dialog. We have now created a project Gantt chart that lists tasks & their subtasks.

    If we need to get the details of all tasks selected by the user, we define a taskSelectionListener for the dvt:projectGantt in the jspx source page: taskSelectionListener="#{test.taskSelectlistener}"

    public void taskListener(TaskSelectionEvent taskSelectionEvent) {
        // This code gives all the tasks selected by the user
        System.out.println("Selected task details " + taskSelectionEvent.getTask());
    }

    Run the above page and note that it shows all details of the task nodes; expanding these task nodes shows the corresponding subtask details. Now if the user selects 2 tasks, we can see that it prints the complete task details for the selected tasks.

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should, at least, give an idea of how my ECS is currently designed. If you notice in that diagram, I have basically split the HUD out of the ECS. They have their own set of things (HudLayer, HudComponent, etc) and are handled differently. This is where I'm struggling, though. There are many different instances in which the HUD will need to know about entities. Not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to be able to query the HUD for data. Let's take a couple examples: First, my equipment screen. On here I can change the equipment on a character (Entity). In order for this to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system. Each "team" is given a 3x3 grid they can move around in. See here: Skills target these cells, and not the player, so I would need a way for my systems to determine which cells are occupied and which are not. Basically I need a way for two way communication between Systems and my HUD. I know it's recommended (by some people, anyways) to take your HUD out of the ECS. Is that appropriate in my case?

    Read the article

  • Create netbook recovery image without DVD burner (virtual burner?)

    - by Dan
    I have a new Acer Aspire One which is asking to create a recovery DVD. It doesn't have a built in burner, and I don't have a USB burner. However I do have a large USB hard drive. Is there some way to get the recovery software to "burn" an image file instead of a real DVD? I know you can download a Linux recovery image, but the netbook comes with XP. I plan to install Linux on it but I'd like an XP recovery image just in case.

    Read the article

  • Can't start SSH on Ubuntu 12.10 AWS EC2

    - by Conor H
    So I've just started playing around with Ubuntu on Amazon EC2. I've just issued the following command to restart SSH, but it has now "killed" SSH: sudo /etc/init.d/ssh restart. I can't seem to SSH to this instance anymore; PuTTY just gives me "connection refused". NOTE: In this case I just restarted SSH to see the result; I didn't change any settings. This was to confirm that the restart command was the problem and not any configs I made. What is the correct way to restart SSH? P.S. That command usually works on other Ubuntu boxes. Thanks. EDIT: It is also worth noting that when I ran that command I was taken straight back to a prompt; I didn't get any output on the console.

    Read the article

  • Running a defective PHP file causes error 500

    - by John Brunner
    When I request a PHP file, I always get an error 500. I've looked at my Apache server's logs, and they show that some includes etc. in the PHP file reference files which don't exist on the server. They don't exist because I'm just testing my PHP file. But can the server be made to run the PHP file in every case, even when something is wrong? Every 30 seconds an entry is made in the error_log file which says [Sat Jun 09 17:55:07 2012] [error] [client 10.224.55.160] File does not exist: /var/www/html/index.html ... but there IS an index.html?!

    Read the article

  • Ctrl key on HP Envy is broken; where can I find a replacement?

    - by NewProger
    I work as a developer, so I have to use key combinations a lot. I have an HP Envy laptop, and the Ctrl key is broken for the second time. The first time, I just took one from my friend, but I don't have any more friends who are willing to sacrifice their Ctrl key. Does anyone know where I can find one? (Or rather a bunch, because they are so weak and low quality.) I tried to contact HP support, but they did everything to prevent people from doing so, and it is impossible to reach them; in my case, where the warranty has expired, it is not possible at all according to their rules. I also tried Googling but found nothing.

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers: Oracle Exadata Database Machine vs. IBM Power Systems. How to Weigh a Purchase Decision, October 2012. Download the full report here.

    In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata:
    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to a 12X improvement as described by customers, over their prior solution.
    - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution whose reserve capacity makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from "living on the edge", and it's pre-engineered for long-term growth.
    Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • XNA: calculate normals for line segments

    - by Gerhman
    I am quite new to 3D graphical programming and thus far only understand that normals somehow define the direction in which a vertex faces and therefore the direction in which light is reflected. I have no idea how they are calculated though, only that they are defined by a Vector3. For a visualizer that I am creating I am importing a bunch of coordinates which represent layer upon layer of line segments. At the moment I am only using a vertex buffer, adding the start and end point of each line, and then rendering a line list. The thing is that I now need to calculate the normals for the vertices of these line segments so that I can get some realistic lighting. I have no idea how to calculate these normals, but I know they all face sideways and not up or down. To calculate them, all I have are the start and end positions of each line segment. The below image is a representation of what I think I need to do in the case of an example layer: the red arrows represent the normals that should be calculated, the blue text represents the coordinates of the vertices, and the green numbers represent their indices. I would greatly appreciate it if someone could please explain to me how I should calculate these normals.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-27

    - by Bob Rhubart
    Resource Kit: Oracle Exadata for the Communications industry In addition to several customer case studies, in video and white paper formats, this resource kit also includes a technical overview of Oracle Exadata Database Machine and a product datasheet. Registration is required for those who don't already have a free Oracle.com membership account. Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12 These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco. BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson "If you are running BPM and you are not using B2B, you might want to disable the DBMS job that refreshes the B2B materialized view," says Fusion Middleware A-Team blogger Mark Nelson. Learn how in his short post. A Universal JMX Client for Weblogic –Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser A concise how-to from Oracle Fusion Middleware A-Team blogger Stefan Koser. Thought for the Day "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." — C. A. R. Hoare Source: SoftwareQuotes.com/

    Read the article

  • SpeedTracer NETWORK_RESOURCE_RESPONSE vs NETWORK_RESOURCE_FINISH

    - by Ben Flynn
    I'm using SpeedTracer with Google Chrome to measure the load times of requested resources. The SpeedTracer site says: NETWORK_RESOURCE_RESPONSE "Indicates that the renderer has started receiving bits from the resource loader"; NETWORK_RESOURCE_FINISH "Indicates a resource load is successful and complete." In my mind that means we would always see a network resource response (bytes are arriving) before we see a finish (all bytes received). This doesn't seem to be the case at all. Here is a sample: Request Timing @33519ms for 926ms; Response Timing @34445ms for -847ms; Total Timing @33519ms for 78ms. I'm guessing response time isn't supposed to be negative. Can someone explain this, or is this a bug? I'm using Chrome 10.0.612.3 dev with a SpeedTracer I downloaded today.

    Read the article

  • How to run an Application as another user?

    - by takpar
    I use krusader for file management stuff. The problem is that Apache's DocumentRoot has to be owned by www-data (chown www-data:www-data /path/to/www), so using krusader (which is run under my account) I have no write access to /path/to/www, which I really need. I don't know how other developers manage to get things done with such a restriction! I wondered if I could run krusader as www-data; then I would be able to easily work with the files. But using su - www-data asked me for www-data's password!! So, how can I run an application (like krusader) as another user (like www-data) in GNOME? Or is there any other solution for my case? (Though I'm really curious to know the answer!) Keep in mind that I know I can run it as root, but this will cause some permission problems when using cp and mkdir, you know. PS: sudo and gksudo did not help: $ gksudo -u www-data krusader No protocol specified krusader: cannot connect to X server :0.0 Final note: according to the best answer, I did chmod u+w /path/to/www and my problem was solved, but I still have not succeeded in opening krusader as another user!

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The Scenario: You have a file with a string (an average sentence's worth) on each line. For argument's sake, let's say this file is 1 MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending but also removing and modifying some lines) and then overwrites all the data with the new data.

    The Questions: Does 'the server' (PHP, the OS, httpd, etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)? i. If it does, please explain how it works and give examples or links to relevant documentation. ii. If not, are there things I can enable or set up, such as locking a file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing?

    My Assumptions and Other Information: The server in question is running PHP and Apache or Lighttpd. If the script is called by one user and is halfway through writing to the file, and another user reads the file at that exact moment, the user who reads it will not get the full document, as it hasn't been fully written yet. (If this assumption is wrong please correct me.) I'm only concerned with PHP writing and reading to a text file, and in particular, the functions "fopen"/"fwrite" and mainly "file_put_contents". I have looked at the "file_put_contents" documentation but have not found the level of detail or a good explanation of what the "LOCK_EX" flag is or does. The scenario is an EXAMPLE of a worst-case scenario where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues and don't want or need answers or comments such as "use mysql" or "why are you doing that", because I'm not doing that; I just want to learn about file reading/writing with PHP and don't seem to be looking in the right places/documentation, and yes, I understand PHP is not the perfect language for working with files in this way...

    Read the article

  • How do I fix a garbled screen on a Gateway LT3103u?

    - by paracaudex
    I've been having garbled screen problems on a Gateway LT3103u on Ubuntu for a while. I just did a fresh install of Ubuntu 11.10 and continue to have issues. I installed xubuntu-desktop in case the issues had to do with the sophisticated GNOME graphics. The problem is less bad, but it's still there. After a few minutes of using XFCE, the screen gets garbled. I assume this has something to do with the graphics card, but I don't know how to go about troubleshooting something like this. Where should I start?

    Update: Here is the description of the VGA card from lspci -vvv:

    01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series] (prog-if 00 [VGA controller])
    Subsystem: Acer Incorporated [ALI] Device 028c
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR- [disabled]
    Capabilities: [50] Power Management version 2
    Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
    Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [80] MSI: Enable- Count=1/1 Maskable- 64bit+
    Address: 0000000000000000 Data: 0000
    Kernel driver in use: radeon
    Kernel modules: radeon

    Update: Setting GRUB_CMDLINE_LINUX="nomodeset" in /etc/default/grub seems to have fixed it in both Ubuntu and xubuntu-desktop. I will test it for a day or so to see if the problems recur and then post more detail with some links to an explanation.

    Update 2: Is it possible to use this fix for an Nvidia card (GTX 260) when the graphics are defective after an 11.10 upgrade/install? For the first few restarts the graphics were OK, then after a few restarts they suddenly became defective and stayed that way. I had to return to 11.04 because of this problem and am waiting for 12.04, so I hope this fix works.

    Read the article

  • Why are network printers not available in the Add Printer Wizard...when run over a network?

    - by Kev
    From a Windows 2003 server machine I browsed the network to an XP client (\computername in Explorer) then double-clicked Printers and Faxes and then Add Printer. In the wizard, normally the second screen asks if you want to install a local printer or a network printer. Well, in this case, it seems to assume I want a local printer, because the second screen is what would normally be the third screen if you chose local printer and clicked Next. I want to install a network printer on a remote machine for its local users. Is this not possible? If not, why not?

    Read the article

  • Consuming Async SOA in a WebService Proxy By Anagha Desai

    - by JuergenKress
    Consider a scenario where an application is built using SOA Async processes and needs to be consumed in a WebService Proxy. In this blog, we will demonstrate how to implement this use case. To achieve this, we will follow a two-step process: create an Async SOA BPEL process, then consume it in a WebService Proxy. Pre-requisite: JDeveloper with the SOA extension installed. Steps: To begin with step 1, create a SOA Application and name it SOA_AsyncApp. This invokes the Create SOA Application wizard. In the wizard, choose composite with BPEL process in Step 3. Read the complete article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Unable to mount external hard drive - Damaged file system and MFT

    - by Khalifa Abbas Lame
    I get the following error when I try to mount my external hard drive:

    UNABLE TO MOUNT Error mounting /dev/sdc1 at /media/khalibloo/Khalibloo2: Command-line `mount -t "ntfs" -o "uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177" "/dev/sdc1" "/media/khalibloo/Khalibloo2"' exited with non-zero exit status 13: ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read of MFT, mft=6 count=1 br=-1: Input/output error Failed to open inode FILE_Bitmap: Input/output error Failed to mount '/dev/sdc1': Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation for more details.

    It doesn't mount on Windows either: "I/O Device error". It's an NTFS hard drive with a single partition. Of course, I tried chkdsk /f; it reported several file segments as unreadable, but didn't say whether it fixed them or not (apparently not). I also tried with the /b flag. ntfsfix reported the volume as corrupt. TestDisk was able to fix a small error with the partition table by adding the "80" flag for the active (only) partition. TestDisk also confirmed that the boot sector was fine and that it matched the backup. However, when attempting to repair the MFT, it couldn't read the MFT. It also couldn't list the files on the hard drive; it says the file system may be damaged. Active@ also shows that the MFT is missing or corrupt. So how do I fix the file system, or the MFT?

    Read the article

  • Multiplayer client-server game - polling - WAMP

    - by stonebold
    Let's say I make a client-server application, a simple game for example, where each client polls the server every half a minute. How many clients is it possible to have before it overloads a WAMP server? Basically, how robust is Apache for this kind of stuff: getting a request, aggregating data from the MySQL server, and then returning the data in XML format? What solution should I use for my case? Thank you in advance.

    Read the article

  • "The user account does not have permission to run this task"

    - by Ken
    I'm trying to get a scheduled task to run on Windows Server 2008. It has been working fine for months, and then hung, so I killed it, and now I can't get it to start. (In case it's not obvious, I'm not a Windows sysadmin by any stretch of the imagination. I inherited responsibility for this system, more or less.) The error it gives is: "The user account does not have permission to run this task". The task's "author" is "A". The task's "When running the task, use the following user account:" is "B". And my user is "C". All of A, B, C are members of the Administrators group, so I'm a bit puzzled as to why it thinks I don't have permissions to run this. Ideas?

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now. The code for serial and accelerated Consider a naïve (or brute force) serial implementation of matrix multiplication  0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W) 1: { 2: for (int row = 0; row < M; row++) 3: { 4: for (int col = 0; col < N; col++) 5: { 6: float sum = 0.0f; 7: for(int i = 0; i < W; i++) 8: sum += vA[row * W + i] * vB[i * N + col]; 9: vC[row * N + col] = sum; 10: } 11: } 12: } We notice that each loop iteration is independent from each other and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included at the top of your file #include <amp.h> 13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W) 14: { 15: concurrency::array_view<const float,2> a(M, W, vA); 16: concurrency::array_view<const float,2> b(W, N, vB); 17: concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC); 18: concurrency::parallel_for_each(c.grid, 19: [=](concurrency::index<2> idx) restrict(direct3d) { 20: int row = idx[0]; int col = idx[1]; 21: float sum = 0.0f; 22: for(int i = 0; i < W; i++) 23: sum += a(row, i) * b(i, col); 24: c[idx] = sum; 25: }); 26: } First a visual comparison, just for fun: The beginning and end is the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper. Using array_view and extent When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one: extent<2> e(M, W); array_view<const float, 2> a(e, vA); In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two dimensional, i.e a matrix. When we were using a vector object we could not do that, and instead we had to track via additional unrelated variables the dimensions of the matrix (i.e. 
    with the integers M and W) – aren't you loving C++ AMP already? Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch: On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself: The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a,b) and write results into the output array_view ( c ). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N or whatever. Also note that we could have written line 24 as: c(idx[0], idx[1])=sum  or  c(row, col)=sum instead of the simpler c[idx]=sum.

    Targeting a specific accelerator: Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on.
So there would be some code like this anywhere before line 18: vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators(); accelerator acc = accs[0]; …and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become: concurrency::parallel_for_each(acc.default_view, c.grid, ...and the rest of your code remains the same… how simple is that? Comments about this post by Daniel Moth welcome at the original blog.

    Read the article
