Search Results

Search found 25180 results on 1008 pages for 'post processing'.


  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested, and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics, what I mean is that unit and integration tests should not be included in the testing reports, and systems testers should not be concerned with them.

    My reasoning is based on two things. First, unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts, you still test what e.g. a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes. But you always test a limited part of the whole system. Systems testing, on the other hand, is planned and performed against the system specifications. The spec determines when the test passes.

    Second, I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kind of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging from it what should and shouldn't be systems tested would be a mistake, because the system may still not function the way the specs require even though all unit and integration tests pass.

    This might seem like useless academic discussion, but if you work in a strictly formal environment as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)
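    To make the "contract" point concrete, here is a minimal illustration (a sketch in C# with NUnit; the class and numbers are hypothetical, not from the discussion). The assertion encodes only what one method promises, which is exactly why a green unit test says nothing about whether the assembled system meets its spec:

        using NUnit.Framework;

        public class InvoiceCalculator
        {
            private readonly decimal threshold;

            public InvoiceCalculator(decimal discountThreshold)
            {
                threshold = discountThreshold;
            }

            // The contract under test: subtotals above the threshold get 10% off.
            public decimal Total(decimal subtotal)
            {
                return subtotal > threshold ? subtotal * 0.9m : subtotal;
            }
        }

        [TestFixture]
        public class InvoiceCalculatorTests
        {
            [Test]
            public void Total_AppliesTenPercentDiscount_AboveThreshold()
            {
                var calc = new InvoiceCalculator(100m);

                // Passes or fails purely against the method's contract,
                // regardless of what the system specification requires.
                Assert.AreEqual(135m, calc.Total(150m));
            }
        }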

    Read the article

  • AutoVue 20.2 for Agile Released

    - by Kerrie Foy
    I saw an important post on Oracle's AutoVue Enterprise Visualization Blog that I wanted to share with you all in the Agile community. This was originally posted by Angus Graham here.

    AutoVue 20.2 for Agile Released

    Oracle's AutoVue 20.2 for Agile PLM is now available on Oracle's Software Delivery Cloud. This latest release allows Agile PLM customers to take advantage of new AutoVue 20.2 features in the following Agile PLM environments: 9.3.1.x; 9.3.0. AutoVue 20.2 delivers improvements in the following areas.

    New Format Support: AutoVue 20.2 adds support for the latest versions of popular file formats including:
    - ECAD: Cadence Concept HDL 16.5, Allegro Layout 16.5, Orcad Capture 16.5, Board Station ASCII Symbol Geometry, Cadence Cell Library
    - MCAD: CATIA V5 R21, PTC Creo Parametric 1.0, Creo Element/Direct Modeling 17.10, 17.20, 17.25, 17.30, 18.00, SolidWorks 2012, SolidEdge ST3 & ST4, PLM XML
    - 2D CAD: Creo Element/Direct Drafting 17.10 to 18.00
    - Office: MS Office 2010: Word, Excel, PowerPoint, Outlook

    Enhancements to AutoVue enterprise readiness: reliability and performance improvements, as well as security enhancements which adhere to Oracle's Software Security Assurance standards.

    Updated version of AutoVue Document Print Service offerings, which include the ability to select CAD layers for printing.

    For further details, check out the What's New in AutoVue 20.2 datasheet.

    Read the article

  • Drinking Our Own Champagne: Fusion Accounting Hub at Oracle

    - by Di Seghposs
    A guest post by Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    There's no better story to tell than one about Oracle using its own products with blowout success. Here's how this one goes. As you know, Oracle has increased its share of the software market through a number of high-profile acquisitions. Legally combining companies is a very complicated process -- it can take months to complete, especially for acquisitions with offices in several countries, each with its own unique laws and regulations. It's a mission-critical and time-sensitive process to roll an acquired company's legacy systems (running vital operations, such as accounts receivable and general ledger (GL)) into the existing systems at Oracle.

    To date, we've run our primary financial ledgers in E-Business Suite R12 -- and we've successfully met the requirements of the business and closed the books on time every single quarter. But there's always room for improvement, and that comes in the form of Fusion Applications. We are now live on Fusion Accounting Hub (FAH), which is the first critical step in moving to a full Fusion Financials instance. We started with FAH so that we could design a global chart of accounts. Eventually, every transaction in every country will originate from this global chart of accounts -- it becomes the structure for managing our business more uniformly. In conjunction, we're using Oracle Hyperion Data Relationship Management (DRM) to centralize and automate governance of our global chart of accounts and related hierarchies, which will help us lower our costs and greatly reduce risk.

    Each month, we have to consolidate data from our primary general ledgers. We have been able to simplify this process considerably using FAH. We can now submit our primary ledgers running in E-Business Suite (EBS) R12 directly to FAH, eliminating the need for more than 90 redundant consolidation ledgers. Also, we can submit incrementally, so if we need to book an adjustment in a primary ledger after close, we can do so without re-opening it and re-submitting. As a result, we have earlier visibility to period-end actuals during the close.

    A goal of this implementation, and one that we successfully achieved, is that we are able to use FAH globally with no customization. This means we have the ability to fully deploy ledger sets at the consolidation level, plus we can use standard functionality for currency translation and mass allocations. We're able to use account monitoring and drill-down functionality from the consolidation level all the way through to EBS primary ledgers and sub-ledgers, which allows someone to click through a transaction appearing at the consolidation level clear through to its original source -- a significant productivity enhancement when doing research.

    We also see a significant improvement in reporting using the Essbase cube and Hyperion Smart View. Specifically, "the addition of an Essbase cube on top of the GL gives us tremendous versatility to automate and speed our elimination process," says Claire Sebti, Senior Director of Corporate Accounting at Oracle.

    A highlight of this story is that FAH is running in a co-existence environment. Our plan is to move to Fusion Financials in steps, starting with FAH. Next, our Oracle Financial Services Software subsidiary will move to a full Fusion Financials instance. Then we'll replace our EBS instance with Fusion Financials. This approach allows us to plan in steps, learn as we go, and not overwhelm our teams. It also reduces the risk that comes with moving the entire instance at once. Maria Smith, Vice President of Global Controller Operations, is confident about how they've positioned themselves to uptake more Fusion functionality and is eager to "continue to drive additional efficiency and cost savings."

    In this story, the happy customers are Oracle controllers, financial analysts, accounting specialists, and our management team, who get earlier access to more flexible reporting. "Fusion Accounting Hub simplifies our processes and gives us more transparency into account activity," raves Alex SanJuan, Senior Director, Record to Report Strategic Process Owner. Overall, the team has been very impressed with the usability and functionality of FAH and is pleased with the quantifiable improvements. Claire Sebti states, "Our WD5 close activities have been reduced by at least four hours of system processing time, just for the consolidation group."

    Fusion Accounting Hub is an inspiring beginning to our Fusion Financials implementation story. There's no doubt it's going to be an international bestseller!

    Corey West, Senior Vice President, Oracle's Corporate Controller and Chief Accounting Officer

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background

    One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts. It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community, so I thought I'd share my findings.

    In the scenario we have BizTalk trying to make calls to a .net web service which was exposed as a WSE 2 endpoint. In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <connectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application.

    The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service. One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .net application and the custom logs in the application, especially when there was a fairly large number of servers hosting the web services. However, the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes, a long time I know, but that's a different story!). Anyway, we were able to identify that this had timed out on the BizTalk side. Based on the normal 2 minute timeout we knew something unexpected was going on.

    From here I decided to do some experimentation, and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.

    Server-side - Sample Web Service

    To begin with I created a sample web service. Nothing special, just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition. The web service is just a hello world style web service. The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep. In the configuration for this web service there again is nothing special; it's pretty much the most plain simple web service you could build.

    Client-Side

    To begin looking at what was happening with our example I created a number of different ways to consume the web service.

    SoapHttpClientProtocol Example

    I created a small application which would use a normal generated proxy to call the web service. It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously. I would do a loop of 20 calls with the connectionManagement configuration section supporting only 5 concurrent connections to the server:

        <system.net>
          <connectionManagement>
            <remove address="*"/>
            <add address="*" maxconnection="12" />
            <add address="http://<ServerName>" maxconnection="5" />
          </connectionManagement>
        </system.net>

    The key points about the service calling code (a sketch of which appears at the end of this post) are:
    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    I would run the client and execute 21 calls to the web service.
    The Results

    Looking at the client side trace and the web service side trace, some observations on the results:
    - All of the calls were successful from the client's perspective
    - You could see the next call starting on the server as soon as the previous one had completed
    - Calls took significantly longer than 40 seconds from the start of our call to the return. In fact, call 20 took 2 minutes and 30 seconds from the perspective of my code to execute, even though I had set the timeout to 40 seconds

    WSE 2 Sample

    In the second example I used the exact same code to call the web service again, with a single exception: I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The key points about the code are the same:
    - I have configured the timeout of 40 seconds for the proxy
    - I am using the asynchronous methods on the proxy to call the web service

    The Test

    This test would execute 21 calls from the client to the web service.

    The Results

    Some observations on the trace results for this scenario, again comparing the client side and server side traces:
    - With call 4, if you look at the server side trace, it did not start executing on the server for a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times but not on most runs, so at this point I'm just putting it down to something unexpected happening on the development machine, and we will leave this observation out of scope of this article.
    - The client side trace statement executed almost immediately in all cases
    - All calls after the initial few would time out
    - On the client side, the calls that did time out took longer than the 40 seconds we set as the timeout
    - As calls were completing on the server, the next calls were starting to come through
    - The calls that timed out on the client did actually connect to the server, and their server side execution completed successfully

    Elaboration on the findings

    Based on the above observations I sketched a sequence diagram to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call, and the two proxy base classes seem to start their timeout counters in two different places. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy: the WSE one includes the time to get a connection from the pool, whereas the Soap proxy timeout just covers the method execution.

    One interesting observation: if we rerun the above sample and increase the number of calls from 21 to 100,000, then for the WSE sample we will see a similar pattern where everything after the first few calls will time out on the client as soon as it makes a connection to the server, whereas the soap proxy will happily plug away and process all of the calls without a single timeout. I actually set the sample running overnight and this did happen.

    At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour, and which is right, and why are they different?
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different, and they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't reach the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour you should be aware of the following:
    - In a .net application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client defaults to 2 connections to remote servers, so you should bear this in mind.
    - When you are developing a BizTalk solution, or a .net solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you.
    - For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

    Our Work Around

    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle it. What was causing our original problem was that for that short window we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people, just to understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is much in the way of documentation around WSE 2 and ASMX in this area, so again, another bit of ammunition for migrating to WCF. I'll try to do a follow up post with the sample for WCF to show how this changes things.
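    The code screenshots from the original post have not survived, so here is a minimal sketch of the kind of calling code described above. The proxy class name (HelloService) and its Begin/End method names are assumptions for illustration; a real test would use the proxy generated from the service's WSDL, with the WSE 2 variant differing only in the proxy's base class (WebServicesClientProtocol instead of SoapHttpClientProtocol):

        using System;
        using System.Diagnostics;

        class TimeoutTestClient
        {
            static void Main()
            {
                for (int i = 0; i < 21; i++)
                {
                    var proxy = new HelloService();  // hypothetical generated proxy class
                    proxy.Timeout = 40000;           // 40 second timeout, in milliseconds
                    int call = i;
                    var watch = Stopwatch.StartNew();

                    // Asynchronous call via the generated Begin/End pair
                    proxy.BeginHelloWorld(ar =>
                    {
                        try
                        {
                            proxy.EndHelloWorld(ar);
                            Trace.WriteLine("Call " + call + " completed in " + watch.Elapsed);
                        }
                        catch (Exception ex)
                        {
                            // With only 5 pooled connections and a 30 second server-side
                            // sleep, most calls queue on the client. Whether the timeout
                            // clock includes that queuing time is exactly the difference
                            // observed between the two proxy base classes.
                            Trace.WriteLine("Call " + call + " failed after " + watch.Elapsed
                                + ": " + ex.Message);
                        }
                    }, null);
                }

                Console.ReadLine(); // keep the process alive while the async calls run
            }
        }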

    Read the article

  • AWS SSL Load Balancer

    - by Jay Francis
    OK, I am looking for some pointers. Basically I have a white-label app/site that will allow users to set up their own domain to use for their customer front-end. We have 2 dedicated servers and a load balancer. The problem is SSL. We were thinking about using AWS ELB to handle the SSL load balancing, but can't seem to figure out if it will properly handle it; it seems to be set up to work with EC2 instances, but we are using externally hosted servers via a load balancer. A blog post by AWS looks similar to what we need, but it only seems to work with EC2 instances. http://aws.typepad.com/aws/2011/08/elastic-load-balancer-ssl-support-options.html Anyone had experience setting ELB SSL load balancers up to work with external servers?

    Read the article

  • SQL Rally Pre-Con: Data Warehouse Modeling – Making the Right Choices

    - by Davide Mauri
    As you may have already learned from my old post, or Adam's or Kalen's posts, there will be two SQL Rally events in Northern Europe. At the Stockholm SQL Rally, with my friend Thomas Kejser, I'll be delivering a pre-con on Data Warehouse Modeling:

    Data warehouses play a central role in any BI solution. It's the back end upon which everything in years to come will be created. For this reason, it must be rock solid and yet flexible at the same time. To develop such a data warehouse, you must have a clear idea of its architecture, a thorough understanding of the concepts of Measures and Dimensions, and a proven engineered way to build it so that quality and stability can go hand-in-hand with cost reduction and scalability. In this workshop, Thomas Kejser and Davide Mauri will share all the information they have learned since they started working with data warehouses, giving you the guidance and tips you need to start your BI project in the best way possible: avoiding errors, making implementation effective and efficient, paving the way for a winning Agile approach, and helping you define how your team should work so that your BI solution will stand the test of time.

    You'll learn:
    - Data warehouse architecture and justification
    - Agile methodology
    - Dimensional modeling, including Kimball vs. Inmon, SCD1/SCD2/SCD3, Junk and Degenerate Dimensions, and Huge Dimensions
    - Best practices, naming conventions, and lessons learned
    - Loading the data warehouse, including loading Dimensions and loading Facts (Full Load, Incremental Load, Partitioned Load)
    - Data warehouses and Big Data (Hadoop)
    - Unit testing
    - Tracking historical changes and managing large sizes

    With all the Self-Service BI hype, the Data Warehouse is becoming more and more central every day: if everyone is able to analyze data using self-service tools, it's better that they rely on correct, uniform and coherent data. Already 50 people have registered for the workshop, and seats are limited, so don't miss this unique opportunity to attend a workshop that is really a unique combination of years and years of experience! http://www.sqlpass.org/sqlrally/2013/nordic/Agenda/PreconferenceSeminars.aspx See you there!

    Read the article

  • #OOW 2012: Big Data and The Social Revolution

    - by Eric Bezille
    As Cognizant CSO Malcolm Frank said about the "Future of Work", and how the Business should prepare in the face of the new generation, not only of devices and the "internet of things" but also of their users ("The Millennials"), moving from "consumers" to "prosumers": we are at a turning point today which is bringing us to the next IT Architecture Wave. So this is no longer just about putting Big Data, Social Networks and Customer Experience (CxM) on top of old existing processes; it is about embracing the next curve, by identifying what processes need to be improved, but also, and more importantly, what processes are obsolete and need to be gotten rid of, and what new processes need to be put in place. It is about managing both the hierarchical and structured Enterprise and its social connections and influencers inside and outside of the Enterprise. And this applies everywhere, up to the Utilities and Smart Grids, where it is no longer just about delivering (faster) the same old 300 reports that have grown over time with those new technologies, but about understanding what needs to be looked at, in real-time, down to a handful of relevant reports with the KPIs relevant to the business. It is about how IT can anticipate the next wave, and is able to answer Business questions, and put those capabilities in real-time right at the hands of the decision makers...

    This is the turning curve, where IT is really moving from the past decade's "Cost Center" to "Value for the Business", as Corporate Stakeholders will be able to touch the value directly at the tips of their fingers. It is all about making Data Driven Strategic decisions, encompassed and enriched by ALL the Data, and connected to customers/prosumers influencers. This brings to stakeholders the ability to make informed decisions on questions like: "What would be the best Olympic Gold winner to represent my automotive brand?"... in a few clicks and in real-time, based on social media analysis (twitter, Facebook, Google+...) and connections linked to my Enterprise data. A true example was demonstrated by Larry Ellison in real-time during his keynote yesterday, where "Hardware and Software Engineered to Work Together" is not only about extreme performance but also about solutions that the Business can touch, thanks to well integrated Customer eXperience Management and Social Networking: bringing to IT the capabilities to move to the next IT Architecture wave.

    This was also illustrated today in 2 other sessions that I had the opportunity to attend. The first session brought the "Internet of Things" in Oil & Gas into actionable decisions thanks to Complex Event Processing capturing sensor data, with the ready-to-run IT infrastructure leveraging Exalogic for the CEP side, Exadata for the enriched datasets and Exalytics to provide the informed decision interface up to the end user. The second session showed the Real Time Decision engine in action for ACCOR hotels, with Eric Wyttynck, VP eCommerce, and his Technical Director Pascal Massenet.

    I have to close my post here, as I have to go run our practical hands-on lab, prepared with Olivier Canonge, Christophe Pauliat and Simon Coter, illustrating in practice the Oracle Infrastructure Private Cloud recently announced last Sunday by Larry, and developed through many examples this morning by John Fowler. John also announced today Solaris 11.1, with a range of network innovations and virtualization at the OS level, as well as many optimizations for applications, like for Oracle RAC, with the introduction of the lock manager inside the Solaris Kernel. Last but not least, he introduced the Xsigo Datacenter Fabric for highly simplified networks and storage virtualization for your Cloud Infrastructure. Hoping you will get ready to jump on the next wave, we are here to help...

    Read the article

  • Guide.BeginShowMessageBox wrapper

    - by Daniel Moth
    While coding for Windows Phone 7 using Silverlight, I was really disappointed with the built-in MessageBox class, so I found an alternative. My disappointment was with the fact that:
    - Display of the messagebox causes the phone to vibrate (!)
    - Display of the messagebox causes the phone to make an annoying sound.
    - You can only have "ok" and "cancel" buttons (no other button captions).

    I was using the messagebox something like this:

        // Produces unwanted sound and vibration.
        // ...plus no customization of button captions.
        if (MessageBox.Show("my message", "my caption", MessageBoxButton.OKCancel)
            == MessageBoxResult.OK)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    ...and wanted to make minimal changes throughout my code to change it to this:

        // no sound or vibration
        // ...plus bonus of customizing button captions
        if (MyMessageBox.Show("my message", "my caption", "ok, got it", "that sucks")
            == MyMessageBoxResult.Button1)
        {
            // Do something
            Debug.WriteLine("OK");
        }

    It turns out there is a much more powerful class in the XNA framework that delivers on my requirements (and offers even more features that I didn't need, like choice of sounds and not blocking the caller): Guide.BeginShowMessageBox. You can use it simply by adding an assembly reference to Microsoft.Xna.Framework.GamerServices. I wrote a little wrapper for my needs and you can find it here (ready to enhance with your needs): MyMessageBox.cs.txt. Comments about this post welcome at the original blog.
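    Daniel links his actual implementation above; as a flavor of what such a wrapper can look like, here is a minimal sketch built on the real Guide.BeginShowMessageBox/EndShowMessageBox API (the blocking-wait style and enum names here are illustrative, not necessarily what MyMessageBox.cs contains):

        using System;
        using Microsoft.Xna.Framework.GamerServices; // reference this assembly

        public enum MyMessageBoxResult { Button1, Button2, None }

        public static class MyMessageBox
        {
            // Blocks the calling thread, mirroring MessageBox.Show semantics,
            // but with custom captions and no sound or vibration.
            public static MyMessageBoxResult Show(string message, string caption,
                                                  string button1, string button2)
            {
                IAsyncResult result = Guide.BeginShowMessageBox(
                    caption,                      // title
                    message,                      // text
                    new[] { button1, button2 },   // custom button captions
                    0,                            // index of the focused button
                    MessageBoxIcon.None,          // no sound, no vibration
                    null, null);                  // no callback; we wait below

                result.AsyncWaitHandle.WaitOne(); // wait for the user's answer
                int? choice = Guide.EndShowMessageBox(result);

                if (choice == 0) return MyMessageBoxResult.Button1;
                if (choice == 1) return MyMessageBoxResult.Button2;
                return MyMessageBoxResult.None;   // dismissed without a choice
            }
        }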

    Read the article

  • How can I rank teams based off of head to head wins/losses

    - by TMP
    I'm trying to write an algorithm (specifically in Ruby) that will rank teams based on their record against each other. If team A and team B have won the same amount of games against each other, then it goes down to point differentials. Here's an example:

    A beats B two times
    B beats C one time
    A beats D three times
    C beats D two times
    D beats C one time
    B beats A one time

    Which sort of reduces to:

    A[B] = 2
    B[C] = 1
    A[D] = 3
    C[D] = 2
    D[C] = 1
    B[A] = 1

    Which sort of reduces to:

    A[B] = 1
    B[C] = 1
    A[D] = 3
    C[D] = 1
    D[C] = -1
    B[A] = -1

    Which is about as far as I've got. I think the result of this specific algorithm would be: A, B, C, D. But I'm stuck on how to transition from my nested hash-like structure to the results. My pseudo-code is as follows (I can post my Ruby code too if someone wants):

    For each game(g):
        hash[g.winner][g.loser] += 1

    That leaves hash as the first reduction above.

    hash2 = clone of hash
    For each key(winner), value(losers hash) in hash:
        For each key(loser), value(losses against winner):
            hash2[loser][winner] -= losses

    Which leaves hash2 as the second reduction. Feel free to ask me questions, or edit this to be more clear; I'm not sure how to put it in a very eloquent way. Thanks!
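    For what it's worth, here is a runnable version of that pseudo-code (sketched in C# rather than Ruby, but the idea carries over directly), with one possible final ranking step added. That last step is an assumption on my part rather than something from the question: rank by the sum of each team's net results, breaking ties with the direct head-to-head result.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class HeadToHeadRanking
        {
            static void Main()
            {
                // (winner, loser) pairs, one entry per game -- the example data
                var games = new[] {
                    ("A","B"), ("A","B"), ("B","C"), ("A","D"), ("A","D"),
                    ("A","D"), ("C","D"), ("C","D"), ("D","C"), ("B","A")
                };

                // Second reduction: net[x][y] = x's wins over y minus y's wins over x
                var net = new Dictionary<string, Dictionary<string, int>>();
                foreach (var (w, l) in games)
                {
                    foreach (var t in new[] { w, l })
                        if (!net.ContainsKey(t)) net[t] = new Dictionary<string, int>();

                    net[w][l] = net[w].GetValueOrDefault(l) + 1;
                    net[l][w] = net[l].GetValueOrDefault(w) - 1; // mirror entry the pseudo-code implies
                }

                // One possible final step: rank by the sum of each team's net
                // results; break ties by the direct head-to-head result. (Note:
                // pairwise tie-breaks can be non-transitive in cyclic cases.)
                var ranking = net.Keys.ToList();
                ranking.Sort((a, b) =>
                {
                    int byTotal = net[b].Values.Sum().CompareTo(net[a].Values.Sum());
                    return byTotal != 0
                        ? byTotal
                        : net[b].GetValueOrDefault(a).CompareTo(net[a].GetValueOrDefault(b));
                });

                Console.WriteLine(string.Join(", ", ranking)); // prints: A, B, C, D
            }
        }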

    Read the article

  • How can I hard reset a USB device?

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found to fix it once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged in on, so I'm looking for a way to do this through the command line. This post suggests running:

        sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However I get an "unknown option -w" output. This slightly modified command:

        sudo modprobe -r usb_storage

    fails with the message:

        FATAL: Module usb_storage is in use.

    If I try to kill -9 the processes marked [usb-storage] before running it, they refuse to die (I think because they are deeply tied to the kernel). Anyone know of a way to do this? NOTE: I cross-posted this on superuser.com as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.

    Read the article

  • Retrieve the full ASP.NET Form Buffer as a String

    - by Rick Strahl
    Did it again today: for logging purposes I needed to capture the full Request.Form data as a string, and while it's pretty easy to retrieve the buffer, it always takes me a few minutes to remember how to do it. So I finally wrote a small helper function to accomplish this, since it comes up rather frequently, especially in debugging scenarios or in the immediate window. Here's the quick function to get the form buffer as a string:

        /// <summary>
        /// Returns the content of the POST buffer as string
        /// </summary>
        /// <returns></returns>
        public static string FormBufferToString()
        {
            HttpRequest Request = HttpContext.Current.Request;

            if (Request.TotalBytes > 0)
                return Encoding.Default.GetString(Request.BinaryRead(Request.TotalBytes));

            return string.Empty;
        }

    Clearly a simple task, but handy to have in your library for reuse. You probably don't want to call this if you have a massive inbound form buffer, or if the data you're retrieving is binary. It's probably a good idea to check the inbound content type before calling this function with something like this:

        var formBuffer = string.Empty;

        if (Request.ContentType.StartsWith("text/") ||
            Request.ContentType == "application/x-www-form-urlencoded")
        {
            formBuffer = FormBufferToString();
        }

    to ensure you're working only on content types you can actually view as text. Now if I can only remember the name of this function in my library - it's part of the static WebUtils class in the West Wind Web Toolkit if you want to check out a number of other useful Web helper functions.

    © Rick Strahl, West Wind Technologies, 2005-2011 | Posted in ASP.NET

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
    Sometimes, something really odd happens when using our computers that makes no sense at all... such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today's SuperUser post has the answer to a puzzled reader's dilemma. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

    The Question

    SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up:

    I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600×10800.jpg) The image is 21,600*10,800 pixels in size. When I right click and select "Copy Image" in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1.

    Why would a simple image freeze Joban's computer up after copying it to the clipboard?

    The Answer

    SuperUser contributor Mokubai has the answer for us:

    "Copy Image" is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB.

    The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out.

    In short: do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare.

    Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: It starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
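    As a quick back-of-the-envelope check of the arithmetic in Mokubai's answer (a standalone sketch, not from the original article):

        using System;

        class RawImageSize
        {
            static void Main()
            {
                long width = 21600, height = 10800; // pixel dimensions of the height map
                long bytesPerPixel = 3;             // 24-bit image: one byte each for R, G, B

                long rawBytes = width * height * bytesPerPixel;       // 699,840,000 bytes
                Console.WriteLine("{0:F1} MB", rawBytes / 1000000.0); // 699.8 MB, the ~700 MB quoted
            }
        }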

    Read the article

  • Reflector Pro has now been released!

    - by CliveT
    After moving into the .NET division in May, and having a great time working on Reflector, I'm pleased to say that the results of that work are now available. Reflector Pro has now been released! The old Reflector as you know and love it is still available free of charge, and as part of this project we've fixed a number of bugs in the de-compilation that have been around for a long time. The Pro version comes as an add-in for Visual Studio - this offers dynamic de-compilation and generation of pdb files which allow you to step into the de-compiled code. Alex has some good pictures of this functionality on his beta post from around a month ago. Thanks to the other guys who've worked on this for taking me along for the ride - Alex, Andrew, Bart and Jason. Stephen did some great usability work, Chris Alford did some great technical authoring and Laila handled the launch publicity. Like all projects, there's always more I'd like to have done, but what we have looks like a pretty powerful addition to the developer's set of tools to me. Please try it and give us feedback on the forum.

    Read the article

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 - Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)
    Grid container support : JavaFX Composer - An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)
    John Brunswick: Site Studio Mobile Example - WCM Reuse - The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)
    @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls - "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." - The Aquarium (tags: oracle java sun glassfish cloud)
    James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh - "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." - James Morle (tags: oracle otn exadata database architecture virtualization)
    Java / Oracle SOA blog: HTTP binding in Soa Suite 11g PS2 (tags: ping.fm)
    Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)
    SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

    Read the article

  • SSIS Dashboard v0.4

    - by Davide Mauri
    Following the post on the SSISDB script on Gist, I've been working on an HTML5 SSIS Dashboard, in order to have a nice looking, user friendly and, most of all, useful SSIS Dashboard. Since this is a "spare-time" project, I've decided to develop it using Python, since it's THE data language (R aside); it's beautiful and powerful, well established, well documented, and has a rich ecosystem around it. Plus it has full support in Visual Studio, through the amazing Python Tools for Visual Studio plugin. I decided also to use Flask, a very good micro-framework to create websites, and the SB Admin 2.0 Bootstrap admin template, since I'm anything but a Web Designer. The result is here: https://github.com/yorek/ssis-dashboard and I can say I'm pretty satisfied with the work done so far (I've worked on it for probably less than 24 hours). Though there are some features I'd like to add in the future (historical execution time, some charts, connection with AzureML to do prediction on expected execution times), it's already usable. Of course I've tested it only on my development machine, so check twice before putting it in production but, given the fact that, virtually, there is no installation needed (you only need to install Python), and that all queries are based on standard SSISDB objects, I expect no big problems. If you want to test, contribute and/or give feedback, please feel free to do it... I would really love to see this little project become a community project! Enjoy!

    Read the article

  • Why am I getting this SVN can't move .svn/tmp/x to trunk/x error?

    - by Alex Waters
    I am trying to check out into the VirtualBox shared folder with SVN 1.7 in Ubuntu 12.04 running as a guest on a Windows 7 host. I had read that this error was a 1.6 problem, and updated - but am still receiving the error:

        svn: E000071: Can't move '/mnt/hostShare/code/www/.svn/tmp/svn-hsOG5X' to '/mnt/hostShare/code/www/trunk/statement.aspx?d=201108': Protocol error

    I found this blog post about the same error in a Mac environment, but am finding that changing the folder/file permissions does nothing. vim .svn/entries just has the number 12 - does this need to be changed? Thank you for any assistance! (Just another reason why I prefer git...)

    Read the article

  • SOA Community Newsletter May 2014

    - by JuergenKress
    Registration for the Fusion Middleware Summer Camps 2014 is open - register asap for one of our bootcamps, August 4th - 8th 2014 in Lisbon. Please read the details and pre-requisites carefully before you register. We expect that, like in the past, the conference will be booked out soon! If you can't make it to Lisbon, attend our free on-demand SOA Suite 11c Bootcamp or the Managing the Complexity of IoT online trainings.

    With more than 5000 customers, SOA Suite Achieves Significant Customer Adoption and Industry Recognition. Thanks to all our SOA Specialized partners for making our joint SOA customers successful! As a summary of the Industrial SOA series, we published the Podcast Show Notes: SOA and Cloud - Where's This Relationship Going? Make sure you use the Oracle Demo Systems for your customer presentations. The demo systems are hosted by Oracle and include complete scenarios based on the latest Middleware version, like the new B2B SOA Suite Demo System! For local presentations without fast internet, use the SOA/BPM 11.1.1.7.1 Virtual Machine and Case Management Sample. At our SOA Community Workspace (SOA Community membership required) you can get new IoT presentations for Location Based Offers for Banking & Whitepaper and online Webcast & Utility presentation.

    In this newsletter you will find many articles about OSB: OSB 11g - A Hands-on Tutorial & Using Split-Joins in OSB Services for parallel processing of messages & OSB, Service Callouts and OQL & Working with Oracle Security Token Service. Thanks for sharing all the additional SOA articles within the community: How to configure Oracle SOA/BPM task auto release & Controlling BPEL process flow at runtime & Upgrading to Oracle SOA Suite 11g PS6 (11.1.1.7)? Do this. & BPEL and BPM's performance monitoring using DMS & SOA 11g - Create RESTful Service In Oracle SOA & Wrong timezone causes TopLink warning in SOA suite.

    The highlight of the BPM and ACM section is the IDC BPM vendor report. The new bundle patch including the ACM UI is now available. If you want to learn more about ACM, get the ACM training material at our SOA Community Workspace (SOA Community membership required). A great demo for your next BPM presentation is the BPM iPad app. It's simple: Mobile BPM is Not An Option. It's a Necessity. Thanks for sharing all the additional BPM articles within the community: BPM update adds Case Management Web Interface and REST APIs & Implementing deadline functionality with Oracle Adaptive Case Management & BPM 11g Timeout Heuristics & Humantask Assignment: Names and Expressions Assignment via Rules.

    In our last section, Architecture, it is all about design. Usability is a key factor for customer satisfaction; it's worth spending some time to read the Simplified User Experience Design Patterns eBook. A great blueprint for your project! See you in Lisbon!

    To read the newsletter please visit www.tinyurl.com/soaNewsMay2014 (OPN Account required). To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Radeon Mobility HD 5470 Not Working

    - by Promather
    I recently bought a new HP DV6-3118SA laptop, but I am having a very discouraging problem with the graphics card. The graphics card is a Radeon Mobility HD 5470. It doesn't install by default, but I do get a message suggesting that I install the driver. If I install that driver, the next time I reboot the screen goes blank, and that's it! The same happens if I install the proprietary driver (fglrx) from the ATI website. Could you please help me with this?

    EDIT: Following @Ronald's and @Oli's advice, I am dumping the output of lspci -k:

        00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: agpgart-intel
                Kernel modules: intel-agp
        00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: i915
                Kernel modules: i915
        00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
                Subsystem: Hewlett-Packard Company Device 144a
        00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: ehci_hcd
        00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel
        00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05)
                Kernel driver in use: pcieport
                Kernel modules: shpchp
        00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: ehci_hcd
        00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
        00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel modules: iTCO_wdt
        00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: ahci
                Kernel modules: ahci
        00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel modules: i2c-i801
        00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 05)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: intel ips
                Kernel modules: intel_ips
        01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5000 Series]
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: radeon
                Kernel modules: radeon
        01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: HDA Intel
                Kernel modules: snd-hda-intel
        02:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe
                Subsystem: Hewlett-Packard Company Device 1453
                Kernel driver in use: rt2800pci
                Kernel modules: rt2860sta, rt2800pci
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
                Subsystem: Hewlett-Packard Company Device 144a
                Kernel driver in use: r8169
                Kernel modules: r8169
        7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
        7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
        7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
        7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
        7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a
        7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
                Subsystem: Hewlett-Packard Company Device 144a

    Read the article

  • Security Goes Underground

    - by BuckWoody
    You might not have heard of as many data breaches recently as in the past. As you're probably aware, I call them out here as often as I can, especially the big ones in government and medical institutions, because I believe those can have lasting implications on a person's life. I think that my data is personal - and I've seen the impact of someone having their identity stolen. It's a brutal experience that I wouldn't wish on anyone. So with all of that, it stands to reason that I hold data professionals to the highest standards on security. I think your first role is to secure the data you have: number one, because a breach can be so harmful, and number two, because the data isn't yours. It belongs to the person it describes.

    You might think I'm happy about that downturn in reported data losses. Well, I was, until I learned that companies have realized they suffer a lowering of their stock when they report a breach, but not when they don't. So, since we all do what we are measured on, they don't report. So now, not only are they not protecting your information, they are hiding the fact that they are losing it.

    So take this as a personal challenge. Make sure you have a security audit on your data, and treat any breach like a personal failure. We're the gatekeepers, so let's keep the gates.

    Read the article

  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named based on a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + Stage Order Number (1 digit) + "_" + index (2 digits) (please see the documentation for more information: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in a name 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage.

    In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100 and, therefore, there will be no issue. However, in some cases, the number of scripts generated can exceed 99. It is unusual for an application to generate more than 99 calculation scripts for one stage. This may indicate that explicit assignments are being used extensively. An assessment should be made of the design to see if assignment rules can be used instead. Assignment rules will reduce the need for so many calculation script lines, which will reduce the requirement for such a large number of calculation scripts.

    In cases where the number of scripts generated reaches 100, the name of the 100th calculation script is one character longer than that of the 99th: the name grows from 8 characters to 9 characters (e.g. A6811_100 rather than A6811_99). A name of 9 characters is not permitted in non-Unicode applications. It is "too long". When this occurs, an error will show in the hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file will give the name of the calculation script which is causing the issue. In my example, this is A6811_100 and you can see it is 9 characters in length.

    It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If the size of each calculation script is larger, the number of scripts generated will reduce and, therefore, fewer than 100 scripts will be generated, which means that the calculation script names will remain 8 characters long. To increase the size of the generated calculation scripts for an application, in the HPM_APPLICATION_PREFERENCE table for the application, find the row where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304 (4 MB). This can be increased, e.g. 7340032 will increase it to 7 MB. Please restart the profitability service after making the change.
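    To make the naming convention concrete, here is a small sketch (illustrative C#, not HPCM code) that composes script names per the documented rule, using the example values from above:

        using System;

        class HpcmScriptNames
        {
            // suffix (1 letter) + POV id (3 digits) + stage order (1 digit) + "_" + index
            static string ScriptName(char suffix, int povId, int stageOrder, int index)
            {
                return string.Format("{0}{1:D3}{2}_{3}", suffix, povId, stageOrder, index);
            }

            static void Main()
            {
                Console.WriteLine(ScriptName('A', 681, 1, 99));  // "A6811_99"  - 8 chars, OK
                Console.WriteLine(ScriptName('A', 681, 1, 100)); // "A6811_100" - 9 chars, too
                                                                 // long for a non-Unicode app
            }
        }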

    Read the article

  • Welcome to the Oracle FedApps blog

    - by jeffrey.waterman
    Congratulations, you have stumbled upon Oracle's newest blog: The Federal Applications Blog. Periodically I plan to provide some insight into how Oracle's application solutions are being applied, or how they can be applied, within the Federal Government. If you are a user of, or just interested in, Oracle's applications in the Federal space and have questions/topics you would like to see addressed in this blog, please post a comment. So bear with me as I take a bit of time to refine the content, look and feel of this blog. http://www.oracle.com/us/industries/public-sector/038044.htm http://www.oracle.com/us/industries/public-sector/038046.htm -- JMW

    Read the article

  • Linux Mint Maya Freezes

    - by timuçin
    Linux Mint freezes a couple of seconds after the desktop loads, in such a way that I have to cut the power in order to reboot; the mouse doesn't move, Ctrl+Alt+F1 doesn't do anything, and I think even the hard disk stops. This doesn't happen on every start, but when it does happen, I have to start recovery mode and run the option "dpkg" (the description is "repair broken packages" or something like that). If I don't do that and start the system normally, the same thing happens again.

    I have some clues that might help: The first time I installed Mint I had to install my wireless driver manually. The system didn't freeze before this, but since I installed the driver immediately after the Mint installation, that might easily be a coincidence. Even so, after I discovered the dpkg trick, for the first couple of times I did it, I found my wireless driver uninstalled and I had to reinstall it. The thing is, I can't be sure that the problem is my wireless driver, because the relation is not direct enough. Still, knowing my wireless adapter might help: Realtek L 8723.

    The next thing I am going to do is wait until it happens again and post the system log here.

    Read the article

  • apt-get upgrade gives "403 forbidden" error

    - by 3l4ng
    I'm running Ubuntu 13.04 64b. sudo apt-get update works fine, but when I run sudo apt-get upgrade I get these errors:

        Err http://archive.ubuntu.com/ubuntu/ raring-updates/main python3.3-minimal amd64 3.3.1-1ubuntu5.2
          403 Forbidden
        Err http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ raring/main gimp amd64 2.8.6-0raring1~ppa
          403 Forbidden
        Err http://archive.ubuntu.com/ubuntu/ raring-security/main python3.3-minimal amd64 3.3.1-1ubuntu5.2
          403 Forbidden
        Err http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/ raring/main gimp-help-en all 1:2.8-0raring16~ppa
          403 Forbidden
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python3.3/python3.3-minimal_3.3.1-1ubuntu5.2_amd64.deb  403 Forbidden
        Failed to fetch http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/pool/main/g/gimp/gimp_2.8.6-0raring1~ppa_amd64.deb  403 Forbidden
        Failed to fetch http://ppa.launchpad.net/otto-kesselgulasch/gimp/ubuntu/pool/main/g/gimp-help/gimp-help-en_2.8-0raring16~ppa_all.deb  403 Forbidden
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Running sudo apt-get upgrade --fix-missing installs some updates, but the above errors still persist when I run apt-get upgrade again. The software update app shows the error: https://www.dropbox.com/s/2cr450557hmahzz/software_update.jpg and selecting continue shows: https://www.dropbox.com/s/l7u32sxyfbxxeeg/soft_upd2.jpg (sorry for the links, I don't have enough rep to post images). I am behind a proxy, but apt-get update and web browsing work without issues. I also do not believe a server being down is causing this, as the problem has been there over a month. Any ideas on how to fix this?

    Read the article

  • How do I convince my team that a requirements specification is unnecessary if we adopt user-stories?

    - by Nupul
    We are planning to adopt user-stories to capture stakeholder 'intent' in a lightweight fashion rather than a heavy SRS (software requirements specification). However, it seems that though they understand the value of stories, there is still a desire to 'convert' the stories into an SRS-like language with all the attributes, priorities, inputs, outputs, sources, destinations etc. User-stories 'eliminate' the need for a formal SRS-like artifact to begin with, so what's the point in having an SRS? How should I convince my team (who are all very qualified CS folks, by the way - both by education and practice) that the SRS would be 'eliminated' if we adopted user-stories for capturing the functional requirements of the system? (NFRs etc. can be captured too, but that's not the intent of the question.)

    So here's my 'work-flow' argument: capture initial requirements as user-stories and later elaborate them to use-cases (which are required to be documented at a low level, i.e. describing interactions with the UI prototypes/mockups, and are a deliverable post deployment). Thus going from user-stories to use-cases, rather than from user-stories to SRS to use-cases. How are you all currently capturing user-stories at your workplace (if at all), and how do you suggest I 'make a case' for the absence of an SRS in the presence of user-stories?

    Read the article

  • XNA Seeing through heightmap problem

    - by Jesse Emond
    I've recently started learning how to program in 3D with XNA and I've been trying to implement a Terrain3D class (a very simple height map). I've managed to draw a simple terrain, but I'm getting a weird bug where I can see through the terrain. This bug happens when I'm looking through a hill on the map. Here is a picture of what happens: I was wondering if this is a common mistake for starters, and if any of you ever experienced the same problem and could tell me what I'm doing wrong. If it's not such an obvious problem, here is my Draw method:

        public override void Draw()
        {
            Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None,
                SpriteSortMode.Immediate, SaveStateMode.SaveState);

            Camera3D cam = (Camera3D)Parent.Engine.Services.GetService(typeof(Camera3D));
            if (cam == null)
                throw new Exception("Camera3D couldn't be found. Drawing a 3D terrain requires a 3D camera.");

            float triangleCount = indices.Length / 3f;

            basicEffect.Begin();
            basicEffect.World = worldMatrix;
            basicEffect.View = cam.ViewMatrix;
            basicEffect.Projection = cam.ProjectionMatrix;
            basicEffect.VertexColorEnabled = true;

            Parent.Engine.GraphicsDevice.VertexDeclaration = new VertexDeclaration(
                Parent.Engine.GraphicsDevice, VertexPositionColor.VertexElements);

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Begin();

                Parent.Engine.GraphicsDevice.Vertices[0].SetSource(vertexBuffer, 0,
                    VertexPositionColor.SizeInBytes);
                Parent.Engine.GraphicsDevice.Indices = indexBuffer;
                Parent.Engine.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    0, 0, vertices.Length, 0, (int)triangleCount);

                pass.End();
            }

            basicEffect.End();
            Parent.Engine.SpriteBatch.End();
        }

    Parent is just a property holding the screen that the component belongs to. Engine is a property of that parent screen holding the engine that it belongs to. If I should post more code (like the initialization code), then just leave a comment and I will.

    Read the article
