Search Results

Search found 559 results on 23 pages for 'decrease'.

Page 10/23 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • Current Technologies

    - by Charles Cline
    I currently work at the University of Kansas (KU), and before that at Stanford University, in particular the Stanford Linear Accelerator Center (SLAC). Collaborating with various Higher Ed institutions over the past several years has shown a marked increase in the Microsoft side of the house. To give you an idea of our current environment, here are some of the things we (Enterprise Systems) have been working on during the two years I've been at KU:
      - Migrated from Novell to Active Directory (AD), although we're still leveraging Novell for IDM. We currently have 550,000+ objects in AD, and we still have several departments to bring in.
      - Upgraded from Exchange 2003 to Exchange 2010 and Forefront Online Protection for Exchange (FOPE)
      - Implemented SCCM 2007 for Windows systems management
      - Implemented central file storage using EMC products for the backend, with CIFS as the frontend
      - Restructuring AD domains and forests to decrease administrative overhead and provide a primary authentication mechanism for the entire University
      - Determining Key Performance Indicators for AD and Exchange
      - Implemented SCOM 2007 to monitor AD and Exchange
      - Implemented Confluence for collaboration within IT and with other technology providers at the University
      - Implemented Data Protection Manager (DPM) for backup of AD and Exchange
      - Built a test and QA environment to better facilitate upcoming changes to the environment
      - Almost ready to raise the AD domain level to 2008 R2
    I'm sure I'm missing things, and my next post will cover some of the things we're getting ready for - like Centrify to provide AD for OS X and Linux systems. If anyone would like more info on a particular area, please drop me a line. I'd be happy to discuss.

    Read the article

  • Ubuntu 9.04 Presario S4000NX Fan Speed

    - by Chris C
    I recently installed Ubuntu 9.04 on a Presario S4000NX, and the CPU fan is kept at maximum speed. With Windows XP installed, the fan speed would increase and decrease as required. I've installed lm-sensors and run sensors-detect, which recommended that I load these modules, which I did: smsc47m192 and i2c-i801. While running, sensors-detect gave me this strange message: Trying family SMSC... Found SMSC LPC47M15x/192/997 Super IO Fan Sensors (but not activated). Running the sensors command gives me a list of voltages and the CPU temperature, but doesn't list any fans. After doing some Internet research I then tried to load the smsc47m1 module, but I get the following error: FATAL: Error inserting smsc47m1 (/lib/modules/2.6.28-15-generic/kernel/drivers/hwmon/smsc47m1.ko): no such device. The file smsc47m1.ko does exist in the listed folder. Any suggestions for getting the fan speed (and the noise) down in Ubuntu? Thanx. - Chris P.S. - I would have put better tags but Server Fault wouldn't let me.
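    A quick way to double-check what the kernel actually registered, independent of the lm-sensors tools, is to walk the hwmon tree in sysfs directly. The sketch below is not from the post; it assumes the standard /sys/class/hwmon layout (name, fanN_input and pwmN attributes), and on older kernels the name file may sit in a device/ subdirectory instead. If no fanN_input entries show up, the smsc47m1 driver never bound to the chip, which would match the "no such device" error.

    ```cpp
    // Sketch: list hwmon devices and any fan tachometer / PWM attributes they expose.
    // Assumes the standard Linux hwmon sysfs layout; build with -std=c++17.
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    namespace fs = std::filesystem;

    static std::string read_first_line(const fs::path& p) {
        std::ifstream in(p);
        std::string line;
        std::getline(in, line);   // empty string if the attribute doesn't exist
        return line;
    }

    int main() {
        const fs::path root{"/sys/class/hwmon"};
        if (!fs::exists(root)) {
            std::cerr << "no hwmon class in sysfs\n";
            return 1;
        }
        for (const auto& dev : fs::directory_iterator(root)) {
            std::cout << dev.path().filename().string() << " ("
                      << read_first_line(dev.path() / "name") << ")\n";
            for (const auto& attr : fs::directory_iterator(dev.path())) {
                const std::string name = attr.path().filename().string();
                // fanN_input = tachometer reading in RPM, pwmN = fan duty cycle.
                if (name.rfind("fan", 0) == 0 || name.rfind("pwm", 0) == 0) {
                    std::cout << "  " << name << " = "
                              << read_first_line(attr.path()) << "\n";
                }
            }
        }
        return 0;
    }
    ```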

    Read the article

  • Carbonite has taken over my iMac

    - by Larry Rothfork
    I used Carbonite to back up 75GB on my iMac. I also created a folder on my iMac to copy files to from an external hard drive, and then used Carbonite to back up from there. And THEN, thinking I had everything safely backed up, and in order to make room on my hard drive, I DELETED some of those files - and instead of increasing disk space, my free disk space has shrunk to 2GB... I know, I know, you can't use Carbonite like that, but now I have two questions. 1) What is the explanation for the decrease in disk space even though I have deleted about 20GB of those backed-up files from my hard drive? It must have something to do with the way Carbonite references backed-up files. And 2) Is there a way to extricate myself from this situation?

    Read the article

  • Huge google impression drop after cleaning html

    - by olgatorresfoundation
    Good morning, I am the webmaster of a non-profit organization that donates grants to colorectal cancer research projects and funds various colorectal cancer information campaigns. We have three domains: www.fundacioolgatorres dot org (Catalan), www.fundacionolgatorres dot org (Spanish) and www.olgatorresfoundation dot org (English). So what happened? I redesigned olgatorresfoundation on the 20th and fundacionolgatorres on the 30th of May. In both cases, exactly two days later, the number of impressions on both dropped to almost nothing. Granted, we did not have the traffic of Microsoft, but a 90% decrease is a disaster of incredible proportions for us. My only real change was cleaning up the old, ineffective HTML into a cleaner form (mostly moving away from redundant table construction to a table-less layout). Here is a before-and-after snapshot of what the change looks like: Before: http://www.fundacioolgatorres.org/aparell_digestiu/introduccio/ (unchanged page in Catalan) After: http://www.olgatorresfoundation.org/digestive_system/introduction/ (changed page in English). Does anybody have a clue as to what just happened? Why should a normal, sane HTML improvement be punished, and so dramatically? No URLs have been changed, and neither have page names or descriptions. Possible secondary question: if Google sees this as a major overhaul and decides to drop the PageRank sharply, does it come back to pre-change levels once the content "checks out", or will the page start over from scratch earning those PageRank points (which would mean that we would have to wait 6 months for the pages to recover to the level they had two weeks ago)? (duplicated from productforums.google dot com/forum/#!category-topic/webmasters/crawling-indexing--ranking/YsnyX0JzOpY, hoping to reach a wider audience)

    Read the article

  • Mac download speed keeps decreasing

    - by hatorade
    I have a Mac that is getting extremely low connection speed from my WiFi. The other 3 computers in this house have a fast connection. However, on this Mac, once I connect to WiFi it's fast, but as time goes on the speed decreases dramatically. I thought it was the browser or something (Safari) so I downloaded Firefox, but I have watched the download speed decrease consistently as time goes by and right now it's at 8kb/sec instead of the 60-200 range it started at. Any suggestions?

    Read the article

  • AWS Autoscaling issue with existing nodes in ELB

    - by Ram Prasad
    I already have an ELB set up called MyLoadBalancer, with 2 nodes already running on it with health checks (which check a URL on each node to see if it is up). I then:
      - created an autoscaling group (min 2, max 10)
      - associated a launch config, mylaunchconfig, that provisions a node from an AMI
      - created a trigger that checks for average connections between a minimum of 100 and a maximum of 500 on the load balancer (it is supposed to increase the node count by 1 if average connections reach 500 and decrease it by 1 if they drop below 100): as-create-or-update-trigger MyTrigger --auto-scaling-group MyAutoScalingGroup --namespace "AWS/ELB" --measure RequestCount --statistic Average --dimensions "LoadBalancerName=MyLoadBalancer" --period 60 --lower-threshold 500 --upper-threshold 800 --lower-breach-increment=-1 --upper-breach-increment=1 --breach-duration 600
    Now the issue is, as soon as I put in the trigger, it starts 2 nodes... but there are already two nodes in the LB. So why is it provisioning 2 more nodes when the nodes are already there? Is it because it is not recognizing the existing 2 nodes? If so, how do I add the existing nodes to the AutoScaling group?

    Read the article

  • ArchBeat Link-o-Rama for November 15, 2012

    - by Bob Rhubart
      - WLST Starting and Stopping a WebLogic Environment | Rene van Wijk: Oracle ACE Rene van Wijk explores how to start a server with as little input as possible.
      - Developing and Enforcing a BYOD Policy | Darin Pendergraft: Darin Pendergraft's post includes links to a recent Mobile Access Policy Survey by SANS as well as registration information for a Nov 15 webcast featuring security expert Tony DeLaGrange from Secure Ideas, SANS instructor, attorney and technology law expert Ben Wright, and Oracle IDM product manager Lee Howarth.
      - Cloud Integration White Paper Now Available | Bruce Tierney: Bruce Tierney shares an overview of Cloud Integration - A Comprehensive Solution, a new white paper he co-authored with David Baum, Rajesh Raheja, Bruce Tierney, and Vijay Pawar.
      - My iPad & This Cloud Thing | Floyd Teter: Oracle ACE Director Floyd Teter explains why the Cloud is making it possible for him to use his iPad for tasks previously relegated to his laptop, and why this same scenario is likely to play out for a great many people.
      - 3 steps to a cloud database strategy that works | InfoWorld: "Every day, cloud-based databases add more features, decrease in cost, and become better at handling prime-time business," says InfoWorld blogger David Linthicum. "However, enterprise IT is reluctant to move data to public clouds, citing the tried-and-true excuses of security, privacy, and compliance. Although some have valid points, their reasons often boil down to 'I don't wanna.'"
      - Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available | Elke Phelps: "The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server," says Elke Phelps. "You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and the software install (via the EBS Rapid Install)."
      - Thought for the Day: "A good plan executed today always beats a perfect plan executed tomorrow." — George S. Patton (November 11, 1885 - December 21, 1945) Source: SoftwareQuotes.com

    Read the article

  • Webcast: June 29th at 11am Eastern - Optimize ePermitting Reviews & Approvals with AutoVue

    - by Warren Baird
    I'm pleased to announce that the Enterprise Visualization special interest group (SIG) is organizing its first webcast on June 29th - Palm Beach County is going to present how they use AutoVue as part of their e-permitting processes. This is a must-see for anyone in the Public Sector, but even for people who aren't in the Public Sector it should be very interesting to see how Palm Beach County has tied AutoVue tightly into their business processes. If you haven't already done so, I'd suggest joining our SIG at http://groups.google.com/group/enterprise_visualization_sig. The registration link for the webcast is https://www3.gotomeeting.com/register/565294190 - more details are below.
    The Enterprise Visualization Special Interest Group (EVSIG) is proud to present the first in a series of webcasts designed to educate the AutoVue user community on innovative and compelling AutoVue solutions. Attend the webcast and discover how AutoVue can make building permit application and approval processes more efficient.
    Presenters:
      - Oracle: Warren Baird, Principal Product Manager, AutoVue Enterprise Visualization
      - Palm Beach County: Paul Murphy, Systems Integrator; Laura Yonkers, Permit Section Supervisor; Chuck Lemon, Project Business Analyst
    Abstract: In their efforts to deliver better services to citizens, save money and "think green", many cities, states and local governments have implemented online e-permitting processes that allow developers and citizens to apply for and receive building permits via the Web. Attend this webcast and discover how AutoVue visualization solutions enhance ePermitting processes by streamlining the review and approval of digital permit applications. Hear from Palm Beach County about how they leverage AutoVue within their ePermitting system to:
      - provide structure to the land development review and approval process
      - accelerate and improve efficiency throughout the permitting process
      - decrease permit review times
      - increase the level of transparency during the permit application and review process
      - improve accountability in the organization
      - improve citizen services by providing a 24/7 ability to submit and track applications
    Sign up for the Enterprise Visualization SIG to learn about future AutoVue webcasts. Register today at http://groups.google.com/group/enterprise_visualization_sig and become a part of our growing online user community. We look forward to seeing you on the 29th of June.

    Read the article

  • Drivers for QuickCam Pro 9000 on Windows 7

    - by runaros
    I have a Logitech QuickCam Pro 9000 that I want to install on my Windows 7 installation. The user manual specifies that I have to install the software before I install the camera, but my experience is that software from hardware vendors tends to decrease computer performance, so I was wondering if this camera will work by only plugging it in and letting Windows 7 find and install the drivers for it. I could've just tried installing it, of course, but again, hardware vendors are notorious for fucking up things, and I wouldn't want to make it impossible to install the camera by doing things in the wrong order - hence the question here on SuperUser, because I assume somebody is more knowledgeable than me on this subject.

    Read the article

  • Problem with xVideoServiceThief

    - by Nrew
    xVideoServiceThief is an application that allows you to download YouTube videos and convert them into .avi format. The problem occurs when it tries to convert the .flv into an .avi: there is no audio when you play the converted video, but the .flv plays fine. I have also enabled the Intel(R) SpeedStep feature on my processor (Pentium Dual-Core E5200), with the help of the Granola software, to decrease the power consumed by the processor. What might be the reason for the audio-less video converted by xVideoServiceThief? Could it be the enabling of Intel(R) SpeedStep? The software works fine when it is not enabled. Is it possible that the output of some applications can be altered by enabling this processor feature?

    Read the article

  • How Can You Get More Productive In Life Sciences Sales?

    - by charles.knapp
    Only half of all doctors will meet with pharmaceutical sales reps, and that percentage continues to decrease. Furthermore, when reps are granted an opportunity to share information, the average interaction is only about a minute and a half. Concurrently, call quotas continue to increase. Why does this matter? Sales reps need to spend less time on traditional planning and after-call reporting, spend more time making calls, and make more productive use of short presentation times. Fortunately for sales reps, Oracle offers the first life sciences CRM that is designed to double sales time and halve reporting time. In particular, our new Life Sciences Edition Offline Client is designed so that you can actually turn the screen around, so that your CRM is useful for presentations and not just reporting, whether you are connected to the cloud or working offline, such as in restricted clinical environments. Watch Piers Evans, Industry Strategy Director, show what this looks like in the day of a typical pharmaceutical sales representative.

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to automatic in DB2 V9.7 by default so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.
    DATABASE_MEMORY: when this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates all memory with a small page size (64KB) and expands and shrinks the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.
    NUM_IOCLEANERS: this parameter defines the number of page cleaners. Its default value is AUTOMATIC, which is calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. That leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. A good practice is to set it to the number of physical devices used by the database table space containers.

    Read the article

  • Bad 3D Performance in Ubuntu 12.04

    - by Pandem
    I already posted a question before but didn't really get any advice/help, so I'll be a bit more brief/general in the hope that it helps. I have an MSI HD 7850 with the Catalyst 12.4 drivers installed. I've found that I'm getting bad 3D performance for some reason, but I'm not entirely sure why. I suspect it may just be that the graphics card is new and AMD need to work on their drivers, but it would be nice to get advice and narrow the problem down so that I can be sure rather than wait for driver updates that may not even help. I ran glxgears to give some general idea of how bad the performance is: at the default size it averages around 2000 FPS. The command glxinfo confirms the renderer is using AMD Radeon HD 7800 Series with OpenGL version 4.2. Edits below, as asked for by others: lspci -v output is here. fglrxinfo output is here. xvinfo output is here. glxinfo | grep rendering says yes for direct rendering. These confirmed that everything was configured correctly. Within Unity and GNOME Classic, glxgears ran at around 2000 FPS and fgl_glxgears at around 544 FPS. Within LXDE, glxgears ran at around 4600 FPS and fgl_glxgears at around 1600 FPS. In the end it was discovered that Compiz was causing a large performance decrease, and the solution was simply to change window manager for the time being. Thanks to TechZilla for all his help!

    Read the article

  • how much more memcache memory do i need to get 95% hit ratio? [on hold]

    - by OneSolitaryNoob
    I have a memcache instance running that has a 90% hit ratio. How can I estimate how much more memory it needs to get to a 95% hit ratio? Edit: this question was blocked, but I do not think it is impossible to answer. After all, anyone who has used a caching system has answered this question, most likely with trial & error & luck. I can look at my usage patterns. I can increase or decrease memory and see how the hit rate changes. Both of these provide data that informs an estimate. But what's a good/better/best way to do this?
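    One piece of the estimate can at least be made precise: at a fixed request volume, going from a 90% to a 95% hit ratio means cutting the miss count in half, because the miss rate has to fall from 10% to 5%. The sketch below just works through that arithmetic with made-up request counts (the real numbers would come from memcached's get_hits / get_misses stats); how much extra memory it takes to turn those misses into hits still depends on the key and item-size distribution, which is where the measuring comes in.

    ```cpp
    // Sketch: what a target hit ratio implies about misses at a fixed request volume.
    // The counters are hypothetical placeholders for memcached's get_hits / get_misses.
    #include <cstdint>
    #include <iostream>

    int main() {
        const std::uint64_t hits   = 9'000'000;   // placeholder: current get_hits
        const std::uint64_t misses = 1'000'000;   // placeholder: current get_misses
        const double total = static_cast<double>(hits + misses);

        const double current_ratio = hits / total;   // 0.90 with these numbers
        const double target_ratio  = 0.95;

        // At the same request volume, the target ratio caps misses at:
        const double allowed_misses   = (1.0 - target_ratio) * total;
        const double misses_to_remove = misses - allowed_misses;

        std::cout << "current hit ratio: " << current_ratio << "\n"
                  << "misses that must become hits: " << misses_to_remove
                  << " (" << 100.0 * misses_to_remove / misses << "% of current misses)\n";
        // How much memory that takes depends on which keys miss and how large they are,
        // which is why changing the cache size and re-measuring is the usual loop.
        return 0;
    }
    ```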

    Read the article

  • Big AdventureWorks2012

    - by jamiet
    Last week I launched AdventureWorks on Azure, an initiative to make SQL Azure accessible to anyone, in my blog post AdventureWorks2012 now available for all on SQL Azure. Since then I think it's fair to say that the reaction has been lukewarm, with 31 insertions into the [dbo].[SqlFamily] table and only 8 donations via PayPal to support it; on the other hand, those 8 donors have been incredibly generous and we nearly have enough in the bank to cover a full year's worth of availability. It was always my intention to try and make this offering more appealing, and to that end I have used an adapted version of Adam Machanic's make_big_adventure.sql script to massively increase the amount of data in the database and give the community more scope to really push SQL Azure and see what it is capable of. There are now two new tables in the database: [dbo].[bigProduct], with 25200 rows, and [dbo].[bigTransactionHistory], with 7827579 rows. The credentials to log in and use AdventureWorks on Azure are as they were before: Server: mhknbn2kdz.database.windows.net, Database: AdventureWorks2012, User: sqlfamily, Password: sqlf@m1ly. Remember, if you want to support AdventureWorks on Azure simply click here to launch a pre-populated PayPal Send Money form - all you have to do is log in, fill in an amount, and click Send. We need more donations to keep this up and running, so if you think this is useful and worth supporting, please please donate. I mentioned that I had to adapt Adam's script, the main reasons being: cross-database queries are not yet supported in SQL Azure, so I had to create a local copy of [dbo].[spt_values] rather than reference the one in [master]; SELECT…INTO is not supported in SQL Azure; and the 1GB limit of the SQL Azure Web edition meant that there would not be enough space to store all the data generated by Adam's script, so I had to decrease the total number of rows. The amended script is available on my SkyDrive at https://skydrive.live.com/redir.aspx?cid=550f681dad532637&resid=550F681DAD532637!16756&parid=550F681DAD532637!16755 @Jamiet

    Read the article

  • Backup of a streaming server

    - by Maxwell
    I want to get a new streaming server for my website, which mostly holds video and audio files. But how do we maintain a backup of the streaming server if its storage size is increasing day by day? On database servers such as SQL Server, backups can easily be taken and restored, as they do not occupy much space for medium-range applications. On the other hand, how can we take a backup of a streaming server? If the server fails, there should be an alternative server / solution that reduces the downtime. How is the back-end architecture of YouTube built to handle this?

    Read the article

  • Bad temperature sensors on Foxconn motherboard?

    - by Gawain
    I have a system with a Foxconn V400 series motherboard and AMD Athlon 3000+ processor. Ever since I got it a few years ago the fans (particularly the CPU fan) have been really loud. So recently I installed SpeedFan to see why they were running so fast. SpeedFan reported the CPU temperature to be 32C, and one motherboard sensor at about 26C. But the other two motherboard sensors were reporting 78C and 64C respectively. Naturally the fans were both maxed out because of this, with the CPU fan at 5800rpm and the case fan at 2400rpm. I opened the case and everything inside was literally cool to the touch, with the exception of the CPU heatsink which was slightly warm, but nowhere near 78C. It seems like the temperature sensors are either defective or being read incorrectly. Is there some way I can decrease my fan noise without risking damage to my processor? Some way to ignore those two temp sensors? Any help would be greatly appreciated.

    Read the article

  • Interfaces and Virtuals Everywhere????

    - by David V. Corbin
    First a disclaimer: this post is about micro-optimization of C# programs and does not apply to most common scenarios - but when it does, it is important to know. Many developers are in the habit of declaring members virtual to allow for future expansion, or of using interface-based designs [1]. Few of these developers think about what the runtime performance impact of this decision is. A simple test shows that the decision can have a serious impact. For our purposes, we used a simple loop to time the execution of 1 billion calls to both non-virtual and virtual implementations of a method that took no parameters and had a void return type:
      Direct Call: 1.5uS
      Virtual Call: 13.0uS
    The overhead of the call increased by nearly an order of magnitude! Once again, it is important to realize that if the method does anything of significance then this ratio drops quite quickly. If the method does just 1mS of work, then the differential only accounts for a 1% decrease in performance. Additionally, the method in question must be called thousands of times in order to produce a measurable impact at the application level. Yet consider a situation such as the per-pixel processing of a graphics application: here we may have a method which is called millions of times, and even the slightest increase in overhead can have significant ramifications. In this case, using either explicit virtuals or interface-based constructs is likely to be a mistake. In conclusion, good design principles should always be the driving force behind decisions such as these; but remember that these decisions do not come for free.
    [1] When a concrete class member implements an interface it does not need to be explicitly marked as virtual (unless, of course, it is to be overridden in a derived concrete class). Nevertheless, when accessed via the interface it behaves exactly as if it had been marked as virtual.
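    The post quotes the results but not the harness. For anyone who wants to reproduce the effect, here is a rough sketch of such a micro-benchmark in C++ rather than C# (an illustrative analogue, not the author's code). The same caveats apply: an optimizer may inline the direct call or devirtualize the indirect one, which is exactly why the gap only shows up for tiny, extremely hot methods.

    ```cpp
    // Sketch of a direct-vs-virtual call micro-benchmark, in the spirit of the
    // C# numbers quoted above (illustrative C++, not the author's harness).
    #include <chrono>
    #include <cstdint>
    #include <iostream>

    struct Worker {
        void direct_call() { ++counter; }            // non-virtual
        virtual void virtual_call() { ++counter; }   // dispatched through the vtable
        virtual ~Worker() = default;
        std::uint64_t counter = 0;
    };

    template <typename F>
    static double seconds_for(F&& call, std::uint64_t n) {
        const auto start = std::chrono::steady_clock::now();
        for (std::uint64_t i = 0; i < n; ++i) call();
        const std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        return elapsed.count();
    }

    int main() {
        constexpr std::uint64_t kCalls = 1'000'000'000ULL;
        Worker w;
        Worker* p = &w;   // indirect call; note that an optimizer may still devirtualize this

        std::cout << "direct:  " << seconds_for([&] { w.direct_call();  }, kCalls) << " s\n";
        std::cout << "virtual: " << seconds_for([&] { p->virtual_call(); }, kCalls) << " s\n";
        std::cout << "counter: " << w.counter << "\n";   // keep the work observable
        return 0;
    }
    ```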

    Read the article

  • Ubuntu Karmic, Maximize delay problem

    - by Loukas990
    Hello, I have tried various Ubuntu versions - Ubuntu Karmic Koala 9.10 x86, the Ubuntu 10.04 beta DVD, Ubuntu 9.10 AMD64, Mandriva 2010 x86, and also a Fedora release - and after installing Compiz on all of them, everything works normally and smoothly, even the Compiz effects (cube, water, fire, wobbling windows, etc.), except for the Maximize effect, where I get a delay of 2+ seconds. This has got me thinking: why am I getting this delay "problem"? Do you think there is a solution, or at least a way to decrease the delay? Thanks in advance. ;) P.S. My laptop is a Toshiba Satellite P300d, AMD Turion 64 X2 Dual-Core TL-64 (2.20 GHz), ATI Mobility Radeon™ HD 3650 512MB, 3GB RAM.

    Read the article

  • How to increase the number of items in the "recent" folder in Windows 7?

    - by netvope
    Windows 7 keeps a list of recently used files in C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Recent. Based on my observation, it keeps 10 items for each file extension, so when you open 11 .txt files in a row, the 2nd to the 11th items will stay in that Recent folder, but the first one will be gone. My question is: how do I keep an unlimited number of items in that Recent folder? Note: increasing the per-application recent items (e.g. as in http://www.mydigitallife.info/2009/05/21/change-increase-or-decrease-number-of-recent-or-frequent-items-displayed-in-windows-7-taskbar-jump-list/ ) has no effect on the per-user Recent folder I'm asking about.

    Read the article

  • Automate #include refactoring in C++ [on hold]

    - by Mikhail
    I have a big project with hundreds of files, and as often happens in C++ projects, the #include directives are a mess. I want to refactor them to increase clarity, decrease compilation time and simplify analysis. For each .h file I want to make sure that it has #include directives only for the types it actually uses, and only forward declarations for types that are used as T* or T&. For each .cpp file I want to make sure that it has #include directives only for the types it uses that are not already pulled in by other headers (no indirect includes where possible). I'm looking for a tool to help me automate this refactoring. So far I only know of tools that help remove redundant includes, and there are many of them: PC-lint, include-what-you-use, cppclean, ProFactor IncludeManager. But I know of no tools that help me move necessary includes into .h files or replace includes with forward declarations. Any ideas? Tools for Windows and Visual Studio are preferred. Update: considered to be off-topic; please follow the link to Software Recommendations: http://softwarerecs.stackexchange.com/q/4461/3331
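    To make the header rule concrete, here is a small illustrative example (the types are hypothetical, not from the project in question): the by-value member forces a real #include, while the pointer and reference uses only need forward declarations, so their headers can stay in the .cpp file.

    ```cpp
    // panel.h -- illustrative header following the rule described above (hypothetical types).
    #pragma once
    #include "widget.h"   // needed: Widget is a by-value member, so its full definition is required

    class Logger;          // forward declaration is enough: only used as Logger*
    class Theme;           // forward declaration is enough: only used as const Theme&

    class Panel {
    public:
        explicit Panel(Logger* log);
        void apply(const Theme& theme);   // reference parameter: no #include needed here
    private:
        Widget  title_bar_;   // by-value member: forces the #include above
        Logger* log_;         // pointer member: forward declaration suffices
    };

    // panel.cpp would then #include "panel.h", "logger.h" and "theme.h",
    // keeping the heavier headers out of every file that includes panel.h.
    ```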

    Read the article

  • dhcpd pool exhaustion - What's the result?

    - by jarmund
    I have a DHCP server that serves leases to several hundred, maybe up to a thousand, different clients on an average day. The pool consists of 242 IPs, and due to the highly dynamic nature of the network that's enough 99% of the time (most devices are gone from the network within a few minutes), despite a lease time of 3600 seconds. Now, imagine that more clients than that connect to the network during an hour. The solution is obvious: decrease the lease time or increase the DHCP pool. However, what I would like to know is: what happens when dhcpd has exhausted the pool? Are new DHCP requests simply ignored?

    Read the article

  • Why do my download speeds drastically vary during a download?

    - by J. Anthony Carter
    I watch the download speed rise and fall like waves in a storm. At night, during low bandwidth usage, I have achieved speeds as high as 3.23 M/sec, but then watched them decline to 250 K/sec and climb back up, over and over. During the day my best is around 1.67 M/sec, with lows down to 65 K/sec. On top of this, why does a download need to slow down when approaching the end of the download? It's not like a multi-hundred-ton train needing to decrease speed as it approaches the station.

    Read the article

  • Z-order with Alpha blending in a 3D world

    - by user41765
    I'm working on a game in a 3D world with 2D sprites only (like the game Don't Starve). (OpenGL ES2 with C++.) Currently I'm ordering elements back to front before drawing them, without batching (so 1 element = 1 draw call). I would like to implement batching in my framework to decrease draw calls. Here is what I've got for the moment:
      - Order all elements of my scene back to front.
      - Send the ordered list of elements to the renderer.
      - The renderer looks in its batch manager to see whether a batch exists for the given element with its material.
      - If the batch doesn't exist, create a new one; if a batch already exists for this material, add the sprite to it.
      - Compute a big mesh with all the sprites of each batch (1 material type = 1 batch).
      - When all batches are ready, the batch manager computes draw commands for the renderer.
      - The renderer processes the draw commands (bind shader, bind textures, bind buffers, draw elements).
    Image with my problem here: Explication here. But I've got a problem: objects can be behind other objects that sit inside another batch. How can I handle something like that? Thanks!
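    For what it's worth, the usual way to keep blending correct is to let the back-to-front order drive the batching and only merge consecutive sprites that share a material, starting a new batch whenever the material changes. The sketch below (with made-up Sprite and Batch types, not the poster's classes) shows that grouping step; the trade-off is that strictly alternating materials degenerate back to one draw call per sprite, which is why engines also try to keep sprites of the same material close together in depth.

    ```cpp
    // Sketch: build draw batches from an already back-to-front sorted sprite list,
    // breaking a batch whenever the material changes so alpha blending stays correct.
    // Sprite and Batch are illustrative stand-ins, not the poster's actual classes.
    #include <vector>

    struct Sprite {
        int   material_id;   // shader + texture combination
        float depth;         // camera-space depth; the input list is assumed sorted on this
        // ... vertex data would live here
    };

    struct Batch {
        int material_id;
        std::vector<const Sprite*> sprites;   // one draw call per batch
    };

    std::vector<Batch> build_batches(const std::vector<const Sprite*>& sorted_back_to_front) {
        std::vector<Batch> batches;
        for (const Sprite* s : sorted_back_to_front) {
            // Merge into the previous batch only if it uses the same material;
            // otherwise start a new batch so the global draw order is preserved.
            if (batches.empty() || batches.back().material_id != s->material_id) {
                batches.push_back(Batch{s->material_id, {}});
            }
            batches.back().sprites.push_back(s);
        }
        return batches;
    }
    ```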

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >