Search Results

Search found 9992 results on 400 pages for 'space efficiency'.

Page 162/400 | < Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >

  • How To Enlarge a Virtual Machine’s Disk in VirtualBox or VMware

    - by Chris Hoffman
    When you create a virtual hard disk in VirtualBox or VMware, you specify a maximum disk size. If you want more space on your virtual machine’s hard disk later, you’ll have to enlarge the virtual hard disk and partition. Note that you may want to back up your virtual hard disk file before performing these operations – there’s always a chance something can go wrong, so it’s always good to have backups. However, the process worked fine for us.
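
    As a hedged illustration (the command is real, but the file name and size below are placeholders rather than values from the article): VirtualBox’s command-line tool can grow a dynamically allocated VDI, with the new size given in MB, so 40960 here means roughly 40 GB:

      VBoxManage modifyhd "C:\VMs\WindowsXP.vdi" --resize 40960

    Growing the file only enlarges the virtual disk itself; as the article notes, you still have to extend the partition inside the guest afterwards (for example with GParted or Windows Disk Management).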

    Read the article

  • European e-government Action Plan all about interoperability

    - by trond-arne.undheim
    Yesterday, the European Commission released its European eGovernment Action Plan for 2011-2015. The plan includes measures on providing deeper user empowerment, enhancing the Internal Market, more efficiency and effectiveness of public administrations, and putting in place pre-conditions for developing e-government. The Good - Defines interoperability very clearly. Calls interoperability "a pre-condition for cross-border eGovernment services" (a very strong formulation) and says interoperability "is supported by open specifications". - Uses the terminology "open specifications" which, let's face it, is pretty close to "open standards" which is the term the rest of the world would use. - Confirms that Member States are fully committed to the political priorities of the Malmö Declaration (which was all about open standards) including the very strong action: by 2013: All Member States will have incorporated the political priorities of the Malmö Declaration in their national strategies. Such tight Action Plan integration between Commission and Member State priorities has seldom been attempted before, particularly not in a field where European legal competence is virtually non-existent. What we see now is the subtle force of soft power rather than the rough force of regulation. In this case, it is the Member States who want Europe to take the lead. Very refreshing! Some quotes that show the commitment to interoperability and open specifications: "The emergence of innovative technologies such as "service-oriented architectures" (SOA), or "clouds" of services, together with more open specifications which allow for greater sharing, re-use and interoperability reinforce the ability of ICT to play a key role in this quest for efficiency in the public sector." (p.4) "Interoperability is supported through open specifications" (p.13) 2.4.1. Open Specifications and Interoperability (p.13 has a whole section dedicated to this important topic. Open specifications and interoperability are nearly 100% interrelated): "Interoperability is the ability of systems and machines to exchange, process and correctly interpret information. It is more than just a technical challenge, as it also involves legal, organisational and semantic aspects of handling data" (p.13) "standards and open platforms offer opportunities for more cost-effective use of resources and delivery of services" (p.13). The Bad - Shies away from defining open standards, or even open specifications, the EU's preferred term for the key enabler of interoperability. Verdict: 90/100, a very respectable score.

    Read the article

  • What is the most efficient way to blur in a shader?

    - by concernedcitizen
    I'm currently working on screen space reflections. I have perfectly reflective mirror-like surfaces working, and I now need to use a blur to make the reflection on surfaces with a low specular gloss value look more diffuse. I'm having difficulty deciding how to apply the blur, though. My first idea was to just sample a lower mip level of the screen rendertarget. However, the rendertarget uses SurfaceFormat.HalfVector4 (for HDR effects), which means XNA won't allow linear filtering. Point filtering looks horrible and really doesn't give the visual cue that I want. I've thought about using some kind of Box/Gaussian blur, but this would not be ideal. I've already thrashed the texture cache in the raymarching phase before the blur even occurs (a worst case reflection could be 32 samples per pixel), and the blur kernel to make the reflections look sufficiently diffuse would be fairly large. Does anyone have any suggestions? I know it's doable, as Photon Workshop achieved the effect in Unity.
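
    One common route (a general technique, not something the poster confirms using) is a two-pass separable Gaussian: downsample the reflection buffer once, then blur horizontally and vertically with a small set of precomputed weights passed in as shader constants, scaling the radius or sigma by the surface's gloss value. Two passes of radius r cost 2(2r + 1) taps per pixel instead of (2r + 1) squared, which keeps the kernel affordable even after an expensive raymarch. A minimal C++-style sketch of the CPU-side weight computation (function name and parameters are illustrative):

      #include <cmath>
      #include <vector>

      // Normalized one-sided Gaussian weights for taps at offsets 0..radius;
      // taps at -i and +i reuse w[i], so the normalization counts them twice.
      std::vector<float> gaussianWeights(int radius, float sigma)
      {
          std::vector<float> w(radius + 1);
          float sum = 0.0f;
          for (int i = 0; i <= radius; ++i) {
              w[i] = std::exp(-(i * i) / (2.0f * sigma * sigma));
              sum += (i == 0) ? w[i] : 2.0f * w[i];
          }
          for (float& v : w) v /= sum;   // kernel sums to 1, preserving brightness
          return w;
      }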

    Read the article

  • Apache Prefork Configuration

    - by user1618606
    I'm a newbie at VPS configuration. I've installed Apache, PHP and MySQL, and now I need to know how to configure prefork to optimize Apache. The system configuration is: CPU Cores 2 x 2 GHz @ 4 GHz, RAM 2304 MB DDR3, Burst Memory 3 GB DDR3, Disk Space 30 GB SSD, Bandwidth 3 TB, SwitchPort 1 Gbps. After Linux, MySQL, Apache and PHP, about 250 MB of memory is in use. I have no idea how to calculate the right values. On some websites I saw variables like: KeepAlive On KeepAliveTimeout 1 MaxKeepAliveRequests 100 StartServers 15 MinSpareServers 15 MaxSpareServers 15 MaxClients 20 MaxRequestsPerChild 0 or StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadsPerChild 25 MaxRequestsPerChild 0 What should I choose: prefork or worker? And where and how should these variables be placed? In httpd.conf?
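
    A rough sizing sketch (the ~25 MB per child is an assumption to be measured on the actual box with top or ps, not a figure from the question): with 2304 MB of RAM and about 250 MB already in use, (2304 - 250) / 25 is roughly 80, so a conservative prefork starting point might be MaxClients in the 60-70 range, StartServers and MinSpareServers around 5, MaxSpareServers around 10, and MaxRequestsPerChild in the low thousands so leaking children get recycled. These directives go inside an <IfModule mpm_prefork_module> block in httpd.conf (or a file it includes). As for prefork vs. worker: mod_php is generally run under prefork because common PHP builds are not thread-safe, while worker/event suit PHP served over FastCGI.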

    Read the article

  • Easy to use cross-platform 3D engines for C++ game development?

    - by davr
    I want to try my hand at writing a 3D game. However I don't want to start at such a low level of drawing individual triangles and writing my own 3D object loader and so on. I've heard of things like Irrlicht, Crystal Space 3D, and Cafu, but I don't have any experience with any of them. I'm looking for suggestions from people who have experience with these or other engines on which ones are well written, and are easy to get started using, without having to learn a ton of 3D math theory and how GPU's work internally.

    Read the article

  • Predicting advantages of database denormalization

    - by Janus Troelsen
    I was always taught to strive for the highest Normal Form of database normalization, and we were taught Bernstein's Synthesis algorithm to achieve 3NF. This is all very well and it feels nice to normalize your database, knowing that fields can be modified while retaining consistency. However, performance may suffer. That's why I am wondering whether there is any way to predict the speedup/slowdown when denormalizing. That way, you can build your list of FD's featuring 3NF and then denormalize as little as possible. I imagine that denormalizing too much would waste space and time, because e.g. giant blobs are duplicated or it becomes harder to maintain consistency because you have to update multiple fields using a transaction. Summary: Given a 3NF FD set, and a set of queries, how do I predict the speedup/slowdown of denormalization? Links to papers appreciated too.
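
    An illustrative back-of-the-envelope (all numbers invented purely for the example): suppose a 3NF schema keeps customer_name only in customer, and a report over 10 million orders joins to customer on every run. Copying the name into orders duplicates roughly 20 bytes x 10M, about 200 MB, and adds one extra UPDATE per name change, but removes one join (and its index probes) from every report run. Predicting the net effect means pricing both sides (extra storage and extra writes versus joins avoided), weighted by how often each query and each update actually runs, which is essentially what a cost-based optimizer does; a practical shortcut is to build both variants with representative data and compare the optimizer's cost estimates or EXPLAIN plans for the query set.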

    Read the article

  • Objective - Gestures while finger touches screen

    - by marcg11
    I'm creating a space cocos2d game with Objective-C. In the bottom left I have 2 arrows to move the sprite left or right. I also implemented a swipe gesture to change weapons; however, it only works when I'm not touching the screen. I would like the player to change weapons while he's moving the sprite, without having to lift the finger from the arrows and stop moving the sprite. Is there any way I can detect gestures while a finger is pressed on a button on the screen?

    Read the article

  • links for 2010-04-22

    - by Bob Rhubart
    Barry N. Perkins: Unique Business Value vs. Unique IT "Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution." -- Barry N. Perkins, VP Oracle Modernization & Oracle Integrated Solutions (tags: oracle otn enterprisearchitecture modernization) Paul Homchick: The Information Driven Value Chain - Part 2 Paul Homchick continues his series with a look "at the way investments have been made in enterprise software in an effort to create and manage value, and how systems are moving from a controlled-process approach design towards gathering and dynamically using information." (tags: oracle otn enterprisearchitecture) @vambenepe: The battle of the Cloud Frameworks: Application Servers redux? "The battle of the Cloud Frameworks has started," says William Vambenepe, "and it will look a lot like the battle of the Application Servers which played out over the last decade and a half." (tags: oracle otn cloud frameworks appserver) @ORACLENERD: COLLABORATE: Day 4 Wrap Up Oraclenerd fesses up: "The day started out with the realization that I pulled off the best (COLLABORATE - self-anointed) prank ever. Twitter was...all atwitter about the fact that Mark Rittman was Oracle's Person of the Year. Of course it wasn't true. If you look at the picture, you'll realize that he's wearing exactly the same clothes in the magazine cover as he is in real life." (tags: collaborate2010 oracleace) Oracle's Hal Stern at Cloud Expo: "We've Moved from 'What' to 'How'" | Cloud Computing Journal "Hal also spoke a bit about building 'a sustainable IT model.' By this, he said he didn't mean the various Green IT and similar efforts that 'are all about data center efficiency. I think the operational model is just as important. Many enterprises are managing a tremendous amount of complexity, and it's hard to make this sustainable.'" -- Cloud News Desk (tags: oracle cloud cloudexpo halstern) @ORACLENERD: COLLABORATE: The Beach Party "Then tiki statues somehow were incorporated into various dances" -- Oracle ACE Chet "oraclenerd" Justice (tags: oracle otn oracleace collaborate2010 oaug ioug lasvegas) David Andrews: Collaborate Day Two "Collaborate 2010 has focused on helping attendees understand what is already available and how to make more effective use of it. This does not sound exciting but it is extremely valuable. Most customers use only a small fraction of the capability of the products they already own. Helping them understand all the additional things they could be doing without buying anything more is very valuable." -- David Andrews (tags: oracle oaug collaborate2010 ioug)

    Read the article

  • New Year's resolution 2012

    Same procedure as every year... Hundreds of thousands of people have their annual new year's resolution to begin the new year. And so do I. My resolution for 2012: writing more blog articles (again). Actually, it's quite difficult to find the proper time and space to write up an article for any kind of blog, newspaper or magazine. Especially when you are very busy with daily work and fulfilling customers' demands on very tight schedules. But seriously, I'll try to keep it up with at least one or two articles per month during 2012. There are quite some good topics to write about in the queue. Cheers, JoKi

    Read the article

  • Boston: Free Java Developer Event March 3rd!

    - by Jacob Lehrbaum
    Attention Boston area developers!  Oracle has been running a series of free one-day Java Developer events in the US, Europe, and Asia since last November, and on March 3rd, this highly popular series is coming to the Westin Copley Place in Boston.  The Java Developer Day will include four tracks of sessions and hands-on labs designed for developers interested in Server, Desktop, Embedded, and core Java SE platform topics.  Technologies covered include Java EE, Java ME and Java SE (including the JDK).  From the event page, come to this free event if you are interested in: evaluating the Java platform; using other languages on the JVM; building server-side Java; constructing rich web or desktop applications; understanding the JVM and its built-in diagnostics; and making smart devices even smarter.  Check out the event page to read more and/or register.  The event is free, but space is limited, so register today!

    Read the article

  • Talent Management in Aerospace & Defense this Thursday, April 8th

    - by jay.richey
    While many industries struggle to recover from one of the most devastating recessions in history, the aerospace and defense industry plans for record growth. And key to that growth is better management of the workforce. A&D companies are currently faced with a multitude of workforce challenges including an aging and retiring workforce, knowledge gaps created as the workforce leaves, a surge in use of contingent workers, and antiquated work environments and practices that make it difficult to attract the next generation of workers. If you are in the DC area, register to attend the Oracle Aerospace and Defense Contractors Summit in Reston this Thursday, April 8th from 8am-5pm and hear Jay Richey, Oracle HCM Applications Product Marketing Director, discuss trends in the A&D talent space and smart strategies on retaining that talent. You will also hear Accenture discuss their recent survey results - Keys to Managing Human Capital within the A&D Enterprise. Register today at http://www.oracle.com/dm/10q3field/43453_ev_oracle_aerospace_apr10.html

    Read the article

  • Ray-plane intersection to find the Z of the intersecting point

    - by Jenko
    I have a rectangle in 3d space (p1, p2, p3, p4) and when the mouse rolls over it I need to calculate the exact Z of the point on the rect, given the mouse coordinates (x, y). Would a Ray-plane intersection find the Z of the intersecting point? Edit: Would this one-liner work? .. it returns the t value for the intersection, apparently the Z value. float rayPlane(vector3D planepoint, vector3D normal, vector3D origin, vector3D direction){ return -((origin-planepoint).dot(normal))/(direction.dot(normal)); }
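
    A hedged usage sketch (names follow the snippet above; vector3D's cross(), operator+ and scalar operator* are assumed to exist, and the mouse ray is assumed to come from unprojecting (x, y)):

      // Not from the question: turn the returned t into the actual intersection point.
      vector3D normal = (p2 - p1).cross(p3 - p1);               // plane normal from the rectangle's corners
      float t = rayPlane(p1, normal, rayOrigin, rayDirection);  // p1 is any point on the plane
      vector3D hit = rayOrigin + rayDirection * t;              // the point under the mouse
      float z = hit.z;                                          // t is a distance along the ray, not Z itself

    Two caveats worth noting: check that direction.dot(normal) is not (near) zero before dividing, since a ray parallel to the plane never intersects it, and a negative t means the plane lies behind the ray's origin.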

    Read the article

  • March 2010 Chicago Architects Group Wrap Up

    - by Tim Murphy
    I would like to thank everyone who came out to last night's event and especially thank Mike Vogt for the presentation. I think at first everyone glazed over since very few of us spend a lot of time with Integration Architecture and most of us live more in the application architecture space.  Learning about subjects like BPEL and BPMN was refreshing. The discussion after Mike's talk was lively and I think that everyone came away with a good idea of areas they might want to know more about.  People stuck around long after the meeting was over. If you are interested in the topic you can find the slides here. Be sure to join us next month when Matt Hidinger talks about Onion Architecture.  Details are coming soon. del.icio.us Tags: CAG,Chicago Architects Group

    Read the article

  • Is the science of Computer Science dead?

    - by Veaviticus
    Question : Is the science and art of CS dead? By that I mean, the real requirements to think, plan and efficiently solve problems seems to be falling away from CS these days. The field seems to be lowering the entry-barrier so more people can 'program' without having to learn how to truly program. Background : I'm a recent graduate with a BS in Computer Science. I'm working a starting position at a decent sized company in the IT department. I mostly do .NET and other Microsoft technologies at my job, but before this I've done Java stuff through internships and the like. I personally am a C++ programmer for my own for-fun projects. In Depth : Through the work I've been doing, it seems to me that the intense disciplines of a real science don't exist in CS anymore. In the past, programmers had to solve problems efficiently in order for systems to be robust and quick. But now, with the prevailing technologies like .NET, Java and scripting languages, it seems like efficiency and robustness have been traded for ease of development. Most of the colleagues that I work with don't even have degrees in Computer Science. Most graduated with Electrical Engineering degrees, a few with Software Engineering, even some who came from tech schools without a 4 year program. Yet they get by just fine without having the technical background of CS, without having studied theories and algorithms, without having any regard for making an elegant solution (they just go for the easiest, cheapest solution). The company pushes us to use Microsoft technologies, which take all the real thought out of the matter and replace it with libraries and tools that can auto-build your project for you half the time. I'm not trying to hate on the languages, I understand that they serve a purpose and do it well, but when your employees don't know how a hash-table works, and use the wrong sorting methods, or run SQL commands that are horribly inefficient (but get the job done in an acceptable time), it feels like more effort is being put into developing technologies that coddle new 'programmers' rather than actually teaching people how to do things right. I am interested in making efficient and, in my opinion, beautiful programs. If there is a better way to do it, I'd rather go back and refactor it than let it slide. But in the corporate world, they push me to complete tasks quickly rather than elegantly. And that really bugs me. Is this what I'm going to be looking forward to the rest of my life? Are there still positions out there for people who love the science and art of CS rather than just the paycheck? And on the same note, here's a good read if you haven't seen it before The Perils Of Java Schools

    Read the article

  • The Unspoken - The Why of GC Ergonomics

    - by jonthecollector
    Do you use GC ergonomics, -XX:+UseAdaptiveSizePolicy, with the UseParallelGC collector? The gist of GC ergonomics for that collector is that it tries to grow or shrink the heap to meet a specified goal. The goals that you can choose are maximum pause time and/or throughput. Don't get too excited there. I'm speaking about UseParallelGC (the throughput collector) so there are definite limits to what pause goals can be achieved. When you say out loud "I don't care about pause times, give me the best throughput I can get" and then say to yourself "Well, maybe 10 seconds really is too long", then think about a pause time goal. By default there is no pause time goal and the throughput goal is high (98% of the time doing application work and 2% of the time doing GC work). You can get more details on this in my very first blog. GC ergonomics The UseG1GC collector has its own version of GC ergonomics, but I'll be talking only about the UseParallelGC version. If you use this option and wanted to know what it (GC ergonomics) was thinking, try -XX:AdaptiveSizePolicyOutputInterval=1. This will print out information every i-th GC (above i is 1) about what GC ergonomics is trying to do. For example, UseAdaptiveSizePolicy actions to meet *** throughput goal *** GC overhead (%) Young generation: 16.10 (attempted to grow) Tenured generation: 4.67 (attempted to grow) Tenuring threshold: (attempted to decrease to balance GC costs) = 1 GC ergonomics tries to meet (in order) Pause time goal Throughput goal Minimum footprint The first line says that it's trying to meet the throughput goal. UseAdaptiveSizePolicy actions to meet *** throughput goal *** This run has the default pause time goal (i.e., no pause time goal) so it is trying to reach a 98% throughput. The lines Young generation: 16.10 (attempted to grow) Tenured generation: 4.67 (attempted to grow) say that we're currently spending about 16% of the time doing young GC's and about 5% of the time doing full GC's. These percentages are a decaying, weighted average (earlier contributions to the average are given less weight). The source code is available as part of the OpenJDK so you can take a look at it if you want the exact definition. GC ergonomics is trying to increase the throughput by growing the heap (so says the "attempted to grow"). The last line Tenuring threshold: (attempted to decrease to balance GC costs) = 1 says that the ergonomics is trying to balance the GC times between young GC's and full GC's by decreasing the tenuring threshold. During a young collection the younger objects are copied to the survivor spaces while the older objects are copied to the tenured generation. Younger and older are defined by the tenuring threshold. If the tenuring threshold is 4, an object that has survived fewer than 4 young collections (and has remained in the young generation by being copied to the part of the young generation called a survivor space) is younger and is copied again to a survivor space. If it has survived 4 or more young collections, it is older and gets copied to the tenured generation. A lower tenuring threshold moves objects more eagerly to the tenured generation and, conversely, a higher tenuring threshold keeps copying objects between survivor spaces longer. The tenuring threshold varies dynamically with the UseParallelGC collector. That is different from our other collectors which have a static tenuring threshold. 
GC ergonomics tries to balance the amount of work done by the young GC's and the full GC's by varying the tenuring threshold. Want more work done in the young GC's? Keep objects longer in the survivor spaces by increasing the tenuring threshold. This is an example of the output when GC ergonomics is trying to achieve a pause time goal: UseAdaptiveSizePolicy actions to meet *** pause time goal *** GC overhead (%) Young generation: 20.74 (no change) Tenured generation: 31.70 (attempted to shrink) The pause goal was set at 50 millisecs and the last GC was 0.415: [Full GC (Ergonomics) [PSYoungGen: 2048K-0K(26624K)] [ParOldGen: 26095K-9711K(28992K)] 28143K-9711K(55616K), [Metaspace: 1719K-1719K(2473K/6528K)], 0.0758940 secs] [Times: user=0.28 sys=0.00, real=0.08 secs] The full collection took about 76 millisecs so GC ergonomics wants to shrink the tenured generation to reduce that pause time. The previous young GC was 0.346: [GC (Allocation Failure) [PSYoungGen: 26624K-2048K(26624K)] 40547K-22223K(56768K), 0.0136501 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] so the pause time there was about 14 millisecs so no changes are needed. If trying to meet a pause time goal, the generations are typically shrunk. With a pause time goal in play, watch the GC overhead numbers and you will usually see the cost of setting a pause time goal (i.e., throughput goes down). If the pause goal is too low, you won't achieve your pause time goal and you will spend all your time doing GC. GC ergonomics is meant to be simple because it is meant to be used by anyone. It was not meant to be mysterious and so this output was added. If you don't like what GC ergonomics is doing, you can turn it off with -XX:-UseAdaptiveSizePolicy, but be forewarned that you have to manage the size of the generations explicitly. If UseAdaptiveSizePolicy is turned off, the heap does not grow. The size of the heap (and the generations) at the start of execution is always the size of the heap. I don't like that and tried to fix it once (with some help from an OpenJDK contributor) but it unfortunately never made it out the door. I still have hope though. Just a side note. With the default throughput goal of 98% the heap often grows to its maximum value and stays there. Definitely reduce the throughput goal if footprint is important. Start with -XX:GCTimeRatio=4 for a more modest throughput goal (20% of the time spent in GC). A higher value means a smaller amount of time spent in GC (as the throughput goal).
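
    Putting the flags mentioned in the post together, a hypothetical invocation (heap size, goal values and the class name are placeholders, not recommendations) might look like:

      java -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy -XX:MaxGCPauseMillis=50 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyOutputInterval=1 -Xmx2g MyApp

    Here -XX:MaxGCPauseMillis sets the pause time goal discussed above and -XX:GCTimeRatio=4 asks for no more than roughly 20% of total time in GC; drop either flag to fall back to the defaults described in the post.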

    Read the article

  • Oracle Unveils Oracle Fusion Tap for the iPad

    - by Richard Lefebvre
    Oracle Fusion Tap: Productivity Amplified Anywhere, Anytime Oracle today announced the availability of Oracle Fusion Tap, a native iPad application that redefines the level of productivity users can achieve while on-the-go.   Oracle Fusion Tap runs off cloud-based enterprise applications and across Oracle Application Cloud Services, requiring only one simple Apple App Store installation.   Automatically personalized to each user, Oracle Fusion Tap gives users exactly what they need at their fingertips and provides the long-sought, key functionalities to remain productive and to keep business moving, even when away from the desk.   Designed specifically for the iPad and the mobile workforce, Oracle Fusion Tap provides access with or without an Internet connection.   By grouping functional capabilities into three core areas of "connect," "analyze," and "work," users can easily and directly connect with what they need in the app, complete activities, and move on.   As organizations strive for a lean and agile workforce, Oracle Fusion Tap helps users find and make connections with the right people at the right time, obtaining answers to questions quickly and removing roadblocks faster.   Oracle Fusion Tap also provides users with secure access to actionable performance indicators and day-to-day management of their workforce and sales force automation. Supporting Quotes "Both the enterprise and technology providers must recognize the need to innovate and adapt for the increasing mobility of the workforce—not just for sales teams, but across the organization," said Carter Lusher, Research Fellow and Chief Analyst of Enterprise Applications Ecosystem, Ovum. "A mobile application that quickly and powerfully allows employees to make connections, analyze data, and complete activities at any time and wherever they may be located drives new levels of business value and enhances efficiency. Frankly, mobile access is no longer a 'nice to have' but a 'must have.'"   "The mobile workforce is a business reality, and Oracle Fusion Tap is an example of how Oracle delivers mobile and cloud innovations that fundamentally improve productivity and how we work," said Chris Leone, Senior Vice President of Application Development, Oracle. "With Oracle Fusion Tap users will have an all-in-one, easily extensible app that puts mission-critical data and colleague connection at their fingertips." Supporting Resources Oracle Fusion Tap Oracle Fusion Tap on App Store Oracle Fusion Tap YouTube Video Oracle CRM on Social Media @OracleCRM OracleCRM on Facebook OracleCRM on YouTube

    Read the article

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load with the function aiImportFileExWithProperties a big model in my software (around 200.000 vertices), it takes more than one minute. If I try to load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and Assimp view are using the dll version of the library at 64 bit, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software // default pp steps unsigned int ppsteps = aiProcess_CalcTangentSpace | // calculate tangents and bitangents if possible aiProcess_JoinIdenticalVertices | // join identical vertices/ optimize indexing aiProcess_ValidateDataStructure | // perform a full validation of the loader's output aiProcess_ImproveCacheLocality | // improve the cache locality of the output vertices aiProcess_RemoveRedundantMaterials | // remove redundant materials aiProcess_FindDegenerates | // remove degenerated polygons from the import aiProcess_FindInvalidData | // detect invalid model data, such as invalid normal vectors aiProcess_GenUVCoords | // convert spherical, cylindrical, box and planar mapping to proper UVs aiProcess_TransformUVCoords | // preprocess UV transformations (scaling, translation ...) aiProcess_FindInstances | // search for instanced meshes and remove them by references to one master aiProcess_LimitBoneWeights | // limit bone weights to 4 per vertex aiProcess_OptimizeMeshes | // join small meshes, if possible; aiProcess_SplitByBoneCount | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader 0; cout << "Loading " << pFile << "... "; aiPropertyStore* props = aiCreatePropertyStore(); aiSetImportPropertyInteger(props,AI_CONFIG_IMPORT_TER_MAKE_UVS,1); aiSetImportPropertyFloat(props,AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE,80.f); aiSetImportPropertyInteger(props,AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT); aiSetImportPropertyInteger(props,AI_CONFIG_GLOB_MEASURE_TIME,1); //aiSetImportPropertyInteger(props,AI_CONFIG_PP_PTV_KEEP_HIERARCHY,1); // Call ASSIMPs C-API to load the file scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(), ppsteps | /* default pp steps */ aiProcess_GenSmoothNormals | // generate smooth normal vectors if not existing aiProcess_SplitLargeMeshes | // split large, unrenderable meshes into submeshes aiProcess_Triangulate | // triangulate polygons with more than 3 edges //aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space aiProcess_SortByPType | // make 'clean' meshes which consist of a single typ of primitives 0, NULL, props); aiReleasePropertyStore(props); if(!scene){ cout << aiGetErrorString() << endl; return 0; } this is the relevant piece of code in assimp view code // default pp steps unsigned int ppsteps = aiProcess_CalcTangentSpace | // calculate tangents and bitangents if possible aiProcess_JoinIdenticalVertices | // join identical vertices/ optimize indexing aiProcess_ValidateDataStructure | // perform a full validation of the loader's output aiProcess_ImproveCacheLocality | // improve the cache locality of the output vertices aiProcess_RemoveRedundantMaterials | // remove redundant materials aiProcess_FindDegenerates | // remove degenerated polygons from the import aiProcess_FindInvalidData | // detect invalid model data, such as invalid normal vectors aiProcess_GenUVCoords | // convert spherical, cylindrical, box and planar mapping to proper UVs aiProcess_TransformUVCoords | // preprocess UV transformations (scaling, translation ...) 
aiProcess_FindInstances | // search for instanced meshes and remove them by references to one master aiProcess_LimitBoneWeights | // limit bone weights to 4 per vertex aiProcess_OptimizeMeshes | // join small meshes, if possible; aiProcess_SplitByBoneCount | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader 0; aiPropertyStore* props = aiCreatePropertyStore(); aiSetImportPropertyInteger(props,AI_CONFIG_IMPORT_TER_MAKE_UVS,1); aiSetImportPropertyFloat(props,AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE,g_smoothAngle); aiSetImportPropertyInteger(props,AI_CONFIG_PP_SBP_REMOVE,nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0 ); aiSetImportPropertyInteger(props,AI_CONFIG_GLOB_MEASURE_TIME,1); //aiSetImportPropertyInteger(props,AI_CONFIG_PP_PTV_KEEP_HIERARCHY,1); // Call ASSIMPs C-API to load the file g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName, ppsteps | /* configurable pp steps */ aiProcess_GenSmoothNormals | // generate smooth normal vectors if not existing aiProcess_SplitLargeMeshes | // split large, unrenderable meshes into submeshes aiProcess_Triangulate | // triangulate polygons with more than 3 edges aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space aiProcess_SortByPType | // make 'clean' meshes which consist of a single typ of primitives 0, NULL, props); aiReleasePropertyStore(props); As you can see the code is nearly identical because I copied it from assimp view. What could be the reason for such a difference in performance? Both programs are using the same DLL, Assimp64.dll (compiled on my computer with VC++ 2010 Express), and the same function aiImportFileExWithProperties to load the model, so I assume that the actual code employed is the same. How is it possible that the function aiImportFileExWithProperties is 100 times slower when called by my software than when called by assimp view? What am I missing? I am not good with DLLs, dynamic and static libraries, so I might be missing something obvious. ------------------------------ UPDATE I found out the reason why the code is going slower. Basically I was running my software with "Start debugging" in VC++ 2010 Express. If I run the code outside VC++ 2010, I get the same performance as assimp view. However, now I have a new question. Why does the DLL perform slower under VC++ debugging? I compiled it in release mode without debugging information. Is there any way to have the DLL go fast in debug mode, i.e., without debugging the DLL itself? Because I am interested in debugging only my own code, not the DLL, which I assume is already working fine. I do not want to wait 2 minutes every time I want to load my software to debug. Does this request make sense?

    Read the article

  • Live Webcast for Skire Customers - 8 November

    - by Melissa Centurio Lopes
    Join our Important Customer Briefing live webcast with Oracle Executive Mike Sicilia to learn more about the product strategy for the combined Oracle Primavera and Skire offering. Mike Sicilia, Senior Vice President and General Manager, Oracle Primavera Global Business Unit, invites you to join him for an exclusive update on Oracle’s acquisition of substantially all of Skire’s assets. Don’t miss this special live webcast on November 8th. Attend this online event and listen to Mike Sicilia share with you: the strategic reasons behind Oracle’s acquisition of substantially all of Skire’s assets and what it means to you and your organization; Oracle’s vision to deliver the most comprehensive Enterprise Project Portfolio Management (EPPM) offering to manage the complete project lifecycle, from capital planning and construction to operations and maintenance; exciting new product releases to help organizations manage their projects and facilities with more predictability and financial control, improving profitability and operational efficiency; and Oracle’s consistent commitment to customer success and product support. Save your seat: register now to attend this exclusive online event and learn how the combination of Oracle Primavera and Skire can help your organization succeed. For more information about the combination of Oracle and Skire, please visit oracle.com/skire

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer than it appears. Both concepts intersect at various points: BigData and Return on Investment of Marketing Campaigns In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will substitute the failed attempts to justify marketing investments on social media in terms of followers and likes. He continues, "the measurement models applied by marketers on TV Campaigns don't work on social"; we need to study the data with fresh eyes and maybe then we will start understanding and measuring brand engagement. Social CRM and BigData The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and define appropriate strategies to contact potential customers. Gartner says that the Market for Social CRM is on pace to surpass $1 Billion in Revenue by Year-End 2012, but in the words of Zach Hofer-Shall, Analyst at Forrester Research, "Social customer relationship management is hard" (The Social CRM Arms Race Heats). To succeed, brands need three things: investing in new social tools, investing in consultancy, and investing in infrastructure for massive data storage and analysis. Neither CeX nor BigData is an easy and cheap win. But what are the customer benefits of such investments? Big Data and Brand Engagement Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious because of not having the same fast multichannel experience with their services' marketers or preferred goods providers as the one they find on their social media. Yes, I know you have read this before; me too. But it is real. The motto of the Customer Experience philosophy of providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative. In summary, I believe that using BigData Analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving: efficiency and marketing cost-savings; growing the customer base; and increasing customer conversion and retention. In a word: The Direction of Future Marketing.

    Read the article

  • R2 and Idera Idera SQL Safe (Freeware Edition)

    - by DavidWimbush
    Good news: the Freeware edition of Idera SQL Safe works on R2. You might not care but I certainly do. Here's why:  In September last year I started using Idera SQL Safe (the Freeware Edition) to get backup compression on my SQL 2005 servers. It seemed like a good idea at the time - it was free and my backups ran much faster and took up much less disk space. I really thought I'd actually scored a free lunch. Until they discontinued the product. I was thinking about what to do when I heard that R2 Standard would include native backup compression so I've just been keeping my fingers crossed since then. So I installed R2 Developer on my laptop, installed SQL Safe and kicked off a restore with it. No problem. Phew! Now I won't have to do a special, non-compressed backup and restore when we migrate.

    Read the article

  • My First Post @ geekswithblogs

    - by sathya
    Dear Friends, Here is my first post on geekswithblogs. I am happy that I have got a separate space here to blog. I am an MCTS certified Professional in .Net 2.0 Web applications, working as a Senior Software Engineer. I am willing to share my knowledge on whatever topics I know. I am also an active presenter/speaker in the Microsoft Developer User Group HyderabadTechies, and I have presented many online sessions there. I keep myself updated on the latest Microsoft technologies. You can see my posts here on the following subjects: C# ASP.NET SQL Server SQL Server Integration Services (SSIS) SQL Server Analysis Services (SSAS) SQL Server Reporting Services (SSRS) I have a personal blog too where I share my knowledge. Please take note of it: http://cybersathya.blogspot.com You will see me here often, posting updates on technologies, the technical challenges I have faced, and the solutions for them. Stay Tuned !!! Regards Sathya Narayanan Srinivasan

    Read the article

  • links for 2011-03-08

    - by Bob Rhubart
    The Empowered Business "Someone needs to be the enterprise parent that asks the question, “do you really need that?” It may be a shiny new thing, but does it make a difference in the ability to accomplish the strategy and goals?" - Enterprise Architect Todd Biske (tags: enterprisearchitecture) Knowledge Workers in the British Raj "While we’ve used technology to change business, business has also evolved to the point that it’s changing how we think about and use technology." - Peter Evans Greenwood (tags: enterprisearchitecture enterprise2.0) Arun Gupta, Miles to go ...: OTN Developer Day Boston 2011 - Slides & Trip Report Arun Gupta shares slides from his Developer Day presentations. (tags: oracle otn java) Use WLST to Delete All JMS Messages From a Destination (James Bayer's Blog) James Bayer responds to a question. (tags: oracle otn weblogic jms) Triangle Circle Square: Apex in the Amazon Cloud Scott Wesley shares several links to resources covering Oracle Apex on an Amazon EC2 instance. (tags: oracle apex ec2 amazon cloud) William Vambenepe: Reading IBM's proposed standard for Cloud Architecture The always entertaining William Vambenepe gives IBM's proposed Cloud standards the full Ebert. (tags: oracle cloud ibm standards) Government Information Group Cloud Computing Research Study "The twin pressures of reduced budgets and the need for greater efficiency have led the federal government to strongly promote cloud computing as a solution whenever possible." (tags: cloudcomputing cloud) The Ron Batra Blog: Technology Whispers: Top 10 Reasons to go ExaData "Continuing my exploration of ExaData, I thought I'd take a minute to consolidate my thoughts into key reasons for which Oracle ExaData could be a good fit for your needs." - Oracle ACE Director Ron Batra (tags: oracle oracleace exadata) Oracle WebCenter: Composite Applications & Mash-Ups (Oracle Enterprise 2.0 Blog) "The new Business Mash-up editor allows business users to take any Oracle Application or 3rd party application and wire the backend data sources or APIs to a rich set of visualizations and reuse them in mashups." (tags: oracle webcenter enterprise2.0) Antonio Romero: Great Discussion of ETL and ELT Tooling in TDWI Linkedin Group Antonio says: "There’s a great discussion of ETL and ELT tooling going on in the official TDWI Linkedin group, under the heading 'How Sustainable is SQL for ETL?' It delves into a wide range of topics." (tags: oracle linkedin etl elt) YouTube - Bunny Inc. - Episode 1. Mr. CIO meets Mr. Executive Manager Yes, it's a commercial. But it's well done and it's funny. (tags: e20 enterprise2.0 webcenter) Markus Eisele: Both Weblogic and Glassfish are strategic products for Oracle Oracle ACE Director Markus Eisele shares selected quotes pulled from the recent TechCast Live interview with Oracle's Anil Gaur and Adam Leftik (tags: oracle java weblogic glassfish) How to become an Oracle SOA expert? (SOA Partner Community Blog) Jurgan Kress shares info and links for those interested in capitalizing on SOA. (tags: oracle soa)

    Read the article

  • When to use shared libraries for a web framework?

    - by CamelBlues
    tl;dr: I've found myself hosting a bunch of sites running on the same web framework (symfony 1.4). Would it be helpful if I moved all of the shared library code into the same directory and shared it across the sites? more I see some advantages to this: Each site takes up less disk space Library updates (an unlikely scenario) can take place across all sites I also see some disadvantages, mostly in terms of a single point of failure and the inability to have sites using different versions of the framework. My real concern, though, is performance. I hypothesize that I will see a performance increase, since the PHP code will already be cached for all sites when they call the framework. Is this a correct hypothesis?

    Read the article

  • Sharing the effect

    - by Mohammad Ahmed
    My problem is: if I load 2 models (the same zombie model) and give them the same effect, I get the following error: for(int i =0 ; i<2 ; i++) { dwarfModel[i].model = Content.Load<Model>("Models//dwarf//dwarfmodel"); dwarfModel[i].effect = Content.Load<Effect>("Models//dwarf//skinFX"); dwarfModel[i].setEffect(camera , game); dwarfModel[i].setModelAnimationStatus(game); dwarfModel[i].intializeChrachterController(new Vector3(0, 0, 0), 20, 10, 2000, 2000, 80, 40); space.Add(dwarfModel[i].chrachterController); dwarfModels.Add(dwarfModel); }

    Read the article

  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The Simple, Reliable, Affordable Path to High-Availability Databases Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that’s reliable and affordable. That’s why Oracle created the new Oracle Database Appliance—a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world’s most popular database - Oracle Database 11g  - with system software, servers, storage and networking in a single box. Business gets the benefit of a reliable, secure and highly available database to support applications and maintain continuity – as well as groundbreaking ease of use. But that is not all, with the support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases. The benefits?   Unmatched performance, reliability & security for your data that’s there when you need it – which is all the time. Fast installation, simple deployment, easy management. Out of the box. Significant cost savings & reduced risk and complexity compared to integrating all the elements yourself. Ongoing lower total cost of ownership with multiple automated support, detection & correction functions that also save you time.   Discover the Oracle Database Appliance Value Proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. Agenda: Oracle Database& Engineered Systems Innovation. What’s the Oracle Database Appliance ? Oracle Database Appliance Value Proposition. Oracle Database Appliance with Database Options Oracle Database Appliance Partners Business Delivery Format This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24hours prior to start time may not receive confirmation to attend. Duration: 1 hour Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com Visit regularly our ISV Migration Center blog Or Follow us @oracleimc to learn more on Oracle Technologies as well as upcoming partner webcasts and events.

    Read the article

< Previous Page | 158 159 160 161 162 163 164 165 166 167 168 169  | Next Page >