Search Results

Search found 8328 results on 334 pages for 'dour high arch'.


  • links for 2011-03-04

    - by Bob Rhubart
    - Joao Oliveira: Forms and Reports 11g Fusion Startup Script "After Fusion Middleware 11g Linux installation (Weblogic, Forms, Reports, Discoverer and Portal or others) probably most of the newcomers will wonder how to create a startup script to start the Weblogic managed Servers when the server starts up or reboots." (tags: oracle fusionmiddleware weblogic)
    - Anthony Shorten: SOA Suite Integration: Part 3: Loading files Anthony says: "One of the most common scenarios in SOA Integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process." (tags: oracle otn soa soasuite)
    - Francisco Munoz Alvarez: Playing with Oracle 11gR2, OEL 5.6 and VirtualBox 4.0.2 (1st Part) Oracle ACE Francisco Munoz Alvarez kicks off a tutorial on creating an Oracle Database 11gR2 instance using Oracle VirtualBox and OEL. (tags: oracle database virtualbox virtualization)
    - ORACLENERD: VirtualBox and Shared Folders Oracle ACE Chet Justice shares some tips. (tags: oracle otn oracleace virtualization virtualbox)
    - Chris Muir: Check out the ADF content at this year's ODTUG KScope11 conference Oracle ACE Director Chris Muir shares information on this year's ODTUG Kaleidoscope event in Long Beach, CA, June 26-30. (tags: oracle otn oracleace odtug adf)
    - Edwin Biemond: Setting a virtual IP on a specific Network interface with WebLogic 10.3.4 PS3 Edwin says: "If you want High Availability in WebLogic you need to enable the WebLogic server migration, configure the nodemanager, use a virtual / floating IP in your managed servers and channels." (tags: oracle otn oracleace highavailability weblogic virtualization)
    - Markus Eisele: High Performance JPA with GlassFish and Coherence - Part 3 Markus says: "In this third part of my four part series I'll explain strategy number two of using Coherence with EclipseLink and GlassFish. This is all about using Coherence as Second Level Cache (L2) with EclipseLink." (tags: oracle otn oracleace glassfish coherence)
    - Michel Schildmeijer: Set up Oracle ESB monitoring with Enterprise Manager Grid Control "Monitoring your Oracle SOA Suite environment...can be very complicated, but if you are using Grid Control, Oracle provides you the SOA Management Pack. Unfortunately this SOA Management Pack has pretty detailed OOTB info about BPEL, but for ESB you won't find any OOTB metrics." (tags: oracle otn soa grid servicebus)

    Read the article

  • Munq is for web, Unity is for Enterprise

    - by oazabir
    The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It's a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it's CPU hungry. In fact, it's so CPU hungry that it becomes impossible to make it work at internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day, and I found unusually high CPU usage. Here's why: I did some CPU profiling on my open source project Dropthings and found that the highest CPU consumer was Unity's Resolve<>(). There's no exotic use of Unity in the project, just straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() was consuming significantly high CPU even after the site was warm and had been running for a while. Then I tried Munq, a basic dependency injection container. It has everything you will usually need in a regular project, and it claims to be the fastest DI container out there. So I converted all the Unity code to Munq in Dropthings, ran a CPU profile again, and voila! There's no trace of any Munq call anywhere in the profile, which shows Munq is a lot faster than Unity.
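    As a minimal sketch of the two registration styles being compared, assuming the Unity 2.x API (Microsoft.Practices.Unity) and Munq's IocContainer API; IWidgetService/WidgetService are placeholder types, not code from Dropthings:

        using System;
        using Microsoft.Practices.Unity; // Unity Application Block
        using Munq;                      // Munq IoC container

        public interface IWidgetService { void Render(); }
        public class WidgetService : IWidgetService { public void Render() { } }

        class Demo
        {
            static void Main()
            {
                // Unity: type-based registration; Resolve<>() does the
                // reflection-heavy work measured in the profile above.
                var unity = new UnityContainer();
                unity.RegisterType<IWidgetService, WidgetService>();
                var w1 = unity.Resolve<IWidgetService>();

                // Munq: registration stores a factory lambda, so Resolve<>()
                // is essentially a dictionary lookup plus a delegate call.
                var munq = new IocContainer();
                munq.Register<IWidgetService>(c => new WidgetService());
                var w2 = munq.Resolve<IWidgetService>();
            }
        }

    Both containers hand back the same concrete type; only the per-call resolution overhead differs.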

    Read the article

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
    With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out.

    Centrify delivers integrated software and cloud-based solutions that centrally control, secure and audit access to cross-platform systems, mobile devices and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 Federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux:

    - Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory
    - Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job
    - Get a central, global view of audited user sessions across your Oracle Linux environment

    "Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works."

    Mellanox's end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency and scalability for enterprise, high-performance cloud and Web 2.0 applications. Mellanox's interconnect solutions accelerate Oracle RAC query throughput to 50Gb/s, compared to TCP/IP-based competing solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle's Exadata deliver a 10X performance boost at 50% of the hardware cost, making it the world's leading database appliance.

    Thanks for reviewing today's partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

    Read the article

  • Monster's AI in an Action-RPG

    - by Andrea Tucci
    I'm developing an action RPG with some university colleagues. We've gotten to the monsters' AI design and we would like to implement a sort of "utility-based AI": a "thinker" assigns a numeric value to each of the monster's possible decisions, and we choose the highest-scoring one (or the most appropriate, depending on the monster's IQ) and add it to the monster's collection of decisions (like a goal-driven design pattern). One solution we found is to write a mathematical formula for each decision, with all the parameters that matter for its evaluation (so a spell decision might weigh MP, distance from the player, the player's HP, etc.). The formula also has coefficients representing aspects of the monster's behaviour (this way we can alter a formula by changing its coefficients). I've also read how "fuzzy logic" works; I was fascinated by it and by the many ways it can be extended. I was wondering how we could use this technique to simplify our AI, for instance by creating evaluations with fuzzy rules such as IF player_far AND mp_high AND hp_high THEN very_desirable (for a spell with a high casting time that consumes a lot of MP) and then 'defuzzifying' the result. This way it's also simple to create a monster behaviour by writing ad-hoc rules for every monster IQ category. But is it correct to use fuzzy logic in a game with as many parameters as an RPG has? Is there a way of merging these two techniques? Are there better AI design techniques for evaluating a monster's choices?
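    As a minimal sketch of how such a fuzzy rule could be scored in code (all names, ranges and membership shapes here are hypothetical, not from our project):

        using System;

        static class FuzzySpellScorer
        {
            // Ramp membership function: 0 below 'low', 1 above 'high',
            // linear in between.
            static double Ramp(double value, double low, double high) =>
                Math.Clamp((value - low) / (high - low), 0.0, 1.0);

            // IF player_far AND mp_high AND hp_high THEN very_desirable.
            // Fuzzy AND is modeled as Math.Min; the returned degree in [0,1]
            // is the utility the "thinker" compares across decisions.
            public static double ScoreSpell(double distance, double mp, double hp)
            {
                double playerFar = Ramp(distance, 5.0, 15.0);
                double mpHigh    = Ramp(mp, 40.0, 80.0);
                double hpHigh    = Ramp(hp, 50.0, 90.0);
                return Math.Min(playerFar, Math.Min(mpHigh, hpHigh));
            }
        }

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(FuzzySpellScorer.ScoreSpell(distance: 12, mp: 70, hp: 85));
            }
        }

    Monster IQ categories could then simply swap in different rule sets or membership ranges over the same parameters.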

    Read the article

  • ArchBeat Link-o-Rama for 11/16/2011

    - by Bob Rhubart
    - Size, Failure, and Optimization | Roger Sessions: The slide deck from Roger Sessions' keynote address at the 2nd IT Architect Regional Conference in Bogota, Colombia.
    - Webcast: Oracle Business Intelligence Mobile: Event date: Tuesday, November 29, 2011, 9 a.m. PT / 12 noon ET. Featuring Manan Goel (Director BI Product Marketing, Oracle) and Shailesh Shedge (Director BI & Analytics Practice, Ascentt).
    - Live Webinar: Solutions for MySQL High Availability (November 29): Tune into this webcast to learn how MySQL's High Availability solutions can help you minimize downtime and ensure business continuity.
    - Domain-Driven Design: Useful Models for Complex Problems | Eric Evans (@ericevans0): Eric Evans' slide deck from the recent IASA event in Spain.
    - Oracle Hardware goes social: Introducing the Oracle Hardware Social Media Hub, the new Facebook meeting place for the global hardware community. The hub now features a pioneering Q&A app called Oracle Ask the Expert, where you can ask questions and engage with Oracle experts.
    - Review: WebLogic Server 11g Administration Handbook by S. Alapati: Dr. Frank Munz, author of "Middleware and Cloud Computing," reviews the new WebLogic book by Sam Alapati and offers a quick overview of a couple of other new titles.
    - SOA All the Time; Architects in AZ; Clearing Info Integration hurdles: This week on the Architect Home Page on OTN.

    Read the article

  • MySQL Cluster 7.3 - Join This Week's Webinar to Learn What's New

    - by Mat Keep
    The first Development Milestone and Early Access releases of MySQL Cluster 7.3 were announced just a few weeks ago. To provide more detail and demonstrate the new features, Andrew Morgan and I will be hosting a live webinar this coming Thursday, 25th October, at 09.00 Pacific Time / 16.00 UTC. Even if you can't make the live webinar, it is still worth registering for the event, as you will receive a notification when the replay is available to view on-demand at your convenience. In the webinar, we will discuss the enhancements being previewed as part of MySQL Cluster 7.3, including:

    - Foreign Key Constraints: Yes, we've looked into the future and decided Foreign Keys are it ;-) You can read more about the implementation of Foreign Keys in MySQL Cluster 7.3 here (a hedged client-side sketch follows below).
    - Node.js NoSQL API: Allowing web, mobile and cloud services to query and receive result sets from MySQL Cluster, natively in JavaScript, enables developers to seamlessly couple high performance, distributed applications with a high performance, distributed persistence layer delivering 99.999% availability. You can study the Node.js / MySQL Cluster tutorial here.
    - Auto-Installer: This new web-based GUI makes it simple for DevOps teams to quickly configure and provision highly optimized MySQL Cluster deployments on-premise or in the cloud. You can view a YouTube tutorial on the MySQL Cluster Auto-Installer here.

    So we have a lot to cover in our 45-minute session. It will be time well spent if you want to know more about the future direction of MySQL Cluster and how it can help you innovate faster, with greater simplicity. Registration is open.
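    As a hypothetical sketch of exercising the new Foreign Key support from client code, here via MySQL Connector/NET (the connection string is a placeholder; in MySQL Cluster the constraint is enforced natively by the NDB storage engine):

        using MySql.Data.MySqlClient;

        class Demo
        {
            static void Main()
            {
                using (var conn = new MySqlConnection(
                    "server=localhost;database=test;uid=user;pwd=secret"))
                {
                    conn.Open();
                    // Foreign keys between NDB tables, new in Cluster 7.3.
                    var ddl = "CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDB;" +
                              "CREATE TABLE child (" +
                              "  id INT PRIMARY KEY," +
                              "  parent_id INT," +
                              "  FOREIGN KEY (parent_id) REFERENCES parent(id)" +
                              ") ENGINE=NDB;";
                    using (var cmd = new MySqlCommand(ddl, conn))
                        cmd.ExecuteNonQuery();
                }
            }
        }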

    Read the article

  • What about introduction to programming with C# via LINQPad?

    - by Gulshan
    From various questions, answers and articles on this and some other sites, I got the idea that an introductory language for programming should be high level and not very verbose. C# is one of the most heavily used high-level languages these days. It's also multi-paradigm and a descendant of C, the lingua franca of all programming languages. So I think it has the potential to be an introductory programming language. But I felt it's a bit verbose for novice learners. Then LINQPad came to mind. With LINQPad, someone can start with C# without its verbosity, because you can run a single statement, a few statements, or a standalone function, and you can also run a full source file. It also supports SQL, so it can be used for learning SQL too. And not to mention, it's free. So, what do you think about the idea of introducing programming with C# via LINQPad? Anything to watch out for? Any suggestions?
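    To make the point concrete, here is what a LINQPad "C# Statements" snippet looks like: no class, no Main, no using boilerplate to explain to a beginner. Dump() is LINQPad's built-in extension method for pretty-printing results; common namespaces such as System.Linq are pre-imported by the tool.

        var squares =
            from n in Enumerable.Range(1, 10)
            where n % 2 == 0
            select new { n, Square = n * n };

        squares.Dump(); // renders the sequence as a table in the results pane

    The same two statements pasted into a console application would need a project, a class and a Main method around them, which is exactly the verbosity being avoided.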

    Read the article

  • After update, flash plugin playing video too fast or too slow

    - by John H
    Last night I did an update and reboot. After that, I couldn't reliably play any flash videos. They would either go too fast or stutter (as if they were buffering every 2 seconds). This occurs in both Firefox and Chrome; however, I'll troubleshoot in Chrome because it's easier to enable and disable plugins at will. With PPAPI enabled (and NPAPI disabled), flash videos play at 1.5x speed and the audio is scrambled. With NPAPI enabled (and PPAPI disabled), flash videos stutter and skip, despite showing a decent buffer. Following one old thread, I went into pavucontrol and tried disabling the high definition audio controller. I also tried disabling the Totem plugin, to no effect. Version and other details:

        Linux freshdesk 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        $ cat /etc/*-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS"

        Shockwave Flash 11.3 r31, /opt/google/chrome/PepperFlash/libpepflashplayer.so, 11.3.31.331, PPAPI (out-of-process)
        Shockwave Flash 11.2 r202, /usr/lib/adobe-flashplugin/libflashplayer.so, NPAPI

        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 8400 GS] (rev a2)
        01:00.1 Audio device: NVIDIA Corporation High Definition Audio Controller (rev a1)

    Read the article

  • Computer Science Degrees and Real-World Experience

    - by Steven Elliott Jr
    Recently, at a family reunion-type event, I was asked by a high school student how important it is to get a computer science degree in order to get a job as a programmer, as opposed to relying on actual programming experience. The kid has been working with Python and the Blender project, as he's into making games and the like; it sounds like he has some decent programming chops. Now, as someone who has gone through a computer science degree, my initial response to this question is to say, "You absolutely MUST get a computer science degree in order to get a job as a programmer!" However, as I thought about this, I was unsure whether my initial reaction was due in part to my own suffering as a CS student or because I feel that this is actually the case. Now, for me, I can say that I rarely use anything I learned in college, in terms of the extremely hard math, algorithms, etc., but I did come away with a decent attitude and the willingness to work through tough problems. I just don't know what to tell this kid. I feel like I should tell him to do the CS degree, but I have hired so many programmers who majored in things like English, Philosophy, and other liberal-arts subjects, and even some who never went to college. In fact, my best developer falls into this latter category. He got started writing software for his church or something, and then it took off into a passion. So, while I know this is one of those juicy, potential-downvote questions, I am just curious what everyone else thinks about this topic. What would you tell a high school kid? Perhaps if he or she already knows a good deal of programming and loves it, a CS degree isn't necessary, and broadening one's horizons with a liberal arts degree would work just as well. I know one of the creators of the Django web framework was an American Literature major, and he is obviously a pretty gifted developer. Anyway, thanks for the consideration.

    Read the article

  • Infragistics - Now Available! 2 New Packs and Silverlight CTPs

    Infragistics® glows silver this season as we continue to innovate for the Silverlight 3 platform and deliver stockings stuffed with the high performance controls needed to quickly and easily create great user experiences in Silverlight, plus two new ICON packs guaranteed to make your applications shine.

    First Silverlight Pivot Grid Now Available: Perfect for working with multi-dimensional data, the xamWebPivotGrid™ presents decision makers with highly interactive pivoting views of business intelligence. Our new high-performance Silverlight charting control, the xamWebDataChart™, enables blazing fast updates every few milliseconds to charts with millions of data points. Both of these controls are planned for the 2010 Volume 1 release of NetAdvantage for Silverlight Data Visualization.

    Gift to Silverlight Line of Business Applications: You'll be able to deliver superior user experiences in LOB applications with Silverlight RIA services support, a ZIP compression library, a new control persistence framework, and new Silverlight data grid features like unbound columns and template layouts, plus an Office 2007-style ribbon UI for Silverlight.

    Tis the Season to Add Some Shine: The NetAdvantage ICONS Legal Pack adds a touch of legalese to any application user interface with its rich, legal-system-themed graphic icons. The NetAdvantage ICONS Education Pack supplies familiar academic icons that developers can easily add to software reaching students, educators, schools and universities. Sold in themed packs, the ICON packs already available are: Web & Commerce, Healthcare, Office Basics, Business & Finance and Software & Computing. Buy any two of the seven packs for $299 USD (MSRP), three packs for $399 USD (MSRP), or individually for $199 USD (MSRP) each.

    For more product details, contact Infragistics: +1 (800) 231-8588; in Europe (English speakers): +44 (0) 20 8387 1474; in France (French speakers): +33 (0) 800 667 307; in Germany (German speakers): 0800 368 6381; in India: +91 (80) 6785 1111

    Read the article

  • Connect to LocalDB using SQL Server Management Studio

    - by Magnus Karlsson
    I was trying to find my LocalDB database under localhost etc. in SQL Server Management Studio, but had no luck. The following led me to simply connect to it directly; kind of obvious, really, when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx

    High-Level Overview: After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties:

    - LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it and operates on data using the same T-SQL language as provided by SQL Express.
    - LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location.
    - LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down.
    - LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
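    As a minimal sketch of both connection styles from C# using System.Data.SqlClient (the .mdf path is a placeholder):

        using System.Data.SqlClient;

        class Demo
        {
            static void Main()
            {
                // 1. Connect to the automatic instance; the LocalDB process
                //    is spawned on demand as a child of this application.
                using (var conn = new SqlConnection(@"Data Source=(localdb)\v11.0"))
                {
                    conn.Open();
                }

                // 2. Attach a database file at connect time.
                var cs = @"Data Source=(localdb)\v11.0;" +
                         @"AttachDbFileName=C:\Data\MyApp.mdf;" +
                         "Integrated Security=True";
                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();
                }
            }
        }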

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly: for non-destructible, non-constructible voxel-style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If a generated mesh per chunk is the way to go, what kind of algorithm might I want to use, given that only the outer layer ever needs rendering? I'm using 3D Perlin noise for terrain generation (for overhangs, caves, etc.). The layout is fantastic, but even for around 20k visible faces, it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only rendering the top faces of my cube-like terrain, but with 20k quad instances it's still pretty sluggish (30fps on my machine). My goal is for the world to be made of quite small cubes. Where several games (e.g., Minecraft) make the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 in width/length and 9 high. With a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads. So, I have been trying to look into changing my method from instancing to mesh generation on a chunk-by-chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kinds of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it, since I don't know whether it's the better route for my situation or not. I'm also starting to doubt my need for 3D Perlin noise for terrain generation, since I won't want the kind of depth it seems best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too. Thanks, Mythics
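    For reference, a minimal sketch of the "outer layer only" idea on the mesh-generation side: walk the chunk once and emit a quad only where a solid voxel borders air, so interior faces never reach a buffer. The types here are simplified placeholders rather than XNA vertex declarations:

        using System.Collections.Generic;

        struct Quad { public int X, Y, Z, Face; }

        static class ChunkMesher
        {
            static readonly (int dx, int dy, int dz)[] Dirs =
                { (1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1) };

            public static List<Quad> Build(bool[,,] solid)
            {
                int nx = solid.GetLength(0), ny = solid.GetLength(1), nz = solid.GetLength(2);
                var quads = new List<Quad>();
                for (int x = 0; x < nx; x++)
                for (int y = 0; y < ny; y++)
                for (int z = 0; z < nz; z++)
                {
                    if (!solid[x, y, z]) continue;
                    for (int f = 0; f < 6; f++)
                    {
                        int ax = x + Dirs[f].dx, ay = y + Dirs[f].dy, az = z + Dirs[f].dz;
                        bool exposed = ax < 0 || ay < 0 || az < 0 ||
                                       ax >= nx || ay >= ny || az >= nz ||
                                       !solid[ax, ay, az];
                        if (exposed)
                            quads.Add(new Quad { X = x, Y = y, Z = z, Face = f });
                    }
                }
                // Upload once per chunk as a static vertex/index buffer and
                // never rebuild, since the terrain is non-destructible.
                return quads;
            }
        }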

    Read the article

  • Why do SEO-based code tips not appear to affect ranking?

    - by Ben
    I've been researching various methods for SEO where pages have precise titles, keywords are highlighted with h tags, and all the boxes stated in good page markup for SEO are ticked. However, when looking at some top-ranked sites on Google for key terms, they have terrible SEO-based markup: really long page titles, no h tags, limited appearance of keywords in the text, and so on. SEO analysis services rate them lower than other sites, yet these sites rank really high. Even with a low number of back-links they rank high, so I don't understand how these sites earn their position when they appear inferior to those below them, which have better markup and links. I don't want to cause trouble by mentioning sites or keywords, but looking in Google at 'executive search', it makes no sense why the roughly 5th-placed site should rank so highly, especially with all the added .swfs. The same applies for the top result for 'Japan Executive Search'. My main point is that these sites seem to have neither the important structural features stated in SEO page rating applications and general suggested best practice, nor large numbers of back-links. It makes me feel like there is no point bothering to write decent markup if it really doesn't matter. Can anyone explain how sites with such markup and few back-links can outrank well-written and well-structured sites with greater linkage? Sorry if this is a fuzzy question; I want to avoid singling out any sites as examples, but it really has me perplexed that sites which appear to ignore the suggested best practices rank so well.

    Read the article

  • Estimating Compression - Compression Advisor

    - by lsarecz
    Since Oracle Database 11g, OLTP databases can also be compressed efficiently with the Advanced Compression option. Not only is the amount of stored data cut to a half, or even a quarter, but database performance can also improve if the system is I/O bound (and it usually is). Exactly how much compression to expect from introducing Advanced Compression can be estimated very well with the Compression Advisor tool. It can estimate not only the OLTP compression ratio but, starting with version 11gR2, also the HCC compression ratio available with Exadata Database Machine, Pillar Axiom and ZFS Storage. Estimating HCC compression only requires an 11gR2 database; the special target hardware (Exadata, Pillar, ZFS) is not needed. Compression Advisor is in fact exposed through the DBMS_COMPRESSION package. The package defines 6 constants for selecting the desired compression levels:

        Constant               Type    Value  Description
        COMP_NOCOMPRESS        NUMBER  1      No compression
        COMP_FOR_OLTP          NUMBER  2      OLTP compression
        COMP_FOR_QUERY_HIGH    NUMBER  4      High compression level for query operations
        COMP_FOR_QUERY_LOW     NUMBER  8      Low compression level for query operations
        COMP_FOR_ARCHIVE_HIGH  NUMBER  16     High compression level for archive operations
        COMP_FOR_ARCHIVE_LOW   NUMBER  32     Low compression level for archive operations

    The GET_COMPRESSION_RATIO stored procedure analyzes the table to be compressed. It always analyzes a single table, or optionally one of its partitions, by making a copy of the table in a tablespace designated or created for this purpose. If the analysis is run for several compression levels at once, it makes that many copies of the table. For a good estimate (within about 5%), each table or partition should contain at least 1 million rows. In 11gR1 the GET_RATIO procedure of the DBMS_COMP_ADVISOR package was used instead, but it did not support HCC estimation. It is also worth checking out the formatting tool published on Tyler Muth's blog, which turns the Compression Advisor output into an easily readable format. Finally, here is a summary of what the Advanced Compression option includes, since it is often unclear to users what they are paying for:

    - Data Guard Network Compression
    - Data Pump Compression (COMPRESSION=METADATA_ONLY does not require the Advanced Compression option)
    - Multiple RMAN Compression Levels (RMAN DEFAULT COMPRESS does not require the Advanced Compression option)
    - OLTP Table Compression
    - SecureFiles Compression and Deduplication

    Based on this, for RMAN the default compression (BZIP2) level can be used for free, while the new ZLIB level requires the Advanced Compression option. ZLIB uses the CPU more efficiently, i.e. it is much faster, but achieves a lower compression ratio.

    Read the article

  • Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

    - by Giri Mandalika
    Oracle Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com. This solution uses the SPARC SuperCluster T4-4, Oracle's first multi-purpose engineered system. Download the free business and technical white papers, which provide significant relevant information and resources.

    What is an Optimized Solution? Oracle Optimized Solutions are fully documented architectures that have been thoroughly tested, tuned and optimized for performance and availability across the entire stack on a target platform. The technical white paper details the deployed application architecture, along with various observations ranging from installing the application on the target platform to its behavior and performance in highly available and scalable configurations.

    Oracle E-Business Suite R12 and Oracle Database 11g: Multiple Oracle E-Business Suite application modules were tested in this Oracle Optimized Solution: Financials (online: Oracle Forms and web requests), Order Management (online: Oracle Forms and web requests) and HRMS (online: web requests and payroll batch). Oracle Solaris Cluster and Oracle Real Application Clusters deliver the high availability in this solution. To understand the behavior of the architecture under peak load conditions, determine optimum utilization, verify the scalability of the solution and exercise the high availability features, Oracle engineers tested Oracle E-Business Suite and Oracle Database running together on a SPARC SuperCluster T4-4 engineered system. The test results are documented in the Oracle Optimized Solution white papers to provide general guidance for real-world deployments.

    Questions & Requests: For more information, visit the Oracle Optimized Solution for Oracle E-Business Suite page. If you are at a point where you would like to actually test a specific Oracle E-Business Suite application module on SPARC T4 systems or an engineered system such as SPARC SuperCluster, please contact the Oracle Solution Center.

    Read the article

  • Textures quality issues with Libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without any quality loss (of course, it's vector ^^). I tried to simulate my gameboard directly in Illustrator just before setting up my assets in libgdx to implement them in my game. I set all the objects at the right size, so that they fit perfectly on the XHDPI device I am running my tests on. As you can see it works great (for me at least ^^); the PNG quality is as good as expected! So I exported all my PNGs at this size, set up my assets in libgdx and built my game APK. And here is a screenshot of my gameboard (don't pay attention to the differences in placement and objects, but compare the objects present in both screenshots). As you can see, I have a loss of PNG quality in the game. It can be seen clearly on the hedgehog PNG, but also (not as obviously) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot. And I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can compare the two gameboards more clearly at these two links (look at them at 100%, displayed at high resolution): Good quality link, from Illustrator; Poor quality link, from the game. Second phase of tests: We display an object (the hedgehog) on our main menu screen to see how it looks. The thing is, it looks exactly as it is supposed to: high quality, no stray pixels. The hedgehog PNG comes from an atlas: layer.addActor(hedgehog); There is no loss of quality with this method. So we think the problem comes from the method we are using to display it on our gameboard: blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3)); The block gets its size from the vector we associate with it, but we have a loss of quality with this method.

    Read the article

  • XNA Masking Mayhem

    - by TropicalFlesh
    I'd like to start by mentioning that I'm just an amateur programmer of the past 2 years with no formal training, and I know very little about maximizing the potential of graphics hardware. I can write shaders and manipulate a multi-layered drawing environment, but I've basically stuck to minimalist pixel shaders. I'm working on putting dynamic point-light shadows in my 2D sidescroller, and have it working to a reasonable degree. Just chucking it in without serious optimizations beyond basic culling, I can get 50 or so lights onscreen at once and still hover around 100 fps. The only issue is that I'm on a very high-end machine and would like to target the game at as many platforms as I can, low and high end. The way I'm doing shadows involves a lot of masking before I can finally draw the light to my light layer. Basically, my technique for achieving such shadows is as follows. See the pics in this album: http://imgur.com/a/m2fWw#0 The dark gray represents the background tiles, the light gray represents the foreground tiles, and the yellow represents the shadow-emitting foreground tile. I'll draw the light using a radial gradient and a color of choice. I'll then exclude light from the mask by drawing some geometry extending through the tile from my point light. (I don't actually mask the light yet at this point; I'm just illustrating the technique in this image.) Finally, I'll re-include the foreground layer in my mask, as I only want shadows to collect on the background layer, and then multiply the light with its mask onto the light layer. My question is simple. How can I reduce the number of render target switches I need to do to achieve the following:

    a. Draw a mask that excludes shadows from the foreground, to its own target, once per frame.
    b. For each light that emits shadows: begin the light mask as full white; render shadow geometry as transparent with an opaque blend mode to eliminate shadowed areas from the mask; render the foreground mask back over the light mask to reintroduce light to the foreground.
    c. Multiply each light texture with its individual mask onto the main light layer.
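    For the final step (c), here is a sketch of the multiply composite in XNA 4.0, assuming it runs inside a Game subclass where spriteBatch, lightLayer (a RenderTarget2D), lightTexture and maskTexture already exist; names are placeholders for the targets described above:

        void ComposeLight()
        {
            // framebuffer = source * destination, the multiply described above.
            var multiply = new BlendState
            {
                ColorSourceBlend      = Blend.DestinationColor,
                ColorDestinationBlend = Blend.Zero
            };

            GraphicsDevice.SetRenderTarget(lightLayer);

            // 1. Draw the radial gradient light itself.
            spriteBatch.Begin();
            spriteBatch.Draw(lightTexture, Vector2.Zero, Color.White);
            spriteBatch.End();

            // 2. Multiply it by this light's mask: white keeps light,
            //    dark areas become shadow.
            spriteBatch.Begin(SpriteSortMode.Immediate, multiply);
            spriteBatch.Draw(maskTexture, Vector2.Zero, Color.White);
            spriteBatch.End();

            GraphicsDevice.SetRenderTarget(null);
        }

    One way to cut down on target switches would be to pack all the per-light masks into a single atlas-style target, so several multiplies can share one pair of Begin/End calls.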

    Read the article

  • Dummy output after upgrade from 12.04 to 12.10, even though sound card is detected

    - by user115441
    So I just recently upgraded my system from Ubuntu 12.04 to 12.10. However, when I booted into 12.10 for the first time, no sound came out of my speakers. I checked the sound settings and the Dummy Output was the only thing showing up. I used "hwinfo --sound" to check whether my sound card was actually installed, and it was:

        $ hwinfo --sound
        hal.1: read hal data
        process 2687: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 282. This is normally a bug in some application using the D-Bus library.
        libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files
        11: PCI 1b.0: 0403 Audio device
        [Created at pci.318]
        Unique ID: u1Nb._aiKlM91Nt0
        SysFS ID: /devices/pci0000:00/0000:00:1b.0
        SysFS BusID: 0000:00:1b.0
        Hardware Class: sound
        Model: "Intel 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller"
        Vendor: pci 0x8086 "Intel Corporation"
        Device: pci 0x2668 "82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller"
        SubVendor: pci 0x107b "Gateway 2000"
        SubDevice: pci 0x4040
        Revision: 0x04
        Driver: "snd_hda_intel"
        Driver Modules: "snd_hda_intel"
        Memory Range: 0x50240000-0x50243fff (rw,non-prefetchable)
        IRQ: 44 (91 events)
        Module Alias: "pci:v00008086d00002668sv0000107Bsd00004040bc04sc03i00"
        Driver Info #0:
        Driver Status: snd_hda_intel is active
        Driver Activation Cmd: "modprobe snd_hda_intel"
        Config Status: cfg=new, avail=yes, need=no, active=unknown

    I'm not sure what to do here. The only time the sound actually works is when I boot into my Windows partition and then reboot into Ubuntu, and I don't want to have to do that every time I want to use Ubuntu. I would really appreciate any help I can get.

    Read the article

  • Cannot mount USB 3 harddrive

    - by Thijs
    I cannot mount a USB 3 hard drive. When looking at dmesg I get the following errors:

        [  96.463269] usb 3-2.1: new low-speed USB device number 10 using xhci_hcd
        [  96.485777] usb 3-2.1: New USB device found, idVendor=046d, idProduct=c025
        [  96.485787] usb 3-2.1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [  96.485792] usb 3-2.1: Product: USB-PS/2 Optical Mouse
        [  96.485797] usb 3-2.1: Manufacturer: B16_b_02
        [  96.486118] usb 3-2.1: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
        [  96.490149] input: B16_b_02 USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2.1/3-2.1:1.0/input/input12
        [  96.490500] hid-generic 0003:046D:C025.0003: input,hidraw0: USB HID v1.10 Mouse [B16_b_02 USB-PS/2 Optical Mouse] on usb-0000:00:14.0-2.1/input0
        [ 114.088984] usb 3-2.3: new high-speed USB device number 11 using xhci_hcd
        [ 114.105041] usb 3-2.3: New USB device found, idVendor=13fd, idProduct=1618
        [ 114.105051] usb 3-2.3: New USB device strings: Mfr=0, Product=0, SerialNumber=0
        [ 257.531777] usb 3-2.3: USB disconnect, device number 11
        [ 258.513912] usb 3-2.4: new high-speed USB device number 12 using xhci_hcd
        [ 258.514046] usb 3-2.4: Device not responding to set address.
        [ 258.717649] usb 3-2.4: Device not responding to set address.
        [ 258.921203] usb 3-2.4: device not accepting address 12, error -71
        [ 258.937388] hub 3-2:1.0: unable to enumerate USB device on port 4

    I have tried to mount the drive on Ubuntu 12.04 and it mounts just fine there.

    Read the article

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server.

    Symptoms: The most common indicator of a B2B engine in need of tuning is a constant build-up of messages in an internal JMS queue within the B2B server. It is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in a general degradation of performance.

    Remedy: There can be many contributing factors behind a B2B server's degradation of performance. However, one of the first changes to make to the out-of-the-box default configuration is to increase the number of internal engine threads allocated within the B2B server, which is usually not suitable for high-volume messaging loads. It is therefore necessary to increase the counts for 3 types of threads by specifying the appropriate B2B server properties via the EM console, namely:

    - Inbound: b2b.inboundThreadCount
    - Outbound: b2b.outboundThreadCount
    - Default: b2b.defaultThreadCount

    The function of these threads is fairly self-explanatory. The inbound threads process the inbound messages coming into the B2B server from an external endpoint. Similarly, the outbound threads process the messages that are sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. If the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads. As in any tuning exercise, the optimal thread counts are usually reached via an iterative process; the best working combination is directly related to the system infrastructure, traffic load and several other environmental factors.

    Read the article

  • MySQL HA Events in the UK, Germany & France

    - by Bertrand Matthelié
    Oracle is running MySQL High Availability breakfast seminars in London, Düsseldorf and Paris. During these free seminars, we will review the various options and technologies at your disposal to implement highly available and highly scalable MySQL infrastructures, as well as best practices in terms of architectures. The packed agenda will include High Availability features in MySQL 5.5, MySQL Replication, MySQL Cluster, the newly released Oracle VM Template for MySQL Enterprise Edition, as well as what's new in MySQL 5.6 and MySQL Cluster 7.2, including NoSQL access methods to MySQL. There are a few places left for the seminar in London taking place tomorrow, June 29th. Register Now! Learn more and register for the event in Düsseldorf, July 13th. Learn more and register for the event in Paris, September 7th. We hope to see you there!

    Read the article

  • Oracle Solaris at OpenWorld Tokyo 2012

    - by Markus Weber
    Oracle OpenWorld Tokyo will open its doors on Wednesday, April 4, 2012, and run until Friday, April 6, 2012, in Roppongi. If you've been in Tokyo as a gaijin, or foreigner, you know exactly where that is. Many of Oracle's top executives will be there, including Larry Ellison, Mark Hurd, and John Fowler. The keynotes they are delivering will be very interesting, for sure. Now, whether you will actually be there or not, you might still find it interesting that several great Solaris-related sessions will be held there, especially as part of the "Oracle Develop" track, such as:

    - "Oracle Solaris 11 - Developers Need To Know"
    - "How to build high performance and high security Oracle Database environment with Oracle SPARC/Solaris"
    - "Oracle Solaris Tuning Contest"
    - "IT Assets preservation and constructive migration with Oracle Solaris virtualization"

    And of course John Fowler's keynote, "Server and Storage Systems Strategy". The complete schedule in English can be found here. We hope you can make it. If not, there will always be the San Francisco one.

    Read the article

  • Google Loon–A network of balloons to provide internet to everyone

    - by Gopinath
    Google, once just a super powerful search engine provider, is now venturing into a lot of interesting non-software projects: self-driving cars, glasses that beam information right onto your eyeballs, and high speed internet services at 1 gigabit per second. A recent addition to this innovative list is Google Loon, a network of flying balloons that provide internet access to remote parts of the world where it is not feasible for many governments or corporations to provide internet services. Google says there are several billion people around the world who don't have access to the internet, and Google Loon's aim is to provide internet facilities to all of them. A pilot project started a couple of days ago with the launch of 30 balloons into the stratosphere from New Zealand. These balloons fly 20 kilometers above the earth (much higher than where aeroplanes fly) and beam internet wirelessly to homes with a Loon receiver. Check out the embedded introductory video on Google Loon.

    What is in it for Google? Why is Google getting into these types of projects, and what is in it for them? Google is the gateway to the web, and the majority of people find information on the web using Google services and software. So providing internet facilities to more people means more people using Google services, which in turn contributes to Google's revenue growth. Google is not a charity; they do all these projects to earn money, just like every other corporation. The best part is that while earning money they are touching the lives of billions of people in a positive way. Just imagine everyone in the world connected and able to take informed decisions, irrespective of whether they live in developed countries or underprivileged parts of the world! Wow, that will be a beautiful day.

    Further reading: Google Loon website; Google unveils its Project Loon Wi-Fi balloons - in pictures; Google flies Internet balloons in stratosphere for a "network in the sky"; How Google Will Use High-Flying Balloons to Deliver Internet to the Hinterlands; Good discussion on Google Loon at Hacker News community

    Read the article

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART pre-failure status, several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new HDD (one partition with rsync, the other with parted cp) and then gently replaced the old HDD with the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

        Read Error Rate
        ---------------
        Normalized  108
        Worst       99
        Threshold   6
        Value       16737944

    I also got a high value on the Seek Error Rate. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

        Read Error Rate
        ---------------
        Normalized  109
        Worst       99
        Threshold   6
        Value       24792504

    As you can see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the overall assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks

    Read the article

  • Big Data Appliance X4-2 Release Announcement

    - by Jean-Pierre Dijcks
    Today we are announcing the release of the 3rd generation Big Data Appliance. Read the Press Release here.

    Software Focus: The focus for this 3rd generation of Big Data Appliance is:

    - Comprehensive and Open: Big Data Appliance now includes all Cloudera software, including Back-up and Disaster Recovery (BDR), Search, Impala and Navigator, as well as the previously included components (like CDH, HBase and Cloudera Manager) and Oracle NoSQL Database (CE or EE)
    - Lower TCO than DIY Hadoop systems
    - Simplified operations while providing an open platform for the organization
    - Comprehensive security, including the new Audit Vault and Database Firewall software, Apache Sentry and Kerberos configured out-of-the-box

    Hardware Update: A good place to start is to quickly review the hardware differences (no price changes!). On a per-node basis, the following is a comparison between the old (X3-2) and new (X4-2) hardware:

                    Big Data Appliance X3-2                     Big Data Appliance X4-2
        CPU         2 x 8-Core Intel® Xeon® E5-2660 (2.2 GHz)   2 x 8-Core Intel® Xeon® E5-2650 V2 (2.6 GHz)
        Memory      64GB                                        64GB
        Disk        12 x 3TB High Capacity SAS                  12 x 4TB High Capacity SAS
        InfiniBand  40Gb/sec                                    40Gb/sec
        Ethernet    10Gb/sec                                    10Gb/sec

    For all the details on the environmentals and other useful information, review the data sheet for Big Data Appliance X4-2. The larger disks give BDA X4-2 33% more capacity than the previous generation while adding faster CPUs. Memory for BDA is expandable to 512 GB per node and can be expanded on a per-node basis, for example for NameNodes, HBase region servers, or NoSQL Database nodes.

    Software Details: More details in terms of software and the current versions (note that BDA follows a three-monthly update cycle for Cloudera and other software):

                          Big Data Appliance 2.2 Software Stack   Big Data Appliance 2.3 Software Stack
        Linux             Oracle Linux 5.8 with UEK 1             Oracle Linux 6.4 with UEK 2
        JDK               JDK 6                                   JDK 7
        Cloudera CDH      CDH 4.3                                 CDH 4.4
        Cloudera Manager  CM 4.6                                  CM 4.7

    And as we said at the beginning, it is important to understand that all other Cloudera components are now included in the price of Oracle Big Data Appliance. They are fully supported by Oracle and available to all BDA customers. For more information: Big Data Appliance Data Sheet; Big Data Connectors Data Sheet; Oracle NoSQL Database Data Sheet (CE | EE); Oracle Advanced Analytics Data Sheet

    Read the article
