Search Results

Search found 14924 results on 597 pages for 'selector performance'.


  • For a large website developed in PHP, is it necessary to have a framework?

    - by Martin
    I am wondering whether a framework is necessary, or a must-have, if I plan to build a large website. "Large website" could mean a lot of things; here it means multiple dynamic pages (40-50 of them, with MySQL content) and a lot of visitors (roughly a million hits per month). The site will be hosted in a dedicated server environment. I know that a framework could simplify coding for a developer team, that it includes libraries, and that it has a lot of advantages. But I just feel that I don't need that. I think that learning how it works, managing it and installing it would take more time, and I could use that time to code. I write PHP the simplest way I can (with performance in mind), I try to reuse my code/functions/classes most of the time, and I make sure that if another developer joins the team, he won't be lost in the code. I am also planning to use Memcached or another PHP cache. As I said, the site will be hosted in a dedicated server environment, but it will be entirely managed by the hosting company; I am pretty sure the control panel for the basic stuff will be cPanel. For a developer like me who only knows PHP, JavaScript, HTML, CSS, MySQL and really basic server management, a framework seems too complicated. Am I wrong? Is it worth the time to learn all about it? Thank you for your opinions and suggestions.

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html
    MySQL Workbench 5.2 GA:
    • Data Modeling
    • Query (replaces the old MySQL Query Browser)
    • Administration (replaces the old MySQL Administrator)
    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/
    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html
    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance – especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/
    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html
    If you need any additional info or help, please get in touch with us. Post in our forums or leave comments on our blog pages. - The MySQL Workbench Team

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server that all the websites connect to, versus each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on, from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps the smaller databases (each containing a subset of the central master data) in sync, what sorts of issues are we likely to encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter. CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs. multiple databases; that question has been answered numerous times. The question is about the pros and cons of a deployment that manages all the websites centrally (one server) vs. trying to keep them all in sync if they each have their own DB (multiple servers). REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • Turn O&M Operations into Optimized Projects with Oracle Primavera

    - by mark.kromer
    Oracle enterprise project portfolio management with Primavera is about much more than optimizing project performance and eliminating project failure on new projects, capital programs, and the like. A very common use case that we see is small-scale, frequent and recurring projects based on ongoing operations and maintenance. As opposed to assigning resources to various activities when you are building a new network infrastructure, for example, Oracle has teamed up the Primavera and E-Business Suite teams to provide direct integration of work orders from Oracle's Enterprise Asset Management (eAM) system into Primavera P6 project schedules. So now that your network infrastructure build-out project is complete, planners and operations managers can use the world-class what-if and scheduling capabilities in the Primavera tools to assign work orders, maximize resource utilization, and reuse templates for typical O&M operations in Primavera, then share that back to the operations teams using eAM for maintenance. Also, large-scale maintenance operations on large assets in the asset lifecycle will include phase-outs, shutdowns and turnarounds - classic maintenance projects, as opposed to building something new - for which Oracle Primavera with Oracle E-Business Suite provides full coverage to optimize the ALM processes in your business. Read more about these new capabilities in the Oracle eAM data sheet.

    Read the article

  • Run a VirtualBox VM in a second X-Server with Graphic support

    - by Scindix
    I'm starting a VirtualBox VM (Windows 7) in a second X server (Ubuntu 14.04), using the following xinit script (/path/.vboximage):

        optirun VBoxManage startvm <vm name> &
        exec tinywm

    I noticed that while running VirtualBox normally under GNOME (Unity, to be precise ;-) ) I get full graphics support. But when I run it on a second X server there seem to be some problems: e.g. Windows Aero doesn't seem to work, and Chrome WebGL demos run with poorer performance. I'm not a big Windows expert, so I don't know how to check which graphics card (specification) is actually in use, but it is very obvious that something has changed when running the VM in the extra X server. Also, when I try to replace tinywm with compiz I get the Unity frame around the VM, which also seems to have no graphics acceleration (no transparency effects). So it seems that the X server doesn't have graphics acceleration at all. I have an NVidia 525M and an Intel HD3000, which are both capable of advanced graphics. I'm starting the above script with startx /path/.vboximage - :1 How could I fix that?
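
    One thing worth ruling out first (a hedged suggestion, not a confirmed fix, reusing the <vm name> placeholder from the script above): VirtualBox only exposes a 3D-capable virtual GPU when 3D acceleration is enabled per VM, and Windows Aero additionally needs the Guest Additions display driver inside the guest. The VBoxManage flags below are standard:

        # Check whether 3D acceleration is currently enabled for the VM
        VBoxManage showvminfo "<vm name>" | grep -i 3D

        # Enable it (the VM must be powered off first); Aero also requires
        # the VirtualBox Guest Additions driver inside Windows
        VBoxManage modifyvm "<vm name>" --accelerate3d on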

    Read the article

  • Welcome!

    - by mannamal
    Welcome to the Oracle Big Data Connectors blog, which will focus on posts related to integrating data on a Hadoop cluster with Oracle Database. In particular, the blog will focus on best practices, usage notes, and performance tips for using Oracle Loader for Hadoop and Oracle Direct Connector for HDFS, which are part of Oracle Big Data Connectors. Oracle Big Data Connectors 1.0 also includes Oracle R Connector for Hadoop and Oracle Data Integrator Application Adapters for Hadoop. Oracle Loader for Hadoop: Oracle Loader for Hadoop loads data from Hadoop into Oracle Database. It runs as a MapReduce job on Hadoop to partition, sort, and convert the data into an Oracle-ready format, offloading to Hadoop the processing that is typically done using database CPUs. The data is then loaded into the database by the Oracle Loader for Hadoop job (online load) or written out as Oracle Data Pump files for load and access later (offline load) with Oracle Direct Connector for HDFS. Oracle Direct Connector for HDFS: Oracle Direct Connector for HDFS is a connector for high-speed access to data on HDFS from Oracle Database. With this connector, Oracle SQL can be used to directly query data on HDFS. The data can be Oracle Data Pump files generated by Oracle Loader for Hadoop or delimited text files. The connector can also be used to load data into the database using SQL.

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a cross-platform game using Irrlicht. The game has many (around 200-500) dynamic objects flying around during play. Most of these objects are static meshes, built from 20-50 unique meshes. We created separate scene nodes for each object, each referencing its mesh instance, but the results were very much unexpected. Menu screen (150 tris - just to show the full-speed rendering performance of the 2 test computers): a) NVidia Quadro FX 3800 with 1GB: 1600 FPS on DirectX and 2600 FPS on OpenGL; b) Mac Mini with GeForce 9400M 256MB: 260 FPS on OpenGL. Now inside the game, in a test level (160 dynamic objects totalling around 10K tris): a) NVidia Quadro FX 3800 with 1GB: 45 FPS on DirectX and 50 FPS on OpenGL; b) Mac Mini with GeForce 9400M 256MB: 45 FPS on OpenGL. Obviously we don't have the option of static mesh batching, as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048px PNG texture for most of the dynamic objects, and our collision detection and other calculations hardly make any impact on FPS. So we concluded it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?
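
    One generic mitigation worth sketching (illustrative C++, not Irrlicht's API - the renderer functions below are hypothetical stand-ins): sort the per-frame render list by material so that texture/shader state changes are issued only on transitions, not per object. This does not reduce the call count by itself, but it makes each call cheaper and is the usual precursor to hardware instancing for objects that share a mesh.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct RenderItem {
            int materialId;      // texture/shader combination
            int meshId;          // one of the 20-50 unique meshes
        };

        // Stand-ins for the engine's renderer (hypothetical names).
        void bindMaterial(int id) { std::printf("bind material %d\n", id); }
        void drawMesh(int id)     { std::printf("draw mesh %d\n", id); }

        void renderAll(std::vector<RenderItem>& items) {
            // Group identical materials so state changes happen only on transitions.
            std::sort(items.begin(), items.end(),
                      [](const RenderItem& a, const RenderItem& b) {
                          return a.materialId < b.materialId;
                      });
            int bound = -1;
            for (const RenderItem& it : items) {
                if (it.materialId != bound) {
                    bindMaterial(it.materialId);   // only on material transitions
                    bound = it.materialId;
                }
                drawMesh(it.meshId);               // still one call per object
            }
        }

        int main() {
            std::vector<RenderItem> items = {{2, 7}, {1, 3}, {2, 9}, {1, 4}};
            renderAll(items);                      // binds each material once
        }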

    Read the article

  • implementing dynamic query handler on historical data

    - by user2390183
    EDIT: Refined question to focus on the core issue. Context: I have historical data about property (house) sales, collected from various sources into a centralized/cloud data source (assume info collection is handled by a third party). I am planning to develop an application to query and retrieve data from this centralized data source. Example queries - Simple: for a given XYZ postcode, what is the average price for a 3-bedroom house? Complex: what is the estimated price for a house at "DD, Some Street, XYZ Post Code" (worked out from average values of historical data filtered by various characteristics of the house: postcode, number of bedrooms, total area, and deeper attributes like building type, year built, features)? In addition to the average price, the application should support other property info - maximum or minimum price, etc., and a trend (graph) of a selected property attribute over a period of time. Hence, the queries should not force the search through a primary key or a few fixed fields. In other words, queries can be: What is the change in 3-bedroom house prices (irrespective of location) over the last 30 days? What kind of properties can we get for price X (irrespective of location or house type)? The challenge I have is identifying the domain (BI/data analytics, DB design, DB query interfaces, DW, or something else) this problem (dynamic queries on historical data) belongs to, so that I can explore further. My findings so far (I could be wrong on the following, so please correct me if you think so): I briefly read about BI/data analytics - I think it is a heavyweight solution for my problem and has scalability issues. DB design - as I understand it, an RDBMS works well if you know the data model at design time, but I expect the attributes of properties and other entities (users) that I am going to bring in to evolve quickly, so maintenance would be an issue; and with multiple users executing queries at the same time, performance would be a bottleneck. Other options like graph DBs (http://www.tinkerpop.com/) seem a bit complex - they are good, but using such general-purpose tools feels like doing assembly programming to solve my problem. Big-data solutions are for analysing data from multiple unrelated domains. So, any suggestion on the space this problem fits in? (Especially if you have design/implementation experience with the back end of property-listing or similar portals.)
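
    For the "simple" query above, a sketch of the dynamic filter layer this implies - Python with SQLite, and entirely hypothetical table/column names - might look like the following. The point is that the filters arrive at run time, so the WHERE clause is assembled from whatever attributes the user supplies (values parameterized, column names restricted to a known set):

        import sqlite3

        ALLOWED = {"postcode", "bedrooms", "area"}   # known filterable columns

        def average_price(conn, **filters):
            """Average sale price for an arbitrary combination of attributes.
            Table/column names are hypothetical - a sketch, not a schema."""
            where, params = [], []
            for column, value in filters.items():
                if column not in ALLOWED:            # never interpolate untrusted names
                    raise ValueError(column)
                where.append(f"{column} = ?")
                params.append(value)
            sql = "SELECT AVG(price) FROM sales"
            if where:
                sql += " WHERE " + " AND ".join(where)
            return conn.execute(sql, params).fetchone()[0]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE sales (price REAL, postcode TEXT, bedrooms INT, area REAL)")
        conn.executemany("INSERT INTO sales VALUES (?, ?, ?, ?)",
                         [(250000, "XYZ", 3, 90.0), (310000, "XYZ", 3, 110.0)])
        print(average_price(conn, postcode="XYZ", bedrooms=3))   # -> 280000.0

    An entity-attribute-value table or a document store is the usual answer when the attribute set itself must evolve without schema migrations - which is the part of the question that pushes it toward the warehouse/analytics space.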

    Read the article

  • StreamInsight V2.0 Released!

    - by Roman Schindlauer
    The StreamInsight team is proud to announce the release of StreamInsight V2.0! This is the version that ships with SQL 2012, and as such it has already been available through Connect to SQL CTP customers since December. As part of the SQL 2012 launch activities, we are now making V2.0 available to everyone, following our tradition of providing a separate download page. StreamInsight V2.0 includes a number of stability and performance fixes over its predecessor, V1.2. Moreover, it introduces a dependency on the .NET Framework 4.0, as well as on SQL 2012 license keys. For these reasons, we decided to bump the major version number, even though V2.0 does not add new features or API surface. It can be regarded as a stepping stone to the upcoming release 2.1, which will contain significant new APIs (that will depend on .NET 4.0). Head over here to download StreamInsight V2.0. The updated Books Online can be found here. Update: For instructions on how to make your existing application work against the new bits without recompilation, see here. Regards, The StreamInsight Team

    Read the article

  • What will be the better way for data retrieval on application that needs to handle limited amount of data?

    - by Milanix
    Just moved this question from Stack Overflow, since including my code snippets would make it really long. I am interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update check runs automatically or manually on a daily basis depending on user preference, the actual version update happens only once every few months or so, since it is made by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains roughly (number of groups) x (days in a week) x (number of schedules); the group count is usually 6 and the schedule count usually 2, so there would usually be only around 100 strings. Now, although I use SQLite at the moment, I want to know how to handle the database update. Should I show a progress dialog saying the application is updating and exit the app when it's done? Since my updates are infrequent, I don't think this will really harm the user experience, but is there a better way to do it? I don't want the update to run while the user is searching (which also uses the database) - this causes a "database already open" exception; at least, I have faced this problem before. Is it better to parse the XML every time the user wants to view something, or to use SQLite? And since I make a lot of use of adapters in my app to create lists, will that degrade performance?
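
    A sketch of the version-gated update described above - plain Java with hypothetical names (real Android code would go through SQLiteOpenHelper and run off the UI thread), serializing database access so the search path never collides with the update:

        // Sketch only: class and method names are hypothetical.
        public class ScheduleUpdater {
            private int localVersion;               // persisted e.g. in SharedPreferences
            private final Object dbLock = new Object();

            public void maybeUpdate(int remoteVersion, String scheduleXml) {
                if (remoteVersion <= localVersion) {
                    return;                          // the common case: nothing to do
                }
                synchronized (dbLock) {              // search and update share this lock,
                    replaceSchedule(scheduleXml);    // avoiding "database already open"
                    localVersion = remoteVersion;
                }
            }

            private void replaceSchedule(String xml) {
                // Parse the XML and rewrite the ~100 rows inside one transaction.
            }

            public static void main(String[] args) {
                ScheduleUpdater u = new ScheduleUpdater();
                u.maybeUpdate(3, "<schedule/>");     // first sync stores version 3
                u.maybeUpdate(3, "<schedule/>");     // no-op: same version
            }
        }

    With only ~100 strings, rewriting the table is fast enough that a background thread plus this kind of lock should make a blocking progress dialog unnecessary.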

    Read the article

  • Randomly and uniquely iterating over a range

    - by Synetech
    Say you have a range of values (or anything else) and you want to iterate over the range and stop at some indeterminate point. Because the stopping value could be anywhere in the range, iterating sequentially is no good: it causes the early values to be accessed more often than later values (which is bad for things that wear out), and it also reduces performance, since extra values must be traversed before the right one is found. Randomly iterating is better because it will (on average) increase the hit rate, so that fewer values have to be accessed before finding the right one, and it also distributes the accesses more evenly (again, on average). The problem is that the standard method of randomly jumping around will access some values multiple times, and has no automatic way of determining when every value has been checked and thus the whole range has been exhausted. One simplified and contrived solution would be to make a list of each value, pick one at random, then remove it; each time through the loop, you pick one from the set of remaining items. Unfortunately this only works for small lists. As a (forced) example, say you are creating a game where the program tries to guess what number you picked and shows how many guesses it took. The range is 0-255, and instead of asking "Is it 0? Is it 1? Is it 2?...", you have it guess randomly. You could create a list of the 256 numbers, pick randomly and remove it. But what if the range were 0-2^32? You can't really create a 4-billion-item list. I've seen a couple of RNG implementations that are supposed to provide a uniform distribution, but none that are also guaranteed to be unique, i.e., no repeated values. So is there a practical way to randomly and uniquely iterate over a range?
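
    One practical answer, sketched below in C++: don't store the values, generate a permutation of the range on the fly. For a power-of-two range, the linear congruential step x = (a*x + c) mod 2^k visits every value exactly once per cycle whenever c is odd and a % 4 == 1 (the Hull-Dobell conditions). The constants below are the well-known Numerical Recipes pair; any pair meeting those conditions works.

        #include <cstdint>
        #include <iostream>

        // Full-period iterator over [0, 2^32): every value appears exactly
        // once per cycle. c is odd and a % 4 == 1, satisfying Hull-Dobell
        // for modulus 2^32; uint32_t arithmetic wraps mod 2^32 automatically.
        class UniqueRandomRange {
        public:
            explicit UniqueRandomRange(uint32_t seed) : x_(seed) {}
            uint32_t next() {
                x_ = x_ * 1664525u + 1013904223u;
                return x_;
            }
        private:
            uint32_t x_;
        };

        int main() {
            UniqueRandomRange r(12345);
            for (int i = 0; i < 5; ++i)
                std::cout << r.next() << '\n';   // 5 distinct values, no repeats
        }

    For a range N that is not a power of two, iterate over the next power of two and skip values >= N; each surviving value still appears exactly once per cycle. (The low bits of an LCG are weakly random, so a Feistel-style permutation of the index gives a better-looking shuffle at slightly more cost.)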

    Read the article

  • Have you Been Missing the 'About This Record' Functionality on the Customer Form...?

    - by MargaretW
    Do you have fond memories of the 'Help -> About This Record' functionality that used to be available in the old Customer form - when it was a form, and not a Java HTML screen? Back in Release 11i, we had the ability to identify when the customer record had last been updated and by whom. When some forms were replaced by Java HTML screens, you could identify some of this information via the 'About this Page' hyperlink at the bottom left-hand corner of the HTML page. You could enable this via the FND: Diagnostics profile option, but many customers found this had an adverse effect on performance and additionally was not user-friendly. Our customers tell us that this feature was widely used to identify owner/update information in many business processes, including auditing, customer entry/update, research and testing. There have been various efforts to restore this feature by customising Java pages, but this was not fully successful in some cases. Oracle Support is happy to announce that this functionality has now been included in the Customer screens from Release 12.2 onwards. You will be able to query the record history at customer level, at site level, at site address level, and for all tabs relating to the customer. Simply click on the 'Record History' icon, available in the Record History column on a summary screen, or via the same icon on the individual detail screen, to display the following information: Last Updated Date, Last Updated By, Creation Date, Created By, Last Update Login.

    Read the article

  • Oracle Day 2012

    - by Mark Hesse
    As a keynote speaker at this year's Oracle Day 2012, "Your Vision, Engineered", I had the honor and pleasure of speaking to a crowd of about 150 attendees about our recently released, fourth-generation Exadata X3 In-Memory Machine in a presentation entitled "Oracle Exadata X3 - Transforming Data Management". The general theme of the thirty-minute talk was how to improve performance, lower costs, and build the foundation for your cloud service platform using Exadata. Since its introduction in 2008, I've watched first-hand as Exadata has evolved from a data-warehouse-only system to an OLTP and DW in-memory database machine capable of storing hundreds of terabytes of compressed user data in flash and main memory. Many of my Exadata customers are now purchasing additional systems as they continue to standardize Oracle 11g deployments on the best database platform available.

    Read the article

  • Blogger Blog Takes Ages to Load after Custom Domain Redirection

    - by abhisek
    I recently bought a custom domain for a Blogger blog (technabled.com) that I have had for some time now. I followed the instructions in Blogger's documentation and added the A records and CNAME record with my DNS provider. But now some strange problems are cropping up. If I connect to my broadband network and then ping technabled.com, it times out. Then, if I visit the web page - which takes almost a minute and a half to load - and ping technabled.com again, it shows the expected result. This is not just me: some of my regular readers report the same issue, so I am losing a lot of visits. What is stranger is that subsequent visits to the blog are faster. I have checked with a few online services to test the performance. WebPageTest seems to say the same thing: http://www.webpagetest.org/result/110117_1N_7PE/ (please see the First View / Repeat View times). Also, the PageSpeed score is not that bad, so I am ruling out other possibilities. I am at a loss as to what I should do to find a solution. Help is much appreciated. :)
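
    The pattern here - first load slow, repeats fast - is what a DNS lookup that times out before a fallback answers looks like, so it is worth timing the lookup on its own before blaming the page itself. For example (standard dig usage; 8.8.8.8 is just a convenient public resolver):

        # Time the name lookup by itself; check the ";; Query time:" line
        dig technabled.com

        # Compare against a public resolver to see whether the ISP's
        # resolver is the slow one
        dig @8.8.8.8 technabled.com

    With Blogger custom domains, one mistyped entry among the A records is a common culprit: clients that happen to try the bad address first hang until it times out, which would also explain why only some visits are slow.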

    Read the article

  • Oracle Solaris 11.1 available today

    - by user12611852
    Today Oracle is pleased to announce the availability of Oracle Solaris 11.1.
    Download Solaris 11.1. Order the Solaris 11.1 media kit. Existing customers can quickly and simply update using the network-based repository.
    Highlights include:
    • 8x faster database startup and shutdown, and online resizing of the database SGA, with a new optimized shared memory interface between the database and Oracle Solaris 11.1
    • Up to 20% throughput increases for Oracle Real Application Clusters by offloading lock management into the Oracle Solaris kernel
    • Expanded support for Software Defined Networks (SDN), with Edge Virtual Bridging enhancements to maximize network resource utilization and manage bandwidth in cloud environments
    • 4x faster Solaris Zone updates, with parallel operations that shorten maintenance windows
    • A new built-in memory predictor that monitors application memory use and provides optimized memory page sizes and resource locations to speed overall application performance
    Learn more and share these valuable tools with your customers to enable them to move to Oracle Solaris 11.1 quickly. Many customers wait for the first update - now is the time to encourage them to install Oracle Solaris 11.1.
    • Oracle Solaris 11.1 Data Sheet
    • What's New in Oracle Solaris 11.1
    • Oracle Solaris 11.1 FAQs
    • Oracle Solaris 11.1 Customer Presentation
    Oracle Solaris 11.1 is recommended for all SPARC T4 systems and will soon be available preinstalled.

    Read the article

  • Join me at OpenWorld 2012

    - by Michael Palmeter (Exalogic PM)
    For those of you who will be coming out to Oracle OpenWorld 2012 next week in San Francisco, I encourage you to take a few minutes on Monday afternoon to come to my session on Oracle Exalogic. Click here for more info: CON9416 - Oracle Exalogic 2.0: Ready-to-Deploy, Mission-Critical Private Cloud. My session is one of the first on Oracle Exalogic (one of the privileges of running Product Management for the product), and with that in mind it is going to be something of an introduction and overview. The material I will present is tailored for C-level customers who are interested in the product but haven't really been exposed to it in any detail. This is essentially the same sort of presentation I give to customers that visit Oracle HQ, and it provides context for all of the other excellent sessions that follow. During this session I will talk about:
    • The macro-trends in the industry that are driving Exalogic strategy - IT-as-a-Service and infrastructure convergence
    • The first two years of market success with Exalogic - who's bought it, why, and what their results have been
    • Exalogic key features and differentiation - why it's the best possible platform for Oracle business applications and middleware
    • How Exalogic performs, and why it is the hands-down performance champion of enterprise cloud platforms
    If you haven't signed up yet, please do. I'd love to see you there.

    Read the article

  • How can I get my monitor's maximum resolution without the proprietary AMD graphic driver installed?

    - by Venki
    I am using Ubuntu 14.04 with an AMD Radeon HD 5570 graphics card. The default open-source Radeon (REDWOOD) driver isn't letting me choose my monitor's maximum resolution (1366x768); I am offered just two resolutions, 1024x768 and 800x600. If I give the command xrandr -s 1366x768, the output is: Size 1366x768 not found in available modes. So, just for the sake of getting the 1366x768 resolution, I am forced to install the proprietary graphics driver that AMD offers on its site. But if I install it (which itself is quite a problem-prone process), I undergo a lot of 'inconvenience'. Sometimes, after an OS update, the driver crashes Unity, and then I have to uninstall the driver from a tty and google around for a solution. I also encounter occasional screen-tearing problems, and in addition I can't see my login screen (see this question, which states this particular problem). The main problem is that AMD does not update its driver as quickly as Ubuntu updates its OS, which is quite irritating. So, I want the maximum resolution (and performance) that my graphics card and monitor can give me without installing the 'problematic' proprietary driver from AMD. Is this possible? Suggestions please. Thanks in advance. PS: More system specs - Intel i3 2100 processor, P8H61-M PLUS2 motherboard, AMD Radeon HD 5570 graphics card, DELL monitor. (BTW, thank you for reading through my elaborate description!)
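
    With the open-source Radeon driver, a missing mode can usually be added by hand with cvt and xrandr - a hedged sketch, since the output name (VGA-0, DVI-0, HDMI-0, ...) varies per machine and is shown by xrandr -q. Note that cvt rounds the width up to 1368, a multiple of 8:

        # 1. Generate a modeline for roughly 1366x768 at 60 Hz
        cvt 1366 768 60
        # -> Modeline "1368x768_60.00"  85.25  1368 1440 1576 1784  768 771 781 798 -hsync +vsync

        # 2. Register the mode and attach it to your output (name from `xrandr -q`)
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode VGA-0 "1368x768_60.00"
        xrandr --output VGA-0 --mode "1368x768_60.00"

    If that works, the same commands can be made permanent via ~/.xprofile or an xorg.conf Monitor section.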

    Read the article

  • Join Us!! Live Webinar: Using UPK for Testing

    - by Di Seghposs
    Create Manual Test Scripts 50% Faster with Oracle User Productivity Kit. Thursday, March 29, 2012, 11:00 am - 12:00 pm ET. Click here to register now for this informative webinar. Oracle UPK enhances the testing phase of the implementation lifecycle by reducing test plan creation time, improving accuracy, and providing the foundation for reusable training documentation, application simulations, and end-user performance support - all critical assets to support an enterprise application implementation. With Oracle UPK:
    • Reduce manual test plan development time - accelerate the testing cycle by significantly reducing the time required to create the test plan.
    • Improve test plan accuracy - capture test steps automatically using Oracle UPK and import those steps directly into any of the supported testing suites, eliminating many of the errors that occur when writing manual tests.
    • Create the foundation for reusable assets - recorded simulations can be used in other lifecycle phases of the project, such as knowledge transfer for training and support.
    With its integration with Oracle Application Testing Suite, IBM Rational, and HP Quality Center, Oracle UPK allows you to deploy high-quality applications quickly and effectively by providing a consistent, repeatable process for gathering requirements, planning and scheduling tests, analyzing results, and managing issues. Join this live webinar and learn how to decrease your time to deployment and enhance your testing plans today!

    Read the article

  • "Misaligned partition" - Should I do repartition (how?)

    - by RndmUbuntuAmateur
    I tried to install Ubuntu 12.04 from a USB stick alongside an existing 64-bit Windows 7 OS, and now I'm not sure the install was completely successful: the Disk Utility tool claims that the Extended partition (which contains the Ubuntu partition and swap) is "misaligned" and recommends repartitioning. What should I do - and if I should repartition, how do I do it (especially without losing the data on the Win7 partition)? Background info: a fairly new ThinkPad laptop (UEFI BIOS, if that matters). Before the install there were already a SYSTEM_DRV partition, the main Windows partition and a Lenovo recovery partition (all NTFS). Now the table looks like this: SYSTEM_DRV (sda1), Windows (sda2), Extended (sda4) - which contains Linux (sda5; ext4) and swap (sda6) - and Recovery (sda3). The Disk Utility tool gives the following message when I select the Extended partition: "The partition is misaligned by 1024 bytes. This may result in very poor performance. Repartitioning is suggested." There were a couple of problems during the install, which I describe below in case they are relevant. The installer claimed that it recognized the existing OSes fine, so I checked the corresponding option during the install. Next, when it asked me how to allocate the disk space, the first weird thing happened: the installer gave me a graphical slider to allocate disk space between the pre-existing Win7 OS and the new Ubuntu, but it did not tell me which partition would be for Ubuntu and which for Windows. Well, I decided to go with the setting the installer proposed. (Not sure if this is relevant, but I guess I'd better mention it anyway - the partition tools I've used before have been more self-explanatory.) After the install (which reported no errors), GRUB/Ubuntu refused to boot. Luckily this problem was quite straightforwardly resolved with a live Ubuntu USB and Boot-Repair ("Recommended repair" worked just fine). After all this hassle I decided to check the partition table just to be sure - and the Disk Utility gives the warning message I described.
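
    Before repartitioning, it is worth cross-checking what is actually misaligned - parted has a built-in alignment test (partition numbers follow the table above: 4 = Extended, 5 = the ext4 partition):

        sudo parted /dev/sda align-check optimal 4    # the Extended container
        sudo parted /dev/sda align-check optimal 5    # the ext4 partition inside it
        sudo parted /dev/sda unit s print             # exact sector boundaries

    If only the extended container (sda4) is off by 1024 bytes, the warning is close to cosmetic: the container itself holds no file data, so what matters for performance is the alignment of sda5 and sda6.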

    Read the article

  • How will closures in Java impact the Java Community?

    - by Ryan Delucchi
    It is one of the most talked-about features planned for Java: closures. Many of us have been longing for them. Some of us (including I) have grown a bit impatient and have turned to scripting languages to fill the void. But once closures have finally arrived in Java, how will they affect the Java community? Will the advancement of VM-targeted scripting languages slow to a crawl, stay the same, or accelerate? Will people flock to the new closure syntax, turning Java code-bases all around into more functionally structured implementations? Will we only see closures sprinkled throughout Java here and there? What will be the effect on tool/IDE support? How about performance? And finally, what will it mean for Java's continued adoption as a language, compared with other languages that are rising in popularity? To provide an example of one of the latest proposed Java closure syntax specs:

        public interface StringOperation {
            String invoke(String s);
        }
        // ...
        (new StringOperation() {
            public String invoke(String s) {
                return new StringBuilder(s).reverse().toString();
            }
        }).invoke("abcd");

    would become...

        String reversed = { String s => new StringBuilder(s).reverse().toString() }.invoke("abcd");

    [source: http://tronicek.blogspot.com/2007/12/closures-closure-is-form-of-anonymous_28.html]
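
    For reference - and noting that the snippet above quotes just one of several competing proposals from that period - the syntax that ultimately shipped in Java 8 binds a lambda to a single-method (functional) interface. A minimal compilable example:

        public class ReverseDemo {
            interface StringOperation {
                String invoke(String s);
            }

            public static void main(String[] args) {
                // The compiler targets the interface's single abstract method.
                StringOperation reverse = s -> new StringBuilder(s).reverse().toString();
                System.out.println(reverse.invoke("abcd"));   // prints "dcba"
            }
        }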

    Read the article

  • Android - big game universe

    - by user1641923
    I am new to Android development, though I have a lot of experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3d Studio Max, Flash, etc.). I am starting to work on an Android game: a 2D space shooter/RPG, without using any game engines or third-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random number of single asteroids/comets which the player can interact with, and a non-interactive background. I am looking for the least complicated approach to creating such a universe. My current ideas are: 1) simply create bitmaps with space scenery backgrounds that can be seamlessly tiled and repeated, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it; 2) use vector graphics - a solid-colour background, some random background objects, and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory during the whole game? I am interested in technical details regarding each of the ideas, and a suggestion as to which I should go with.
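
    A sketch of idea 1 in plain Java - names are hypothetical, and on Android the drawTile call would be a Canvas.drawBitmap - showing how floor-modulo arithmetic turns a small tile set into a borderless, pseudo-infinite world. Only the on-screen tiles ever need bitmaps in memory:

        public class TiledBackground {
            static final int TILE = 256;        // tile size in pixels
            static final int WORLD_TILES = 64;  // world wraps every 64 tiles

            interface TileDrawer { void drawTile(int tileIndex, int screenX, int screenY); }

            // cameraX/cameraY may grow without bound; floorMod keeps lookups
            // valid, so the player never reaches an edge.
            static void draw(TileDrawer d, long cameraX, long cameraY, int screenW, int screenH) {
                long firstCol = Math.floorDiv(cameraX, TILE);
                long firstRow = Math.floorDiv(cameraY, TILE);
                for (long row = firstRow; row * TILE < cameraY + screenH; row++) {
                    for (long col = firstCol; col * TILE < cameraX + screenW; col++) {
                        int tx = (int) Math.floorMod(col, WORLD_TILES);
                        int ty = (int) Math.floorMod(row, WORLD_TILES);
                        int tileIndex = ty * WORLD_TILES + tx;   // which bitmap to draw
                        d.drawTile(tileIndex,
                                   (int) (col * TILE - cameraX),
                                   (int) (row * TILE - cameraY));
                    }
                }
            }

            public static void main(String[] args) {
                // Print which tiles would be drawn for a 512x384 view at a huge offset.
                draw((tile, x, y) -> System.out.printf("tile %d at (%d,%d)%n", tile, x, y),
                     1_000_000_123L, 42L, 512, 384);
            }
        }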

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following: It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later to the database. It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be features currently available to the customers at different tiers of pricing. Instead of hardcoding these, we read them from database. The described solution is flexible but there are some reasons why I don't like it. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system. Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like a wrong approach. Is it? Any ideas how best to solve these problems? Is there a better approach for handling the content that I'm overlooking?
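
    A sketch of the lookup side of such a scheme - the Snippet model and key names are hypothetical, while cache.get_or_set is the stock Django cache API - where a defaults table gives developers and tests a sane value even when the local database has never been seeded:

        # Sketch of the caching layer described above (hypothetical names).
        from django.core.cache import cache

        DEFAULTS = {
            "pricing.pro.monthly": "29",          # sane fallbacks for dev/test DBs
            "home.headline": "Lorem ipsum...",
        }

        def site_content(key, timeout=300):
            def fetch():
                from myapp.models import Snippet   # hypothetical model
                row = Snippet.objects.filter(key=key).first()
                return row.value if row else DEFAULTS.get(key, "")
            return cache.get_or_set(f"content:{key}", fetch, timeout)

    Checking DEFAULTS into the repository (or loading it via a data migration, as the post itself suggests) keeps local, CI and staging environments from silently diverging from production.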

    Read the article

  • Are Windows partitions gone?

    - by Gigili
    I had Windows 7 on my laptop (factory install). Because of some performance issues, I decided to use the recovery options to restore it to its factory condition, but I don't know what happened, or what I did, that made the whole operating system disappear after playing around with the recovery options in the boot menu. I couldn't find Windows, so I installed Ubuntu 11.04 on the laptop. Last time I had Ubuntu on it, it was not really compatible with the laptop's configuration and I had a few problems doing the normal tasks I used to do on Windows. Now I want to make sure that Windows and its drivers are gone, so that I can try to install a newer version of Ubuntu or Windows. I tried the command sudo fdisk -l and the result shown was:

        myaccount@myaccount-VPCS116FG:~$ sudo fdisk -l
        [sudo] password for myaccount:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00025b5f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1       38409   308515840   83  Linux
        /dev/sda2           38409       38914     4052993    5  Extended
        /dev/sda5           38409       38914     4052992   82  Linux swap / Solaris

        Disk /dev/dm-0: 4150 MB, 4150263808 bytes
        255 heads, 63 sectors/track, 504 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xa668cfe8

        Disk /dev/dm-0 doesn't contain a valid partition table

    Is it gone? If not, what command should I try to get access to the Windows partitions? Thank you.

    Read the article

  • Using C++ but not using the language's specific features, should switch to C?

    - by Petruza
    I'm developing a NES emulator as a hobby, in my free time. I use C++ because it is the language I use mostly, know mostly and like mostly. But now that I have made some progress on the project, I realize I'm using almost no C++-specific features, and could have done it in plain C with the same result. I don't use templates, operator overloading, polymorphism or inheritance. So what would you say - should I stay with C++ or rewrite it in C? I won't do this to gain performance (that could come as a side effect); the idea is: why should I use C++ if I don't need it? The only features of C++ I'm using are classes to encapsulate data and methods, but that can be done just as well with structs and functions; I'm using new and delete, but could as well use malloc and free; and I'm using inheritance just for callbacks, which could be achieved with pointers to functions. Remember, it's a hobby project with no deadlines, so the overhead in time and work that a rewrite would require is not a problem - it might even be fun. So, the question is: C or C++?
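
    For what it's worth, the two replacements mentioned do translate mechanically - a sketch in C with illustrative names, a struct plus free functions standing in for a class and a function pointer standing in for the callback inheritance:

        #include <stdio.h>
        #include <stdlib.h>

        /* A "class" as a struct plus free functions (illustrative names). */
        typedef struct {
            unsigned char a, x, y;   /* a few 6502-style registers */
        } Cpu;

        Cpu *cpu_new(void) { return calloc(1, sizeof(Cpu)); }
        void cpu_free(Cpu *c) { free(c); }

        /* A callback as a plain function pointer instead of a virtual method. */
        typedef void (*InterruptHandler)(Cpu *);

        void on_nmi(Cpu *c) { printf("NMI, a=%u\n", c->a); }

        void trigger(Cpu *c, InterruptHandler handler) { handler(c); }

        int main(void) {
            Cpu *c = cpu_new();
            c->a = 42;
            trigger(c, on_nmi);   /* prints: NMI, a=42 */
            cpu_free(c);
            return 0;
        }

    The usual counterargument is that destructors/RAII are the one C++ convenience that C emulates least cheaply - worth weighing before committing to a rewrite.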

    Read the article

  • Graphics hardware warning when updating to 14.04

    - by pacomet
    As I use Ubuntu at work, I only update to LTS versions, but now I'm not sure whether I can or should. My computer is now ten years old; I would replace it if it were mine, but as it is owned by my employer I have to work with it. It's not a bad one - it runs fine (this was not true when it still had Windows on it ;-). When updating to 14.04, the updater warns about possibly bad/slow performance with Unity 3D, so I stopped the update, since this is at work and not my own computer. As I understand from http://askubuntu.com/a/438958/25305, the Nvidia GeForce FX 5500 graphics card is still supported in 14.04. Now, in 12.04, I have driver version 173, and Unity 2D runs fine for me. Output of /usr/lib/nux/unity_support_test -p:

        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce FX 5500/AGP/SSE2
        OpenGL version string: 2.1.2 NVIDIA 173.14.39
        Not software rendered: yes
        Not blacklisted: no
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    Should I update? Is it better to stay with 12.04? Thanks

    Read the article
