Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 5 of 1330

  • Database design grouping contacts by lists and companies

    - by Serge
    Hi, I'm wondering what would be the best way to group contacts by their company. Right now a user can group their contacts by custom created lists, but I'd like to be able to group contacts by their company as well as store the contact's position (i.e. Project Manager of XYZ company). Database-wise, this is what I have for grouping contacts into lists:

        contact       [id_contact] [int] PK NOT NULL, [lastName] [varchar](128) NULL, [firstName] [varchar](128) NULL, ...
        contact_list  [id_contact] [int] FK, [id_list] [int] FK
        list          [id_list] [int] PK, [id_user] [int] FK, [list_name] [varchar](128) NOT NULL, [description] [TEXT] NULL

    Should I implement something similar for grouping contacts by company? If so, how would I store the contact's position in that company, and how can I prevent data corruption if a user modifies a contact's company name? For instance, John Doe changed companies but his co-workers are still in the old company. I doubt that will happen often (it might not even happen at all), but better safe than sorry. I'm also keeping an audit trail, so in a way the contact would still need to be linked to the old company as well as the new one, but without confusing which company he's actually working at at the moment. I hope that made sense... Has anyone encountered such a problem?

    UPDATE: Would something like this make sense?

        contact_company  [id_contact_company] [int] PK, [id_contact] [int] FK, [id_company] [int] FK, [contact_title] [varchar](128)
        company          [id_company] [int] PK NOT NULL, [company_name] [varchar](128) NULL, [company_description] [varchar](300) NULL, [created_date] [datetime] NOT NULL

    This way a contact can work for more than one company, and contacts can be grouped by companies.
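    A minimal sketch of that junction-table approach in SQL Server-style DDL, to match the schema above. The start_date/end_date columns are an assumption added to cover the audit-trail requirement: moving a contact closes their old row and inserts a new one, so co-workers' links to the old company are untouched and the history is preserved.

        CREATE TABLE company (
            id_company          INT IDENTITY PRIMARY KEY,
            company_name        VARCHAR(128) NOT NULL,
            company_description VARCHAR(300) NULL,
            created_date        DATETIME NOT NULL
        );

        CREATE TABLE contact_company (
            id_contact_company INT IDENTITY PRIMARY KEY,
            id_contact         INT NOT NULL REFERENCES contact (id_contact),
            id_company         INT NOT NULL REFERENCES company (id_company),
            contact_title      VARCHAR(128) NULL,   -- e.g. 'Project Manager'
            start_date         DATETIME NOT NULL,
            end_date           DATETIME NULL        -- NULL = current employer
        );

        -- Group contacts by their current company
        SELECT co.company_name, c.lastName, c.firstName, cc.contact_title
        FROM contact c
        JOIN contact_company cc ON cc.id_contact = c.id_contact AND cc.end_date IS NULL
        JOIN company co ON co.id_company = cc.id_company
        ORDER BY co.company_name, c.lastName;

    Renaming a company then touches only company.company_name, and a contact changing jobs never rewrites anyone else's rows.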

    Read the article

  • Database Developer - October 2013 issue: Download Database 12c and related products

    - by Javier Puerta
    The October issue of the Database Application Developer newsletter is now available. The focus of this issue is on downloads of Database 12c and related products. (Full newsletter here)

    Get Ready to Download, Deploy and Develop for Oracle Database 12c: This month we're focused on downloads. We've rounded up the top developer releases (both early adopter and BETA releases) and the articles that will help you do more with Oracle 12c. See the technical content that will help you get started. If you're ready... away we go! — Laura Ramsey, Database and Developer Community, Oracle Technology Network Team

    FEATURED DOWNLOADS

    Download: Oracle Database 12c. According to Tom Kyte, the Oracle 12c version has some of the biggest enhancements to the core database since version 6. Check it out for yourself.

    Download: Oracle SQL Developer 4.0 Early Adopter 2 is here. Oracle SQL Developer is a free IDE that simplifies the development and management of Oracle Database. It is a complete end-to-end development platform for your PL/SQL applications that features a worksheet for running queries and scripts, a DBA console for managing the database, a reports interface, a complete data modeling solution and a migration platform for moving your 3rd-party databases to Oracle. If you are interested in checking out this new early adopter version, Oracle SQL Developer 4.0 EA is the place to go.

    Download: Oracle 12c Multitenant Self Provisioning Application (BETA). The BETA is here. The Multitenant Self Provisioning Application is an easy and productive way for DBAs and developers to get familiar with powerful PDB features, including create, clone, plug and unplug. No better time to start playing with PDBs.

    Download: New! Updates to the Oracle Data Integration Portfolio. Oracle GoldenGate 12c and Oracle Data Integrator 12c are now available: real-time data integration, transactional change data capture, data replication, transformations, high-volume, high-performance batch loads, and event-driven, trickle-feed integration processes. Go here for all the details and links to downloads... and congratulations, Data Integration Team!

    Download: Oracle VM Templates for Oracle 12c. Features include support for Single Instance, Oracle Restart and Oracle RAC, and support for all current Oracle Database 11.2 versions as well as Oracle 12c on Oracle Linux 5 Update 9 and Oracle Linux 6 Update 4. The Oracle 12c templates allow end-to-end automation for Flex Cluster, Flex ASM and PDBs. See how the Deploycluster tool was updated to support Single Instance and the new Oracle 12c features.

    Download: Oracle SQL Developer Data Modeler 4.0 EA 3. If you're looking for a data modeling and database design tool that provides an environment for capturing, modeling, managing and exploiting metadata, it's time to check out Oracle SQL Developer Data Modeler. Oracle SQL Developer Data Modeler 4.0 EA V3 is here.

    Read the article

  • Suggestions for programming language and database for a high-end database querying system (>50 million queries/day)

    - by mmdave
    These requirements are sketchy at the moment, but I will appreciate any insights. We are exploring what would be required to build a system that can handle 50 million database queries a day, specifically in terms of the programming language and database choice. It's not a typical website, but an API / database accessed through the internet. Speed is critical. The application will primarily receive these inputs (about a few KB each) and will have to address each of them via a database lookup. Only a few KB will be returned. The server will run over HTTPS/SSL.
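    For scale, 50 million queries a day averages out to roughly 580 lookups per second (peaks will be higher), which is modest as long as each request stays a single indexed point read. A minimal sketch of that access pattern in SQL; the table, column names and sizes are hypothetical since the actual schema isn't given:

        -- One row per key, sized so the whole answer comes from a primary-key lookup
        CREATE TABLE api_lookup (
            lookup_key VARCHAR(64) PRIMARY KEY,   -- whatever uniquely identifies the input
            payload    VARCHAR(8000) NOT NULL,    -- the few KB returned to the caller
            updated_at TIMESTAMP NOT NULL
        );

        -- Each API request becomes one indexed read
        SELECT payload FROM api_lookup WHERE lookup_key = ?;

    At this volume the choice of language matters less than keeping each lookup to a single indexed round trip, with connection pooling and prepared statements on the API side.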

    Read the article

  • Instructor Insight: Using the Container Database in Oracle Database 12c

    - by Breanne Cooley
    The first time I examined the Oracle Database 12c architecture, I wasn't quite sure what I thought about the Container Database (CDB). In the current release of the Oracle RDBMS, the administrator now has a choice of whether or not to employ a CDB.

    Bundling Databases Inside One Container. In today's IT industry, consolidation is a common challenge. With potentially hundreds of databases to manage and maintain, an administrator will require a great deal of time and resources to upgrade and patch software. Why not consider deploying a container database to streamline this activity? By "bundling" several databases together inside one container, in the form of pluggable databases, we can save on overhead process resources and CPU time. Furthermore, we can reduce the human effort required for periodically patching and maintaining the software.

    Minimizing Storage. Most IT professionals understand the concept of storage, as in solid state or non-rotating. Let's take one to many databases and "plug" them into ONE designated container database. We can minimize many redundant pieces that would otherwise require separate storage and architecture, as was the case in previous releases of the Oracle RDBMS. The data dictionary can be housed and shared in one CDB, with individual metadata content for each pluggable database. We won't need as many background processes either, thus reducing the overhead cost of CPU resources.

    Improving Security Levels within Each Pluggable Database. We can now segregate the CDB-administrator role from that of the pluggable-database administrator, achieving improved security levels within each pluggable database and within the CDB. And if the administrator chooses to use the non-CDB architecture, everything is backwards compatible, too.

    The bottom line: it's a good idea to at least consider using a CDB.

    -Christopher Andrews, Senior Principal Instructor, Oracle University
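    For concreteness, these are the kinds of pluggable-database operations the article is describing, in standard Oracle 12c SQL; the PDB names and file paths are illustrative only:

        -- Create a new pluggable database inside the container
        CREATE PLUGGABLE DATABASE salespdb
          ADMIN USER sales_admin IDENTIFIED BY sales_pwd
          FILE_NAME_CONVERT = ('/pdbseed/', '/salespdb/');

        ALTER PLUGGABLE DATABASE salespdb OPEN;

        -- Clone an existing PDB, e.g. to spin up a test copy
        CREATE PLUGGABLE DATABASE salespdb_test FROM salespdb
          FILE_NAME_CONVERT = ('/salespdb/', '/salespdb_test/');

        -- Unplug a PDB so it can be plugged into another CDB
        ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
        ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/tmp/salespdb.xml';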

    Read the article

  • Oracle Database Appliance Setup Poster Updated

    - by Ravi.Sharma
    The newly updated Setup Poster for Oracle Database Appliance is now available at http://wd0338.oracle.com/archive/cd_ns/E22693_01/index.htm

    This updated poster is a comprehensive source of information for anyone planning to deploy Oracle Database Appliance. It includes two main sections, conveniently printed on the two sides of a single 11x17 page:

    1. Preparing to Deploy Oracle Database Appliance
    2. Oracle Database Appliance Setup

    The Preparing to Deploy Oracle Database Appliance section provides a concise list of items to plan for and review before beginning deployment. This includes registering Support Identifiers, allocating IP addresses, downloading software and patches, and choosing configuration options, as well as important links to useful information.

    The Oracle Database Appliance Setup section provides a step-by-step procedure for deploying and configuring Oracle Database Appliance. This includes initially powering up Oracle Database Appliance, configuring the initial network, downloading software, and completing the configuration using the Oracle Database Appliance Configurator (GUI).

    Read the article

  • Oracle Database Machine and Exadata Storage Server

    - by jean-marc.gaudron(at)oracle.com
    Master Note for Oracle Database Machine and Exadata Storage Server (Doc ID 1187674.1)

    This Master Note is intended to provide an index and references to the most frequently used My Oracle Support Notes with respect to Oracle Exadata and Oracle Database Machine environments. This Master Note is subdivided into categories to allow for easy access and reference to notes that are applicable to your area of interest. This includes the following categories:

    • Database Machine and Exadata Storage Server Concepts and Overview
    • Database Machine and Exadata Storage Server Configuration and Administration
    • Database Machine and Exadata Storage Server Troubleshooting and Debugging
    • Database Machine and Exadata Storage Server Best Practices
    • Database Machine and Exadata Storage Server Patching
    • Database Machine and Exadata Storage Server Documentation and References
    • Database Machine and Exadata Storage Server Known Problems
    • ASM and RAC Documentation
    • Using My Oracle Support Effectively

    Read the article

  • What is the best database for my needs?

    - by Mr. Flibble
    I am currently using MS SQL Server 2008 but I'm not sure if it is the best system for this particular task. I have a single table like so:

        PK_ptA, PK_ptB, DateInserted, LookupColA, LookupColB, ... LookupColF, DataCol (ntext)

    A common query is:

        SELECT TOP(1000000) DataCol FROM table
        WHERE LookupColA=x AND LookupColD=y AND LookupColE=z
        ORDER BY DateInserted DESC

    The table has about a billion rows, with 5 million inserted per day. My main problem with SQL Server is that it isn't too easy to shard or spread out the datafiles. Also, exporting seems to max out at 1000 rows per second (about 1 MB/s), which seems very slow. Another problem I have is that, with SQL Server, if I want to add a new LookupCol the log file grows enormously, requiring a large amount of rarely used free space on tap. Are there any obvious better solutions for this problem?
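    Whichever engine ends up being used, the query above benefits from a composite index that matches both the filter columns and the sort order. A hedged sketch in SQL Server syntax; the index, table and partition names are illustrative, and the boundary dates are placeholders (ntext columns such as DataCol cannot be INCLUDE columns, so the data itself is still fetched by key lookup):

        -- Index matching the WHERE columns and the ORDER BY direction
        CREATE NONCLUSTERED INDEX IX_BigTable_Lookup
            ON dbo.BigTable (LookupColA, LookupColD, LookupColE, DateInserted DESC);

        -- Date-based partitioning makes old data cheap to switch out and, with
        -- per-partition filegroups, lets the data files be spread out; a single
        -- filegroup is used here only to keep the sketch short.
        CREATE PARTITION FUNCTION pf_ByMonth (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01', '2010-02-01', '2010-03-01');
        CREATE PARTITION SCHEME ps_ByMonth
            AS PARTITION pf_ByMonth ALL TO ([PRIMARY]);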

    Read the article

  • Oracle Announces General Availability of Oracle Database 12c, the First Database Designed for the Cloud

    - by Javier Puerta
    Oracle Announces General Availability of Oracle Database 12c, the First Database Designed for the Cloud. REDWOOD SHORES, Calif. – July 1, 2013. News Summary: As organizations embrace the cloud, they seek technologies that will transform business and improve their overall operational agility and effectiveness. Oracle Database 12c is a next-generation database designed to meet these needs, providing a new multitenant architecture on top of a fast, scalable, reliable, and secure database platform. By plugging into the cloud with Oracle Database 12c, customers can improve the quality and performance of applications, save time with maximum availability architecture and storage management, and simplify database consolidation by managing hundreds of databases as one. Read the full press release.

    Read the article

  • Single value data to multiple values of data in database relation

    - by Sofiane Merah
    I have such a hard time picturing this. I just don't have the brain to do it. I have a table called reports:

        | report_id | set_of_bads | field1 | field2 |
        |       123 | set1        | qwe    | qwe    |
        |       321 | 123112      | ewq    | ewq    |

    I have another table called bads. This table contains a list of bad data:

        | bad_id | set_it_belongs_to | field2 | field3 |
        |      1 | set1              | qwe    | qwe    |
        |      2 | set1              | qee    | tte    |
        |      3 | set1              | q44w   | 3qwe   |
        |      4 | 234               | qoow   | 3qwe   |

    Now, I have set the first field of every table as the primary key. My question is: how do I connect the field set_of_bads to set_it_belongs_to in the bads table? That way, if I want to get the entire set of data that is set1 by going through the reports table, I can do it. Example: hey reports table, bring up the row that has report_id 123. Okay, thank you. Now get all the rows from bads that have the set_of_bads value from the row with report_id 123. Thanks.
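    A minimal sketch of that lookup as a join, using the table and column names above; the bad_sets table at the end is an assumption, added only to show how the relationship could be enforced with foreign keys:

        -- "Report 123 -> all of its bad rows", joining the two set columns
        SELECT b.*
        FROM reports r
        JOIN bads b ON b.set_it_belongs_to = r.set_of_bads
        WHERE r.report_id = 123;

        -- Optional: give the set identifier a home table so both sides can reference it
        CREATE TABLE bad_sets (set_id VARCHAR(32) PRIMARY KEY);
        ALTER TABLE reports ADD FOREIGN KEY (set_of_bads)       REFERENCES bad_sets (set_id);
        ALTER TABLE bads    ADD FOREIGN KEY (set_it_belongs_to) REFERENCES bad_sets (set_id);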

    Read the article

  • Is it a bad practice to store large files (10 MB) in a database?

    - by B Seven
    I am currently creating a web application that allows users to store and share files, 1 MB - 10 MB in size. It seems to me that storing the files in a database will significantly slow down database access. Is this a valid concern? Is it better to store the files in the file system and save the file name and path in the database? Are there any best practices related to storing files when working with a database? I am working in PHP and MySQL for this project, but is the issue the same for most environments (Ruby on Rails, PHP, .NET) and databases (MySQL, PostgreSQL)?
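    If the files live on the file system, the database row usually carries only the metadata needed to locate and validate them. A small sketch in MySQL syntax (to match the question); the table and column names are illustrative:

        CREATE TABLE user_files (
            file_id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            user_id    INT UNSIGNED NOT NULL,
            file_name  VARCHAR(255) NOT NULL,   -- original name shown to users
            file_path  VARCHAR(500) NOT NULL,   -- location on disk or in object storage
            size_bytes BIGINT UNSIGNED NOT NULL,
            sha1_hash  CHAR(40) NOT NULL,       -- detects corruption and duplicate uploads
            created_at DATETIME NOT NULL,
            INDEX idx_user_files_user (user_id)
        ) ENGINE=InnoDB;

    The database then stays small and fast to back up, while the web server (or a CDN) streams the file bytes directly.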

    Read the article

  • Oracle Database 12c (Japanese-language announcement)

    - by OTN-J Master
    [Japanese-language announcement of Oracle Database 12c, with pointers to the Oracle Database 12c product and OTN download pages, the Oracle Database New Features Guide 12c Release 1 (PDF and HTML editions), and upcoming 12c articles on OTN, its RSS feed and the OTN Twitter account.]

    Read the article

  • Effective backup and archive strategy for database and linked files

    - by busyspin
    I am using Postgres to store a variety of application data for a webapp. Part of the application involves storing and retrieving user-uploaded files. I am storing the files in the filesystem with some associated metadata in the database. I am trying to come up with a backup and archive strategy so that I can effectively back up and archive/restore the database and the linked files. Here are the things I want to accomplish:

    1. Perform routine backups that can be used for recovery from failures and which include all DB data and the linked files. Ideally, this backup would be done while the app is running. Live backup is certainly possible with a DB, but I am not sure how to keep the linked files consistent with the database during the backup process.
    2. Archive chunks of data as they become "old". These chunks must include the database data plus any linked files. It should be possible to put the archived data back into production again, and it would be ideal if it were easy to determine which ranges of objects were stored in each chunk.

    Do you have any advice for how to accomplish these goals? If the files were in the database as BLOBs these tasks would be much easier, since normal database backup and restore functionality would handle them. I am not sure how to accomplish the same thing when file data is linked to database rows.
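    For the archiving side, "which files belong to this chunk" can be answered from the stored metadata, so the database rows and the linked files for a range get exported together. A hedged sketch in Postgres SQL, assuming a hypothetical uploads table with id, file_path and created_at columns; the date boundaries are illustrative:

        -- Rows (and therefore files) that belong to the archive chunk for 2012
        SELECT id, file_path
        FROM uploads
        WHERE created_at >= DATE '2012-01-01'
          AND created_at <  DATE '2013-01-01'
        ORDER BY created_at;

        -- The same predicate drives both halves of the chunk: export the matching
        -- rows (here via COPY) and copy the listed file_path values to the archive.
        COPY (
            SELECT * FROM uploads
            WHERE created_at >= DATE '2012-01-01'
              AND created_at <  DATE '2013-01-01'
        ) TO '/backups/uploads_2012.csv' WITH CSV HEADER;

    For the live backup, exporting the file list inside the same transaction that snapshots the database (or immediately after pg_dump starts) keeps the two sides close; files added during the copy window can be picked up by the next run.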

    Read the article

  • The MYSTERY SPIRAL

    - by CVS26
    Problem statement: Given an integer N, print N*N numbers in an N x N spiral.

    Detailed problem description: http://2600hertz.wordpress.com/2010/03/20/the-mystery-spiral/

    Solution: I recently posted the following code (managed to compress it into as few as 99 lines...):

        // File   : spiral.c
        // INPUT  : Size of spiral (N)
        // OUTPUT : Numbers printed in an N x N spiral
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            int N;

            // get input no. N
            printf("\nEnter size of Matrix: ");
            scanf("%d", &N);

            // Allocate reqd. memory (sizeof(int) per cell)
            int *matrix_ptr = (int *)malloc(N * N * sizeof(int));

            // Fill the matrix spirally, counting down from N*N along the outer ring
            int curr_val = N * N;
            int *curr_ptr = matrix_ptr;
            int curr_level = N;
            while (curr_level > 1) {
                // curr_level-1 elements horizontally, left to right
                for (int x = 0; x < curr_level - 1; x++) { *curr_ptr = curr_val; curr_val--; curr_ptr++; }
                // curr_level-1 elements vertically, top to bottom
                for (int y = 0; y < curr_level - 1; y++) { *curr_ptr = curr_val; curr_val--; curr_ptr += N; }
                // curr_level-1 elements horizontally, right to left
                for (int z = 0; z < curr_level - 1; z++) { *curr_ptr = curr_val; curr_val--; curr_ptr--; }
                // curr_level-1 elements vertically, bottom to top
                for (int w = 0; w < curr_level - 1; w++) { *curr_ptr = curr_val; curr_val--; curr_ptr -= N; }
                // Move to the next inner ring
                curr_ptr += N + 1;
                curr_level -= 2;
            }
            // Only an odd-sized spiral has a single centre cell left to fill
            if (curr_level == 1)
                *curr_ptr = curr_val;

            // routine to print the matrix
            printf("\n\n\n\n\n");
            for (int i = 0; i < N; i++) {
                for (int j = 0; j < N; j++) {
                    printf("%d\t", *(matrix_ptr + (i * N + j)));
                }
                printf("\n");
            }

            free(matrix_ptr);
            return 0;
        }

    Please comment on further optimisations (if any)...

    Read the article

  • Oracle Database introductory seminar for beginners (Japanese-language OTN on-demand listing)

    - by Yusuke.Yamamoto
    [Japanese-language listing for an on-demand OTN seminar, held 2010/12/01, introducing Oracle Database basics for beginners. Video and slides:]
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/Nyumon12011100.wmv
    http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/20101201_Oracle_Beginner.pdf

    Read the article

  • Oracle Database architecture seminar for beginners (Japanese-language OTN on-demand listing)

    - by Yusuke.Yamamoto
    [Japanese-language listing for an on-demand OTN seminar, held 2010/12/08, covering Oracle Database architecture for beginners. Video and slides:]
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/Nyumon12081100.wmv
    http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/1208_1100_Oracle_Beginner_architecture.pdf

    Read the article

  • Better way to design a database

    - by cMinor
    I have a conceptual problem and I would like to get your ideas on how I can do what I am aiming for. My goal is to create a database with information about the people who work at a place, depending on their profession and skills, and to keep control of salary and projects (how much a project would cost when summing all the hours of work). I have 3 categories which can have subcategories:

    • Outsourcing
    • Technician: welder, turner, assistant
    • Administrative: supervisor, manager

    So each person has their information and the projects they are working on; also, one person may do several jobs... I was thinking about having 5 tables (EMPLOYEE, SKILLS, PROYECTS, SALARY, PROFESSION), but I guess there is a better way of doing this.

        create table Employee (
            PRIMARY KEY [Person_ID] int(10),
            [Name] varchar(30),
            [sex] varchar(10),
            [address] varchar(10),
            [profession] varchar(10),
            [Skills_ID] int(10),
            [Proyect_ID] int(10),
            [Salary_ID] int(10),
            [Salary] float
        )

        create table Skills (
            PRIMARY KEY [Skills_ID] int(10),
            FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
            [Skills_pay] float(10),
            [Comments] varchar(50)
        )

        create table Proyects (
            PRIMARY KEY [Proyect_ID] int(10),
            FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
            [Proyect_name] varchar(10),
            [working_Hours] float(10),
            [Comments] varchar(50)
        )

        create table Salary (
            PRIMARY KEY [Salary_ID] int(10),
            FOREIGN KEY [Skills_name] varchar(10) REFERENCES Employee(Person_ID),
            [Proyect_name] varchar(10),
            [working_Hours] float(10),
            [Comments] varchar(50)
        )

    So to get the total cost of a project I would just sum the working hours of each employee involved and add some extra costs in an aggregate query. Is there a way to do this more efficiently? What should I add to or delete from this small model? I guess I am missing something in the salary - maybe I need another table for that?
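    A hedged sketch of one way to reshape this, with skills and projects in their own tables and many-to-many junction tables carrying the hours; the names and types are illustrative, and the hourly rate lives on the employee so the project cost falls out of a single aggregate:

        CREATE TABLE Employee (
            Person_ID   INT PRIMARY KEY,
            Name        VARCHAR(30),
            Sex         VARCHAR(10),
            Address     VARCHAR(100),
            Hourly_Rate DECIMAL(10,2)
        );

        CREATE TABLE Skill (
            Skill_ID   INT PRIMARY KEY,
            Skill_Name VARCHAR(30)          -- e.g. 'welder', 'turner', 'supervisor'
        );

        CREATE TABLE Proyect (
            Proyect_ID   INT PRIMARY KEY,
            Proyect_Name VARCHAR(50)
        );

        -- A person can have many skills; a skill can belong to many people
        CREATE TABLE Employee_Skill (
            Person_ID INT REFERENCES Employee (Person_ID),
            Skill_ID  INT REFERENCES Skill (Skill_ID),
            PRIMARY KEY (Person_ID, Skill_ID)
        );

        -- Hours worked per person per project; this is what the cost query needs
        CREATE TABLE Proyect_Work (
            Person_ID     INT REFERENCES Employee (Person_ID),
            Proyect_ID    INT REFERENCES Proyect (Proyect_ID),
            Working_Hours DECIMAL(10,2),
            PRIMARY KEY (Person_ID, Proyect_ID)
        );

        -- Total cost of each project: each worker's hours times their rate
        SELECT p.Proyect_Name, SUM(w.Working_Hours * e.Hourly_Rate) AS total_cost
        FROM Proyect p
        JOIN Proyect_Work w ON w.Proyect_ID = p.Proyect_ID
        JOIN Employee e     ON e.Person_ID  = w.Person_ID
        GROUP BY p.Proyect_Name;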

    Read the article

  • Unity3D draw call optimization : static batching VS manually draw mesh with MaterialPropertyBlock

    - by Heisenbug
    I've read the Unity3D draw call batching documentation. I understood it, and I want to use it (or something similar) in order to optimize my application. My situation is the following:

    • I'm drawing hundreds of 3D buildings.
    • Each building can be represented using a Mesh (or a SubMesh for each building, but I don't think this will affect performance).
    • Each building can be textured with several combinations of texture patterns (walls, windows, ...). Textures are stored in an atlas for optimization (see Texture2D.PackTextures).
    • Texture mapping and facade pattern generation are done in the fragment shader. The shader can be the same (except for a few values) for all buildings, so I'd like to use a sharedMaterial in order to optimize the parameters passed to the GPU.

    The main problem is that, even if I use an atlas, share the material, and declare the objects as static to use static batching, there are a few parameters (very few, it could even be just a float) that should be different for every draw call. I don't know exactly how to manage this situation in Unity3D. I'm trying two different solutions, neither of them completely implemented.

    Solution 1
    1. Build a GameObject for each building (I don't much like the overhead of a GameObject, anyway...).
    2. Prepare each GameObject to be static batched with StaticBatchingUtility.Combine.
    3. Pack all textures into an atlas.
    4. Assign the Material (basically the shader and the atlas) to the parent GameObject of the combined batched objects.
    5. Change some properties in the material before drawing an object.

    The problem is point 5. Let's say I have to assign a different id to an object before drawing it; how can I do this? If I use a different material for each object I can't benefit from static batching. If I use a sharedMaterial and I modify a material property, all GameObjects will reference the same modified variable.

    Solution 2
    1. Build a Mesh for every building (sounds better, no GameObject overhead).
    2. Pack all textures into an atlas.
    3. Draw each mesh manually using Graphics.DrawMesh.
    4. Customize each DrawMesh call using a MaterialPropertyBlock.

    This would solve the issue of slightly modifying material properties for each draw call, but the documentation isn't clear on the following point: do several consecutive calls to Graphics.DrawMesh with a different MaterialPropertyBlock cause a new material to be instanced? Or can Unity understand that I'm modifying just a few parameters while using the same material and optimize that (in such a way that the big atlas is passed to the GPU just once)?

    Read the article

  • Looking for literature about graphics pipeline optimization

    - by zacharmarz
    I am looking for some books, articles or tutorials about graphics architecture and graphics pipeline optimizations. They shouldn't be too old (2008 or newer) - the newer, the better. I have found something in [Optimising the Graphics Pipeline, NVIDIA, Koji Ashida] - too old, [Real-Time Rendering, Akenine-Moller], [OpenGL Bindless Extensions, NVIDIA, Jeff Bolz], [Efficient Multifragment Effects on Graphics Processing Units, Louis Frederic Bavoil] and some internet discussions. But there is not too much information and I want to read more. It should cover application, driver, memory and shader-unit communication and data transfers; vertices and attributes; and also the pre- and post-T&L caches (if they still exist in today's architectures), etc. I don't need anything about textures, frame buffers and rasterization. It can also be about OpenGL (not about DirectX) and optimizing extensions (not old extensions like VBOs, but newer ones like vertex_buffer_unified_memory).

    Read the article

  • Euler Problem 1: Code Optimization / Alternatives [on hold]

    - by Sudhakar
    I am a newbie to the world of data structures and algorithms, learning from the ground up. This is my attempt to learn, so if the question is very plain/simple, please bear with me.

    Problem: Find the sum of all the multiples of 3 or 5 below 1000.

    Code I wrote:

        package problem1;

        public class Problem1 {

            public static void main(String[] args) {
                // ****************** Approach 1 ****************
                long start = System.currentTimeMillis();
                int total = 0;
                int toSubtract = 0;
                int limit = 1000;   // per the problem statement ("below 1000")

                // Complexity N/3
                for (int i = 3; i < limit; i = i + 3) {
                    total = total + i;
                }
                // Complexity N/5
                for (int i = 5; i < limit; i = i + 5) {
                    total = total + i;
                }
                // Complexity N/15: multiples of both 3 and 5 were added twice above
                for (int i = 15; i < limit; i = i + 15) {
                    toSubtract = toSubtract + i;
                }
                // 9N/15 = 0.6 N
                System.out.println(total - toSubtract);
                System.out.println("Completed in " + (System.currentTimeMillis() - start));

                // ****************** Approach 2 ****************
                total = 0;
                for (int i = 3; i < limit; i = i + 3) {
                    total = total + i;
                }
                for (int i = 5; i < limit; i = i + 5) {
                    if (0 != (i % 3)) {
                        total = total + i;
                    }
                }
                System.out.println(total);
            }
        }

    Question: 1 - Which of the two approaches above is better, and why? 2 - Are there any better alternatives?
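    On question 2, there is a closed-form alternative that needs no loop at all. By inclusion-exclusion, the multiples of 3 and of 5 each form an arithmetic series, and the multiples of 15 are subtracted once because they were counted twice (a standard identity, not something from the original post):

        \[
        T(k) = \frac{k(k+1)}{2}, \qquad
        S(N) = 3\,T\!\Big(\Big\lfloor\tfrac{N-1}{3}\Big\rfloor\Big)
             + 5\,T\!\Big(\Big\lfloor\tfrac{N-1}{5}\Big\rfloor\Big)
             - 15\,T\!\Big(\Big\lfloor\tfrac{N-1}{15}\Big\rfloor\Big)
        \]

    For N = 1000 this gives S = 3*T(333) + 5*T(199) - 15*T(66) = 166833 + 99500 - 33165 = 233168, in constant time regardless of the limit.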

    Read the article

  • Form development optimization

    - by Juan
    Like many web developers, I do forms all the time. I found myself doing the same thing every time: placing input fields, assigning a name to each, AJAXing the form, then writing the PHP, which involves assigning a PHP variable to each $_REQUEST['var'], escaping and validating the data, building the HTML and emailing the results. So I found that 70% of the work is duplicated, but I just can't duplicate a page and change the fields: I end up wasting more time reformatting, deleting and adding different fields than creating it from scratch. I started planning to program a "list of IDs to HTML+PHP" converter in which I'd input all the IDs and it would output the basic HTML and PHP. Then I thought: there must be thousands of developers who go through this; I'd be reinventing the wheel. So this is my question: I'm trying to find that wheel that somebody must have invented already. I found this: http://www.trirand.com/blog/jqform/ which does more or less what I'm looking for, but it's an expensive solution and it has too much functionality for what I'd be using it for. Which tools do you use to optimize repetitive tasks like this in HTML and PHP?

    Read the article

  • GPU optimization question: pre-computed or procedural?

    - by Jay
    Good morning, I'm learning shader programming and need some general direction. I want to add noise to my laser beam (like this). Which is the best way to handle it? I could pre-compute an image and pass it to the shader. I could then use the image to change the opacity and easily animate the smoke by changing the offset of the texture lookup. I could also generate the noise in the shader and use it the same way the texture would have been used. Is it generally better to avoid I/O to the graphics card, or the opposite? Thanks!

    Read the article

  • SQLite database "Leak found" exception in Android?

    - by androidbase
    Hi all, I am getting this "Leak found" exception from the database. My LogCat shows this:

        02-17 17:20:37.857: INFO/ActivityManager(58): Starting activity: Intent { cmp=com.example.brown/.Bru_Bears_Womens_View (has extras) }
        02-17 17:20:38.477: DEBUG/dalvikvm(434): GC freed 1086 objects / 63888 bytes in 119ms
        02-17 17:20:38.556: ERROR/Database(434): Leak found
        02-17 17:20:38.556: ERROR/Database(434): java.lang.IllegalStateException: /data/data/com.example.brown/databases/BRUNEWS_DB_01.db SQLiteDatabase created and never closed
        02-17 17:20:38.556: ERROR/Database(434): at android.database.sqlite.SQLiteDatabase.<init>(SQLiteDatabase.java:1694)
        02-17 17:20:38.556: ERROR/Database(434): at android.database.sqlite.SQLiteDatabase.openDatabase(SQLiteDatabase.java:738)
        02-17 17:20:38.556: ERROR/Database(434): at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:760)
        02-17 17:20:38.556: ERROR/Database(434): at android.database.sqlite.SQLiteDatabase.openOrCreateDatabase(SQLiteDatabase.java:753)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.ApplicationContext.openOrCreateDatabase(ApplicationContext.java:473)
        02-17 17:20:38.556: ERROR/Database(434): at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:193)
        02-17 17:20:38.556: ERROR/Database(434): at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:98)
        02-17 17:20:38.556: ERROR/Database(434): at com.example.brown.Brown_Splash.onCreate(Brown_Splash.java:52)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.ActivityThread.access$2200(ActivityThread.java:119)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
        02-17 17:20:38.556: ERROR/Database(434): at android.os.Handler.dispatchMessage(Handler.java:99)
        02-17 17:20:38.556: ERROR/Database(434): at android.os.Looper.loop(Looper.java:123)
        02-17 17:20:38.556: ERROR/Database(434): at android.app.ActivityThread.main(ActivityThread.java:4363)
        02-17 17:20:38.556: ERROR/Database(434): at java.lang.reflect.Method.invokeNative(Native Method)
        02-17 17:20:38.556: ERROR/Database(434): at java.lang.reflect.Method.invoke(Method.java:521)
        02-17 17:20:38.556: ERROR/Database(434): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        02-17 17:20:38.556: ERROR/Database(434): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        02-17 17:20:38.556: ERROR/Database(434): at dalvik.system.NativeStart.main(Native Method)

    How can I solve it? Thanks in advance...

    Read the article
