Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.

Page 32/307 | < Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Explicitly typing variables causes the compiler to think an instance of a builtin type doesn't have a property

    - by wallacoloo
    I narrowed the causes of an AS3 compiler error 1119 down to code that looks similar to this:

        var test_inst:Number = 2.953;
        trace(test_inst);
        trace(test_inst.constructor);

    I get the error "1119: Access of possibly undefined property constructor through a reference with static type Number." Now if I omit the variable's type, I don't get that error:

        var test_inst = 2.953;
        trace(test_inst);
        trace(test_inst.constructor);

    It produces the expected output:

        2.953
        [class Number]

    So what's the deal? I like explicitly typing variables, so is there any way to solve this error other than not providing the variable's type?
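
    If the goal is only to reach Object-level members such as constructor while keeping the declaration typed, one workaround (a sketch; it sidesteps the compile-time member check by widening the static type at the call site rather than at the declaration) is:

        var test_inst:Number = 2.953;
        trace(test_inst);                          // 2.953
        trace((test_inst as Object).constructor);  // the Object cast defers the member lookup to runtime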

    Read the article

  • Is the C# compiler not reporting all errors at once at each compile?

    - by Joan Venge
    When I am compiling this project, it shows 400+ errors in the Error List window; then I go to the error sites, fix some, and the number drops to, say, 120+ errors, and then after fixing some more, the next compile reports 400+ again. I can see that different files are showing up in the Error List window, so I am thinking the compiler aborts after a certain number of errors? If so, what's the reason for this? Is it not supposed to gather all the errors that are present in the project, even if there are over 10K?

    Read the article

  • Shall I optimize or let the compiler do that?

    - by Knowing me knowing you
    What is the preferred method of writing loops according to efficiency?

    Way a)

        /* here I'm hoping that the compiler will optimize this code and won't be
           calling size() every time it iterates through this loop */
        for (unsigned i = firstString.size(); i < anotherString.size(); ++i)
        {
            //do something
        }

    or maybe I should do it this way:

    Way b)

        unsigned first = firstString.size();
        unsigned second = anotherString.size();

    and now I can write:

        for (unsigned i = first; i < second; ++i)
        {
            //do something
        }

    The second way seems to me like the worse option for two reasons: scope pollution and verbosity, but it has the advantage of being sure that size() will be invoked once for each object. Looking forward to your answers.
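
    A third form worth considering (a sketch, assuming firstString and anotherString are std::string objects; unsigned matches the question's own choice of counter type) caches the bound inside the loop's own init-statement, so size() is evaluated exactly once and nothing leaks into the enclosing scope:

        #include <string>

        void example(const std::string& firstString, const std::string& anotherString)
        {
            // Both counters live only as long as the loop itself, so the
            // enclosing scope stays clean and size() is called once per string.
            for (unsigned i = firstString.size(), end = anotherString.size(); i < end; ++i)
            {
                // do something
            }
        }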

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times, during the process of creating a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that these queries comply with functional specs. Later, performance testing shows that they perform poorly, and only then do developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times: they went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca

    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. Corporate global discounts information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below.

    Starting point

    The initial query was:

        String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+
            "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+
            "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate";
        Map<String, Object> params = new HashMap<String, Object>(10);
        // Fill all parameters.
        params.put("iLevelId", xxxx);
        // Executing filter.
        Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter);

    With the small dataset used for development the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution was taking more than 4 seconds.

    First round of optimizations

    The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoided object deserialization while executing the query. Each index was created as follows:

        globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null);

    After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimization phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in ms of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was "sub-optimal": parameters associated with object attributes of high cardinality should appear at the beginning of the filter, or, more specifically, the attributes that filter out the highest number of records should be placed at the beginning. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values, it was more optimal to use a different query, changing the order of the attributes. Ultimately, we ended up with three different optimal variants of the query, each used in its relevant case:

        String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

        String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+
            "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+
            "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate";

    Using the appropriate query depending on the values of the brand and family parameters, the query execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. However, this did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built for the 8 attributes using EqualsFilter. The final query had an EqualsFilter for the multi-extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones being used for LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.

    The composite index created was as follows:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
        MultiExtractor me = new MultiExtractor(ve);
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        globalDiscountsCache.addIndex(me, false, null);

    And the final query was:

        ValueExtractor[] ve = {
            new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"),
            new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"),
            new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"),
            new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId")};
        MultiExtractor me = new MultiExtractor(ve);
        // Fill composite parameters.
        String SalesCompanyId = xxxx;
        ...
        AndFilter composite = new AndFilter(
            new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId,
                                               iFamilyId, iManufacturer, iBrand, SalesCompanyId)),
            new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate));
        AndFilter finalFilter = new AndFilter(composite,
            new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate));
        NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME);
        Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter);

    Using this composite index the query improved dramatically, and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. It should be noted that when using the composite index, the order of the attributes inside the ValueExtractor was not relevant.

    Read the article

  • org.apache.jasper.JasperException .... Unterminated <%@ page tag

    - by Ankur
    I get:

        org.apache.jasper.JasperException: /index.jsp(2,1) Unterminated <%@ page tag

    The page tags look like this:

        <%@ page import="java.util.*" %>
        <%@ page import="au.edu.uwa.peb.autoextractor.model.ScanResultItem"; %>

    This seems to indicate to me that a < does not have a corresponding closing tag... is this so? My IDE does not highlight any errors, so how can I find this unterminated tag? Is there a JSP validation tool that I can use, perhaps online? The stack trace looks like this:

        org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:40)
        org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:407)
        org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:132)
        org.apache.jasper.compiler.Parser.parseDirective(Parser.java:520)
        org.apache.jasper.compiler.Parser.parseTagFileDirectives(Parser.java:1784)
        org.apache.jasper.compiler.Parser.parse(Parser.java:127)
        org.apache.jasper.compiler.ParserController.doParse(ParserController.java:255)
        org.apache.jasper.compiler.ParserController.parseDirectives(ParserController.java:120)
        org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:165)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:332)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:312)
        org.apache.jasper.compiler.Compiler.compile(Compiler.java:299)
        org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:586)
        org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
        org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:342)
        org.apache.jasper.servlet.JspServlet.service(JspServlet.java:267)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

    Read the article

  • Optimize Images Using the ASP.NET Sprite and Image Optimization Framework

    The HTML markup of a web page includes the page's textual content, semantic and styling information, and, typically, several references to external resources. External resources are content that is part of a web page but separate from the page's markup - things like images, style sheets, script files, Flash videos, and so on. When a browser requests a web page, it starts by downloading its HTML. Next, it scans the downloaded HTML for external resources and starts downloading those. A page with many external resources usually takes longer to load completely than a page with fewer external resources, because there is overhead associated with downloading each external resource. For starters, each external resource requires the browser to make an HTTP request to retrieve it. What's more, browsers have a limit on how many HTTP requests they will make in parallel. For these reasons, a common technique for improving a page's load time is to consolidate external resources in a way that reduces the number of HTTP requests the browser must make to load the page in its entirety. This article examines the free and open-source ASP.NET Sprite and Image Optimization Framework, a project developed by Microsoft for improving a web page's load time by consolidating images into a sprite or by using inline, base-64 encoded images. In a nutshell, this framework makes it easy to implement practices that will improve the load time of a web page that displays several images. Read on to learn more! Read More >

    Read the article

  • IT Optimization Plan Pays Off For UK Retailer

    - by Brian Dayton
    I caught this article in ComputerworldUK yesterday. The headline talks about how UK-based supermarket chain Morrisons is increasing its IT spend... OK, sounds good. Even nicer that Oracle is a big part of that. But what caught my eye were three things: 1) Morrisons truly has a long-term strategy for IT - in this case, modernizing and optimizing how they use IT for business advantage. 2) Even in a tough economic climate, Morrisons views IT investments as contributing to and improving the bottom line. Specifically, "The investment in IT contributed to a 21 percent increase in Morrison's underlying profit..." 3) The phased, 3-year "Optimization Plan" took a holistic approach to their business - from CRM and Supply Chain systems to the underlying application infrastructure. On the infrastructure front, adopting a more flexible Service-Oriented Architecture enabled them to be more agile and adapt their business, and Identity Management helped with sometimes mundane (but costly) issues like lost passwords and documenting who has access to what. Things don't always turn out so rosy. And I know it was a long and difficult process... but it's nice to see a happy ending every once in a while.

    Read the article

  • Flash AS3 sidescrolling tiles optimization

    - by Galvanize
    I'm trying to make a sidescrolling game in Flash that will run on a low-performance laptop. While studying the subject from Tonypa, I saw that he builds a Bitmap by copying the BitmapData of each tile from the tile sheet and placing it on a bigger Bitmap the size of the screen. But when I came to think about how to scroll my map, I ran into some optimization doubts. I came up with two choices:

    Create a MovieClip and place in it a Bitmap instance for each tile shown on the screen, plus one extra row, then move them all. When a tile runs off the screen, I would move it to the end of the MovieClip and replace its BitmapData with the next row in my map.

    Use a Bitmap with copies of each tile in it (as shown in Tonypa's tutorial) but with one extra row; move the whole Bitmap, and when it comes time to replace rows, redraw the whole Bitmap and move it back to the origin position.

    The first idea is what a co-worker of mine suggested; the second one is my own, but neither of us has enough technical knowledge to be sure which technique would be optimal in performance. Can anyone help?
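
    A third option worth profiling (a rough sketch only; screenWidth, screenHeight, visibleRows, tileSize, tileSheet, firstHiddenColumn, and rectForTile are hypothetical names, not from the tutorial): keep one screen-sized BitmapData as a back buffer, shift it with scroll(), and copyPixels() only the strip of tiles that just came into view, so most pixels are never redrawn.

        import flash.display.BitmapData;
        import flash.geom.Point;
        import flash.geom.Rectangle;

        var buffer:BitmapData = new BitmapData(screenWidth, screenHeight, false, 0);

        function scrollRightBy(dx:int):void {
            // Shift the pixels already drawn; only the right-hand strip is stale.
            buffer.scroll(-dx, 0);
            // Redraw just the newly exposed column of tiles.
            for (var row:int = 0; row < visibleRows; row++) {
                var src:Rectangle = rectForTile(firstHiddenColumn, row); // hypothetical lookup into tileSheet
                buffer.copyPixels(tileSheet, src, new Point(screenWidth - dx, row * tileSize));
            }
        }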

    Read the article

  • Oracle Vanquisher: A Data Center Optimization Adventure to Debut at Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    Heat. Downtime. Site-wide outages. Legacy hardware. Security holes. These are all threats to your data center. What if you could vanquish them to simplify your IT and accelerate business innovation and growth? Find out how: play Oracle Vanquisher, a new data center optimization video game that will be showcased at Oracle OpenWorld (Hardware DEMOgrounds, Moscone South Hall). Playing Oracle Vanquisher, you'll be armed with a cool Oracle vacuum pack suit and a strategic IT roadmap. You'll thwart threats and optimize your data center to increase your company's stock price and boost its position. Of course, optimizing your data center is far more than a great game. For more information, visit the Oracle Optimized Data Center homepage or check out these targeted Oracle OpenWorld keynotes and sessions:

    Keynotes
    Shift Complexity, with Oracle President Mark Hurd
    Monday, October 1, 8:00 a.m. - 9:30 a.m., Moscone North, Hall D
    Oracle Cloud Infrastructure and Engineered Systems: Fast, Reliable, Virtualized, with Oracle Executive Vice President John Fowler
    Wednesday, October 3, 8:00 a.m. - 9:45 a.m., Moscone North, Hall D

    Sessions
    Oracle Linux, Oracle Optimized Solutions, Oracle Solaris, SPARC Servers, Storage, SPARC SuperCluster, Oracle VM, Server Virtualization, Desktop Virtualization

    Read the article

  • OpenWorld Session: BPM Analytics - Process Dashboards, BAM and Intelligent Optimization

    - by Ajay Khanna
    Blog contributed by Payal Srivastava, Oracle Product Management. The margin of error for business is shrinking dramatically. Business needs to accomplish more with less, i.e. minimal investment with quick ROI. Learn how you can leverage the Oracle BPM Suite and complementary technologies to create a robust analytics capability that provides visibility into operations for C-level executives and operational managers. We will talk about the BPM analytics options available today that will not only enhance visibility but also allow you to intelligently optimize the business process at design time as well as run time. The session will also share some exciting things on our roadmap. Come meet the Oracle team members from Product Development (Avinash Dabholkar, Eric Hsiao) and Product Management (Payal Srivastava) at the session. We would like to hear your questions/comments about our offering and roadmap. BPM Analytics: Process Dashboards, BAM and Intelligent Optimization, Moscone South 308, 10/3/12 @ 11:45 a.m. - CON 8598

    Read the article

  • Bundling in Visual Studio 2012 for web optimization

    - by Jalpesh P. Vadgama
    I have been writing a series of posts about Visual Studio 2012 features; this series describes the new features in Visual Studio 2012, and this post is also part of it. Nowadays web applications and sites provide more and more features, and because of that we include lots of JavaScript and CSS files in our web applications. So once the site loads, the browser fetches all the JavaScript and CSS files, and if you have lots of JavaScript files it takes a lot of time for the browser to request them all. The following image shows the situation.

    Here you can see a total of 25 files loaded, more than 1 MB in total size. As we need our web application/site to be very responsive and high-performing, this is a performance bottleneck for our site. In situations like this, the bundling feature of Visual Studio 2012 and ASP.NET 4.5 comes in very handy. With the help of this feature we can optimize and increase the performance of our application. To enable this feature in Visual Studio 2012, we just set debug="false" in the web.config of our application, like the following. Now once you enable this feature and run the application in the browser to watch your traffic, it will have fewer items, like the following. As you can see in the above image, there are only 8 items. So after enabling bundling, all the js and css files are automatically combined into one request. Isn't that a cool feature? This feature is surely going to have a great impact on performance. Hope you like it. Stay tuned for more. Till then, happy programming!!
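
    For reference, the same pipeline can also be wired up explicitly through the System.Web.Optimization API rather than relying on the debug switch alone. A minimal sketch (the bundle names and file paths here are placeholders, not from the original post):

        using System.Web.Optimization;

        public class BundleConfig
        {
            // Called from Application_Start:
            //     BundleConfig.RegisterBundles(BundleTable.Bundles);
            public static void RegisterBundles(BundleCollection bundles)
            {
                // Combine (and minify) several scripts into one virtual URL.
                bundles.Add(new ScriptBundle("~/bundles/site").Include(
                    "~/Scripts/jquery-{version}.js",
                    "~/Scripts/site.js"));

                // Same idea for stylesheets.
                bundles.Add(new StyleBundle("~/Content/css").Include(
                    "~/Content/site.css"));

                // Forces bundling/minification even when compilation debug="true".
                BundleTable.EnableOptimizations = true;
            }
        }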

    Read the article

  • Can a conforming C# compiler optimize away a local (but unused) variable if it is the only strong reference to an object?

    - by stakx
    The title says it all, but let me explain:

        void Case_1()
        {
            var weakRef = new WeakReference(new object());
            GC.Collect(); // <-- doesn't have to be an explicit call; just assume that
                          //     garbage collection would occur at this point.
            if (weakRef.IsAlive)
                ...
        }

    In this code example, I obviously have to plan for the possibility that the new'ed object is reclaimed by the garbage collector; therefore the if statement. (Note that I'm using weakRef for the sole purpose of checking if the new'ed object is still around.)

        void Case_2()
        {
            var unusedLocalVar = new object();
            var weakRef = new WeakReference(unusedLocalVar);
            GC.Collect(); // <-- doesn't have to be an explicit call; just assume that
                          //     garbage collection would occur at this point.
            Debug.Assert(weakRef.IsAlive);
        }

    The main change in this code example from the previous one is that the new'ed object is strongly referenced by a local variable (unusedLocalVar). However, this variable is never used again after the weak reference (weakRef) has been created. Question: Is a conforming C# compiler allowed to optimize the first two lines of Case_2 into those of Case_1 if it sees that unusedLocalVar is only used in one place, namely as an argument to the WeakReference constructor? That is, is there any possibility that the assertion in Case_2 could ever fail?
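
    For what it's worth, the standard way to take the question off the table (a sketch showing the mitigation; it does not settle what a given compiler/JIT is allowed to do) is GC.KeepAlive, which guarantees the object stays reachable up to the call:

        void Case_3()
        {
            var localVar = new object();
            var weakRef = new WeakReference(localVar);
            GC.Collect();
            // localVar is guaranteed to be considered reachable until the
            // GC.KeepAlive call below executes, so the weak reference cannot
            // have been cleared at this point.
            Debug.Assert(weakRef.IsAlive);
            GC.KeepAlive(localVar);
        }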

    Read the article

  • Java Compiler Creation Help..Please

    - by Brian
    I need some help with my code here... What we are trying to do is make a compiler that will read a file containing machine code and convert it to 100 lines of 4 bits. Example: this code is the machine code being converted to opcode and operands. I need some help please.. thanks

        799 798 198 499 1008 1108 899 909 898 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

    Everything compiles, but when I go and run my Test.java I get the following output:

        Exception in thread "main" java.util.NoSuchElementException: No line found
            at java.util.Scanner.nextLine(Scanner.java:1516)
            at Compiler.FirstPass(Compiler.java:22)
            at Compiler.compile(Compiler.java:11)
            at Test.main(Test.java:5)

    Here is my class Compiler:

        import java.io.*;
        import java.io.DataOutputStream;
        import java.util.NoSuchElementException;
        import java.util.Scanner;

        class Compiler {
            private int lc = 0;
            private int dc = 99;

            public void compile(String filename) {
                SymbolList symbolTable = FirstPass(filename);
                SecondPass(symbolTable, filename);
            }

            public SymbolList FirstPass(String filename) {
                File file = new File(filename);
                SymbolList temp = new SymbolList();
                int dc = 99;
                int lc = 0;
                try {
                    Scanner scan = new Scanner(file);
                    String line = scan.nextLine();
                    String[] linearray = line.split(" ");
                    while (line != null) {
                        if (!linearray[0].equals("REM")) {
                            if (!this.isInstruction(linearray[0])) {
                                linearray[0] = removeColon(linearray[0]);
                                if (this.isInstruction(linearray[1])) {
                                    temp.add(new Symbol(linearray[0], lc, null));
                                    lc++;
                                } else {
                                    temp.add(new Symbol(linearray[0], dc, Integer.valueOf((linearray[2]))));
                                    dc--;
                                }
                            } else {
                                if (!linearray[0].equals("REM"))
                                    lc++;
                            }
                        }
                        try {
                            line = scan.nextLine();
                        } catch (NoSuchElementException e) {
                            line = null;
                            break;
                        }
                        linearray = line.split(" ");
                    }
                } catch (FileNotFoundException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                return temp;
            }

            public String makeFilename(String filename) {
                return filename + ".ex";
            }

            public String removeColon(String str) {
                if (str.charAt(str.length() - 1) == ':') {
                    return str.substring(0, str.length() - 1);
                } else {
                    return str;
                }
            }

            public void SecondPass(SymbolList symbolTable, String filename) {
                try {
                    int dc = 99;
                    // Open file for reading
                    File file = new File(filename);
                    Scanner scan = new Scanner(file);
                    // Make filename of new executable file
                    String newfile = makeFilename(filename);
                    // Open Output Stream for writing new file.
                    FileOutputStream os = new FileOutputStream(filename);
                    DataOutputStream dos = new DataOutputStream(os);
                    // Read first line. Split line by spaces into linearray.
                    String line = scan.nextLine();
                    String[] linearray = line.split(" ");
                    while (scan.hasNextLine()) {
                        if (!linearray[0].equals("REM")) {
                            int inst = 0, opcode, loc;
                            if (isInstruction(linearray[0])) {
                                opcode = getOpcode(linearray[0]);
                                loc = symbolTable.searchName(linearray[1]).getMemloc();
                                inst = (opcode * 100) + loc;
                            } else if (!isInstruction(linearray[0])) {
                                if (isInstruction(linearray[1])) {
                                    opcode = getOpcode(linearray[1]);
                                    if (linearray[1].equals("STOP"))
                                        inst = 0000;
                                    else {
                                        loc = symbolTable.searchName(linearray[2]).getMemloc();
                                        inst = (opcode * 100) + loc;
                                    }
                                }
                                if (linearray[1].equals("DC"))
                                    dc--;
                            }
                            System.out.println(inst);
                            dos.writeInt(inst);
                            linearray = line.split(" ");
                        }
                        if (scan.hasNextLine()) {
                            line = scan.nextLine();
                        }
                    }
                    scan.close();
                    for (int i = lc; i <= dc; i++) {
                        dos.writeInt(0);
                    }
                    for (int i = dc + 1; i < 100; i++) {
                        dos.writeInt(symbolTable.searchLocation(i).getValue());
                        if (i != 99)
                            dos.writeInt(0);
                    }
                    dos.close();
                    os.close();
                } catch (Exception e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            public int getOpcode(String inst) {
                int toreturn = -1;
                if (isInstruction(inst)) {
                    if (inst.equals("STOP")) toreturn = 0;
                    if (inst.equals("LD")) toreturn = 1;
                    if (inst.equals("STO")) toreturn = 2;
                    if (inst.equals("ADD")) toreturn = 3;
                    if (inst.equals("SUB")) toreturn = 4;
                    if (inst.equals("MPY")) toreturn = 5;
                    if (inst.equals("DIV")) toreturn = 6;
                    if (inst.equals("IN")) toreturn = 7;
                    if (inst.equals("OUT")) toreturn = 8;
                    if (inst.equals("B")) toreturn = 9;
                    if (inst.equals("BGTR")) toreturn = 10;
                    if (inst.equals("BZ")) toreturn = 11;
                    return toreturn;
                } else {
                    return -1;
                }
            }

            public boolean isInstruction(String totest) {
                boolean toreturn = false;
                String[] labels = {"IN", "LD", "SUB", "BGTR", "BZ", "OUT", "B", "STO", "STOP", "ADD", "MTY", "DIV"};
                for (int i = 0; i < 12; i++) {
                    if (totest.equals(labels[i]))
                        toreturn = true;
                }
                return toreturn;
            }
        }

    And here is my class Computer:

        import java.io.*;
        import java.util.NoSuchElementException;
        import java.util.Scanner;

        class Computer {
            private Cpu cpu;
            private Input in;
            private OutPut out;
            private Memory mem;

            public Computer() throws IOException {
                Memory mem = new Memory(100);
                Input in = new Input();
                OutPut out = new OutPut();
                Cpu cpu = new Cpu();
                System.out.println(in.getInt());
            }

            public void run() throws IOException {
                cpu.reset();
                cpu.setMDR(mem.read(cpu.getMAR()));
                cpu.fetch2();
                while (!cpu.stop()) {
                    cpu.decode();
                    if (cpu.OutFlag())
                        OutPut.display(mem.read(cpu.getMAR()));
                    if (cpu.InFlag())
                        mem.write(cpu.getMDR(), in.getInt());
                    if (cpu.StoreFlag()) {
                        mem.write(cpu.getMAR(), in.getInt());
                        cpu.getMDR();
                    } else {
                        cpu.setMDR(mem.read(cpu.getMAR()));
                        cpu.execute();
                        cpu.fetch();
                        cpu.setMDR(mem.read(cpu.getMAR()));
                        cpu.fetch2();
                    }
                }
            }

            public void load() {
                mem.loadMemory();
            }
        }

    Here is my Memory class:

        import java.io.*;
        import java.util.NoSuchElementException;
        import java.util.Scanner;

        class Memory {
            private MemEl[] memArray;
            private int size;
            private int[] mem;

            public Memory(int s) {
                size = s;
                memArray = new MemEl[s];
                for (int i = 0; i < s; i++)
                    memArray[i] = new MemEl();
            }

            public void write(int loc, int val) {
                if (loc >= 0 && loc < size)
                    memArray[loc].write(val);
                else
                    System.out.println("Index Not in Domain");
            }

            public int read(int loc) {
                return memArray[loc].read();
            }

            public void dump() {
                for (int i = 0; i < size; i++)
                    if (i % 1 == 0)
                        System.out.println(memArray[i].read());
                    else
                        System.out.print(memArray[i].read());
            }

            public void writeTo(int location, int value) {
                mem[location] = value;
            }

            public int readFrom(int location) {
                return mem[location];
            }

            public int size() {
                return mem.length;
            }

            public void loadMemory() {
                this.write(0, 799);
                this.write(1, 798);
                this.write(2, 198);
                this.write(3, 499);
                this.write(4, 1008);
                this.write(5, 1108);
                this.write(6, 899);
                this.write(7, 909);
                this.write(8, 898);
                this.write(9, 0000);
            }

            public void loadFromFile(String filename) {
                try {
                    FileReader fr = new FileReader(filename);
                    BufferedReader br = new BufferedReader(fr);
                    String read = null;
                    int towrite = 0;
                    int l = 0;
                    do {
                        try {
                            read = br.readLine();
                            towrite = Integer.parseInt(read);
                        } catch (Exception e) {
                        }
                        this.write(l, towrite);
                        l++;
                    } while (l < 100);
                } catch (Exception e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    Here is my Test class:

        public class Test {
            public static void main(String[] args) throws java.io.IOException {
                Compiler compiler = new Compiler();
                compiler.compile("program.txt");
            }
        }

    Read the article

  • How can I make the C# compiler infer these type parameters automatically?

    - by John Feminella
    I have some code that looks like the following. First I have some domain classes and some special comparators for them:

        public class Fruit
        {
            public int Calories { get; set; }
            public string Name { get; set; }
        }

        public class FruitEqualityComparer : IEqualityComparer<Fruit>
        {
            // ...
        }

        public class BasketEqualityComparer : IEqualityComparer<IEnumerable<Fruit>>
        {
            // ...
        }

    Next, I have a helper class called ConstraintChecker. It has a simple BaseEquals method that makes sure some simple base cases are considered:

        public static class ConstraintChecker
        {
            public static bool BaseEquals<T>(T lhs, T rhs)
            {
                // ReferenceEquals is used here because == is not available on an
                // unconstrained type parameter.
                bool sameObject = ReferenceEquals(lhs, rhs);
                bool leftNull = lhs == null;
                bool rightNull = rhs == null;
                return sameObject && !leftNull && !rightNull;
            }

    There's also a SemanticEquals method, which is just a BaseEquals check plus a comparator function that you specify:

            public static bool SemanticEquals<T>(T lhs, T rhs, Func<T, T, bool> f)
            {
                return BaseEquals(lhs, rhs) && f(lhs, rhs);
            }

    And finally there's a SemanticSequenceEquals method, which accepts two IEnumerable<T> instances to compare and an IEqualityComparer instance that will get called on each pair of elements in the list via Enumerable.SequenceEqual:

            public static bool SemanticSequenceEquals<T, U, V>(U lhs, U rhs, V comparator)
                where U : IEnumerable<T>
                where V : IEqualityComparer<T>
            {
                return SemanticEquals(lhs, rhs, (l, r) => lhs.SequenceEqual(rhs, comparator));
            }
        } // end of ConstraintChecker

    The point of SemanticSequenceEquals is that you don't have to define two comparators whenever you want to compare both IEnumerable<T> and T instances; now you can just specify an IEqualityComparer<T> and it will also handle lists when you invoke SemanticSequenceEquals. So I could get rid of the BasketEqualityComparer class, which would be nice. But there's a problem: the C# compiler can't figure out the types involved when you invoke SemanticSequenceEquals:

        return ConstraintChecker.SemanticSequenceEquals(lhs, rhs, new FruitEqualityComparer());

    If I specify them explicitly, it works:

        return ConstraintChecker.SemanticSequenceEquals<
            Fruit,
            IEnumerable<Fruit>,
            IEqualityComparer<Fruit>
        >(lhs, rhs, new FruitEqualityComparer());

    What can I change here so that I don't have to write the type parameters explicitly?
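
    One way to make inference succeed (a sketch of a common fix, not necessarily the only one): C# infers type arguments from the arguments' types, never from generic constraints, so declaring the parameters with the concrete generic interfaces removes the extra U and V type parameters that block inference:

        // Requires using System.Collections.Generic and System.Linq.
        public static bool SemanticSequenceEquals<T>(IEnumerable<T> lhs,
                                                     IEnumerable<T> rhs,
                                                     IEqualityComparer<T> comparator)
        {
            return SemanticEquals(lhs, rhs, (l, r) => l.SequenceEqual(r, comparator));
        }

        // T is now inferred as Fruit from the arguments:
        // ConstraintChecker.SemanticSequenceEquals(lhs, rhs, new FruitEqualityComparer());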

    Read the article

  • When is a performance gain significant enough to implement that optimization?

    - by Zwei steinen
    Hi, following the textbook, I measure performance whenever I try optimizing my code. Sometimes, however, the performance gain is rather small and I can't decide decisively whether I should implement the optimization. For example, when a fix shortens an average response time from 100 ms to 90 ms under some conditions, should I implement it? What if it shortens 200 ms to 190 ms? How many conditions should I try before I can conclude that it will be beneficial overall? I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb that I should follow? Are there any guidelines/best practices?

    Read the article

  • What do I need to know to design a language and write an interpreter for it?

    - by alFReD NSH
    I know this question has been asked before, and there are even thousands of books and articles about it. But the problem is that there are too many, and I don't know whether they are good enough. I have to design a language and write an interpreter for it. The base language is JavaScript (using Node.js), but it's OK if the compiler is written in another language that I can use from Node. I did some research on compiler compilers in JS; there are jison (a Bison implementation in JS), waxeye, and peg.js. I decided to give jison a try, due to its popularity and its use by CoffeeScript, so it should be able to cover my language too. The grammar definition syntax is similar to Bison's. But when I tried to read the Bison manual, it seemed very hard for me to understand. And I think that's because I don't know a lot of things about what I'm doing. For example, I don't know what formal language theory is. I am experienced in JavaScript (I'm more talented in JS than most average programmers), and I also know basic C and C++ (not much experience, but I can write working code for basic things). I haven't had any formal education, so I may not be familiar with some software engineering and computer science principles, though every day I try to grasp a lot of articles and improve. So I'm asking if you know any good book or article that can help me. Please also write why the resource you're suggesting is good.

    --update--

    The language I'm trying to create is not really complicated. All it has is expressions (with or without units), comparisons, and logical operators. There are no functions, loops, ... The goal is to create a language that non-programmers can easily learn, and to write customized validations and calculations.

    Read the article

  • C#, can you think of any more optimizations in this code?

    - by Sha Le
    Hi All: Look at the following code:

        StringBuilder row = TemplateManager.GetRow("xyz"); // no control over this method
        StringBuilder rows = new StringBuilder();
        foreach (Record r in records)
        {
            StringBuilder thisRow = new StringBuilder(row.ToString());
            thisRow.Replace("%firstName%", r.FirstName)
                   .Replace("%lastName%", r.LastName)
                   // all other replacements go here
                   .Replace("%LastModifiedDate%", r.LastModifiedDate);
            // finally append row to rows
            rows.Append(thisRow);
        }

    Currently there are 3 StringBuilders, and row.ToString() is inside the loop. Is there any room for further optimization here? Thanks a bunch. :-)
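
    One candidate squeeze (a sketch; whether it wins depends on measurement): row never changes inside the loop, so its ToString() can be hoisted out, and the per-record StringBuilder can be dropped in favour of string.Replace on the cached template:

        StringBuilder row = TemplateManager.GetRow("xyz"); // no control over this method
        string template = row.ToString();                  // invariant: computed once, not per record
        StringBuilder rows = new StringBuilder();
        foreach (Record r in records)
        {
            rows.Append(template
                .Replace("%firstName%", r.FirstName)
                .Replace("%lastName%", r.LastName)
                // all other replacements go here
                .Replace("%LastModifiedDate%", r.LastModifiedDate));
        }

    Note that string.Replace allocates a new intermediate string per call, so with many placeholders the per-record StringBuilder may still win; measure both variants before committing.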

    Read the article

  • SQL SERVER – Finding Memory Pressure – External and Internal

    - by pinaldave
    The following query provides details of external and internal memory pressure. It returns how much of the existing memory is assigned to each kind of memory type.

        SELECT TYPE,
               SUM(single_pages_kb) InternalPressure,
               SUM(multi_pages_kb) ExternalPressure
        FROM sys.dm_os_memory_clerks
        GROUP BY TYPE
        ORDER BY SUM(single_pages_kb) DESC, SUM(multi_pages_kb) DESC
        GO

    What is your method of finding memory pressure? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – When are Statistics Updated – What triggers Statistics to Update

    - by pinaldave
    If you are a SQL Server consultant/trainer involved with performance tuning and query optimization, I am sure you have faced the following questions many times: When are statistics updated? What is the interval of statistics updates? What is the algorithm behind updating statistics? These are the puzzling questions, and more. I searched the Internet as well as many official MS documents in order to find answers. All of them provided almost the same algorithm; however, in many places I have seen a bit of variation in the algorithm as well. I have finally compiled the list of various algorithms and decided to share what was the most common "factor" in all of them. I would like to ask for your suggestions as to whether the following details on when statistics are updated are accurate or not. I will update this blog post with accurate information after receiving your ideas. The answer I have found here is when statistics expire, not when they are automatically updated. I need your help here to answer when they are updated.

    Permanent table:
    If the table has no rows, statistics are updated when there is a single change in the table.
    If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table.
    If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.

    Temporary table:
    If the table has no rows, statistics are updated when there is a single change in the table.
    If the number of rows in the table is less than 6, statistics are updated for every 6 changes in the table.
    If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table.
    If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.

    Table variable:
    There are no statistics for table variables.

    If you want to read further about statistics, I suggest that you read the white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Let me know your opinions about statistics, as well as if there is any update in the above algorithm. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
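
    To put the 500 + 20% rule above into numbers: a permanent table with 10,000 rows would need 500 + (20% of 10,000) = 2,500 row modifications before its statistics are considered outdated under this algorithm.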

    Read the article

  • SQL SERVER – Checklist for Analyzing Slow-Running Queries

    - by pinaldave
    I am currently working on upgrading my class Microsoft SQL Server 2005/2008 Query Optimization and Performance Tuning with additional details and more interesting examples. While working on the slide deck I realized that I need one solid slide with a checklist for analyzing slow-running queries. A quick search on my saved [...]

    Read the article

  • Working with Temporal Data in SQL Server

    - by Dejan Sarka
    My third Pluralsight course, Working with Temporal Data in SQL Server, is published. I am really proud of the second part of the course, where I discuss optimization of temporal queries. This was a nearly impossible task for decades; the first solutions appeared only recently. I present six solutions altogether (and one more that is not a solution), and I invented four of them. http://pluralsight.com/training/Courses/TableOfContents/working-with-temporal-data-sql-server

    Read the article

  • Research topics for starting and optimizing a high-traffic website

    - by user745434
    I bury a good deal of my ideas for fear that I don't know enough about scaling web applications and high-traffic websites. That said, I'd like to know of any general topics to research in order to ensure that your web app doesn't break / slow down when you start getting to Twitter-level traffic. I'm looking for research topics with, preferably, additional resources. For example: Make sure you optimize SQL queries (see High Performance MySQL Optimization)

    Read the article

  • Extreme Optimization – Numerical Algorithm Support

    - by JoshReuben
    Function Delegates

    Many calculations involve the repeated evaluation of one or more user-supplied functions, e.g. numerical integration. The EO MathLib provides delegate types for common function signatures, and the FunctionFactory class can generate new delegates from existing ones.

    RealFunction delegate - takes one Double parameter; can encapsulate most of the static methods of the System.Math class, as well as the classes in the Extreme.Mathematics.SpecialFunctions namespace:

        var sin = new RealFunction(Math.Sin);
        var result = sin(1);

    BivariateRealFunction delegate - takes two Double parameters:

        var atan2 = new BivariateRealFunction(Math.Atan2);
        var result = atan2(1, 2);

    TrivariateRealFunction delegate - represents a function that takes three Double arguments.

    ParameterizedRealFunction delegate - represents a function taking one Integer and one Double argument that returns a real number. The Pow method implements such a function, but the arguments need order re-arrangement:

        static double Power(int exponent, double x)
        {
            return ElementaryFunctions.Pow(x, exponent);
        }
        ...
        var power = new ParameterizedRealFunction(Power);
        var result = power(6, 3.2);

    ComplexFunction delegate - represents a function that takes an Extreme.Mathematics.DoubleComplex argument and also returns a complex number.

    MultivariateRealFunction delegate - represents a function that takes an Extreme.Mathematics.LinearAlgebra.Vector argument and returns a real number.

    MultivariateVectorFunction delegate - represents a function that takes a Vector argument and returns a Vector.

    FastMultivariateVectorFunction delegate - represents a function that takes an input Vector argument and an output Matrix argument, avoiding object construction.

    The FunctionFactory class

    RealFromBivariateRealFunction and RealFromParameterizedRealFunction helper methods - transform a BivariateRealFunction or a ParameterizedRealFunction into a RealFunction delegate by fixing one of the arguments and treating the result as a new function of a single argument:

        var tenthPower = FunctionFactory.RealFromParameterizedRealFunction(power, 10);
        var result = tenthPower(x);

    Note: there is no direct way to do this programmatically in C#. In F# you have partial application of functions, where you supply a subset of the arguments (as a travelling closure) that the function expects. When you omit arguments, F# generates a new function that holds onto/remembers the arguments you passed in and "waits" for the other parameters to be supplied:

        let sumVals x y = x + y

        let sumX = sumVals 10
        // Note: no 2nd param supplied.
        // sumX is a new function generated from partially applied sumVals,
        // i.e. "sumX is a partial application of sumVals."

        let sum = sumX 20
        // Invokes sumX, passing in the expected int (parameter y from the original).

        val sumVals : int -> int -> int
        val sumX : (int -> int)
        val sum : int = 30

    RealFunctionsToVectorFunction and RealFunctionsToFastVectorFunction helper methods - combine an array of delegates returning a real number or a vector into vector or matrix functions. The resulting vector function returns a vector whose components are the function values of the delegates in the array:

        var funcVector = FunctionFactory.RealFunctionsToVectorFunction(
            new MultivariateRealFunction(myFunc1),
            new MultivariateRealFunction(myFunc2));

    The IterativeAlgorithm<T> abstract base class

    Iterative algorithms are common in numerical computing: a method is executed repeatedly until a certain condition is reached, approximating the result of a calculation with increasing accuracy until a certain threshold is reached. If the desired accuracy is achieved, the algorithm is said to converge. Many classes in the Extreme.Mathematics.EquationSolvers and Extreme.Mathematics.Optimization namespaces derive from this base class, as does the ManagedIterativeAlgorithm class, which contains a driver method that manages the iteration process.

    The ConvergenceTest abstract base class

    This class is used to specify algorithm termination, convergence and results. It calculates an estimate for the error and signals termination of the algorithm when the error is below a specified tolerance.

    Termination criteria - specify the success condition as the difference between some quantity and its actual value being within a certain tolerance. There are two ways: absolute error is the difference between the result and the actual value; relative error is the difference between the result and the actual value, relative to the size of the result.

    Tolerance property - specifies the trade-off between accuracy and execution time. The lower the tolerance, the longer it will take for the algorithm to obtain a result within that tolerance. Most algorithms in the EO NumLib have a default value of MachineConstants.SqrtEpsilon, which gives slightly less than 8 digits of accuracy.

    ConvergenceCriterion property - specifies under what condition the algorithm is assumed to converge, using the ConvergenceCriterion enum: WithinAbsoluteTolerance / WithinRelativeTolerance / WithinAnyTolerance / NumberOfIterations.

    Active property - selectively ignore certain convergence tests.

    Error property - returns the estimated error after a run.

    MaxIterations / MaxEvaluations properties - other termination criteria: if the algorithm cannot achieve the desired accuracy, it still has to end, according to an absolute boundary.

    Status property - indicates how the algorithm terminated, using the AlgorithmStatus enum values: NoResult / Busy / Converged (ended normally; the desired accuracy has been achieved) / IterationLimitExceeded / EvaluationLimitExceeded / RoundOffError / BadFunction / Divergent / ConvergedToFalseSolution. After the iteration terminates, the Status should be inspected to verify that the algorithm terminated normally. Alternatively, you can set ThrowExceptionOnFailure to true.

    Result property - returns the result of the algorithm. This property contains the best available estimate, even if the desired accuracy was not obtained.

    IterationsNeeded / EvaluationsNeeded properties - return the number of iterations required to obtain the result and the number of function evaluations.

    Concrete types of convergence test classes

    SimpleConvergenceTest class - tests if a value is close to zero or very small compared to another value.

    VectorConvergenceTest class - tests convergence of vectors. This class has two additional properties. The Norm property specifies which norm is to be used when calculating the size of the vector, using the VectorConvergenceNorm enum values: EuclidianNorm / Maximum / SumOfAbsoluteValues. The ErrorMeasure property specifies how the error is to be measured, using the VectorConvergenceErrorMeasure enum values: Norm / Componentwise.

    ConvergenceTestCollection class - represents a combination of tests. The Quantifier property is a ConvergenceTestQuantifier enum that specifies how the tests in the collection are to be combined: Any / All.

    The AlgorithmHelper class inherits from IterativeAlgorithm<T> and exposes two methods for convergence testing. The IsValueWithinTolerance<T> method determines whether a value is close to another value to within an algorithm's requested tolerance. The IsIntervalWithinTolerance<T> method determines whether an interval is within an algorithm's requested tolerance.

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >