Search Results

Search found 23939 results on 958 pages for 'block size'.

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or to load a table in Oracle using the same SQL access: INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints: ALTER SESSION FORCE PARALLEL DML PARALLEL 8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively: /*+ parallel(my_oracle_table,8) */ /*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and only afterwards figure out how to tune the OSCH external table definition, but you should start backwards: focus first on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer, typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use case for running OSCH with a maximum DOP is when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with InfiniBand), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let’s assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
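
    As a worked example of the arithmetic in Rules 1-3, here is a small sketch (Python, purely illustrative; the helper names are not part of OSCH):

        def max_dop(instances, cpus_per_instance, cores_per_cpu):
            # The RAC example above: 2 instances * 2 CPUs * 32 cores = 128
            return instances * cpus_per_instance * cores_per_cpu

        def candidate_location_file_counts(dop, multiples=(1, 2, 4)):
            # Rule 3: a small multiple of the DOP keeps all PQ slaves busy
            return [dop * m for m in multiples]

        print(max_dop(2, 2, 32))                  # 128
        print(candidate_location_file_counts(8))  # [8, 16, 32]
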
    Determining the Number of HDFS Files

    Let’s start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be roughly the same size.

    In our running example, the DOP is 8, which means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB file into one location file, and put each of the 100GB files into the 7 remaining location files. The load balance skew is 9 to 1: one PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has more and smaller files to deal with, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with InfiniBand), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break Rules 3 or 4 above? Nothing draconian; everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling load operations on a Hadoop cluster and an Oracle database. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of the various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.
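
    To make the greedy balancing described above concrete, here is a minimal sketch of that algorithm (Python, purely illustrative; this is the idea as described, not OSCH's actual implementation):

        import heapq

        def balance(hdfs_file_sizes, n_location_files):
            # Greedy: repeatedly give the biggest remaining file to the
            # location file with the smallest total workload so far.
            heap = [(0, i, []) for i in range(n_location_files)]
            heapq.heapify(heap)
            for size in sorted(hdfs_file_sizes, reverse=True):
                total, i, files = heapq.heappop(heap)
                files.append(size)
                heapq.heappush(heap, (total + size, i, files))
            return heap

        # The skewed example above: one 900GB file and seven 100GB files
        # forces a 9-to-1 imbalance no matter how the tool deals them out.
        for total, i, files in sorted(balance([900] + [100] * 7, 8)):
            print("location file %d: %d GB" % (i, total))
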

    Read the article

  • Delphi Exception handling problem with multiple Exception handling blocks

    - by Robert Oschler
    I'm using Delphi Pro 6 on Windows XP with FastMM 4.92 and the JEDI JVCL 3.0. Given the code below, I'm having the following problem: only the first exception handling block gets a valid instance of E. The other blocks match properly with the class of the Exception being raised, but E is unassigned (nil). For example, given the current order of the exception handling blocks, when I raise an E1 the block for E1 matches and E is a valid object instance. However, if I try to raise an E2, that block does match, but E is unassigned (nil). If I move the E2 catching block to the top of the ordering and raise an E1, then when the E1 block matches E is now unassigned. With this new ordering, if I raise an E2, E is properly assigned, whereas it wasn't when the E2 block was not the first block in the ordering. Note I tried this case with a bare-bones project consisting of just a single Delphi form. Am I doing something really silly here, or is something really wrong? Thanks, Robert type E1 = class(EAbort) end; E2 = class(EAbort) end; procedure TForm1.Button1Click(Sender: TObject); begin try raise E1.Create('hello'); except On E: E1 do begin OutputDebugString('E1'); end; On E: E2 do begin OutputDebugString('E2'); end; On E: Exception do begin OutputDebugString('E(all)'); end; end; // try() end;

    Read the article

  • How do DP and CC change in Piet?

    - by Paul Butcher
    According to the specification, black colour blocks and the edges of the program restrict program flow. If the Piet interpreter attempts to move into a black block or off an edge, it is stopped and the CC is toggled. The interpreter then attempts to move from its current block again. If it fails a second time, the DP is moved clockwise one step. These attempts are repeated, with the CC and DP being changed between alternate attempts. If after eight attempts the interpreter cannot leave its current colour block, there is no way out and the program terminates. Unless I'm reading it incorrectly, this is at odds with the behaviour of the Fibonacci sequence example here: http://www.dangermouse.net/esoteric/piet/fibbig1.gif (from: http://www.dangermouse.net/esoteric/piet/samples.html) Specifically, why does the DP turn left at (0,3) ((0,0) being (top, left)) when it hits the left edge? At this point, both DP and CC are LEFT, so, by my reading, the sequence should then be:
    1. Attempt (and fail) to leave the block by going off the edge at (0,4);
    2. Toggle CC to RIGHT;
    3. Attempt (and fail) to leave the block by going off the edge at (0,2);
    4. Rotate DP to UP;
    5. Attempt (and succeed) to leave the block at (1,2) by entering the white block at (1,1).
    The behaviour indicated by the trace seems to be that DP gets rotated all the way, leaving CC at LEFT. What have I misunderstood?
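
    For reference, the attempt sequence as the quoted specification reads can be sketched as a small loop (Python, purely illustrative; blocked() is a hypothetical test for "moving would hit black or an edge"):

        def find_exit(dp, cc, blocked):
            # dp: 0-3 for right, down, left, up; cc: -1 (left) or +1 (right).
            for attempt in range(8):
                if not blocked(dp, cc):
                    return dp, cc          # found a way out of the block
                if attempt % 2 == 0:
                    cc = -cc               # first failure: toggle CC
                else:
                    dp = (dp + 1) % 4      # second failure: rotate DP clockwise
            return None                    # trapped after 8 attempts: terminate
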

    Read the article

  • What is weird about wrapping setjmp and longjmp?

    - by Max
    Hello. I am using setjmp and longjmp for the first time, and I ran across an issue that comes about when I wrap setjmp and longjmp. I boiled the code down to the following example: #include <stdio.h> #include <setjmp.h> jmp_buf jb; int mywrap_save() { int i = setjmp(jb); return i; } int mywrap_call() { longjmp(jb, 1); printf("this shouldn't appear\n"); } void example_wrap() { if (mywrap_save() == 0){ printf("wrap: try block\n"); mywrap_call(); } else { printf("wrap: catch block\n"); } } void example_non_wrap() { if (setjmp(jb) == 0){ printf("non_wrap: try block\n"); longjmp(jb, 1); } else { printf("non_wrap: catch block\n"); } } int main() { example_wrap(); example_non_wrap(); } Initially I thought example_wrap() and example_non_wrap() would behave the same. However, the result of running the program (GCC 4.4, Linux): wrap: try block non_wrap: try block non_wrap: catch block If I trace the program in gdb, I see that even though mywrap_save() returns 1, the else branch after returning is oddly ignored. Can anyone explain what is going on?

    Read the article

  • Running Hadoop example in pseudo-distributed mode on VM

    - by manas
    I have set up Hadoop on an OpenSuse 11.2 VM using VirtualBox. I have made the prerequisite configs. I ran this example in standalone mode successfully. But in pseudo-distributed mode I get the following error:
    $./bin/hadoop fs -put conf input
    10/04/13 15:56:25 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:25 INFO hdfs.DFSClient: Abandoning block blk_-8490915989783733314_1003
    10/04/13 15:56:31 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:31 INFO hdfs.DFSClient: Abandoning block blk_-1740343312313498323_1003
    10/04/13 15:56:37 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:37 INFO hdfs.DFSClient: Abandoning block blk_-3566235190507929459_1003
    10/04/13 15:56:43 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
    10/04/13 15:56:43 INFO hdfs.DFSClient: Abandoning block blk_-1746222418910980888_1003
    10/04/13 15:56:49 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
    10/04/13 15:56:49 WARN hdfs.DFSClient: Error Recovery for block blk_-1746222418910980888_1003 bad datanode[0] nodes == null
    10/04/13 15:56:49 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...
    put: Protocol not available
    10/04/13 15:56:49 ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
    java.net.SocketException: Protocol not available
        at sun.nio.ch.Net.getIntOption0(Native Method)
        at sun.nio.ch.Net.getIntOption(Net.java:178)
        at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
        at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
        at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
        at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
        at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
        at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
    Any leads will be highly appreciated.

    Read the article

  • Java split xml file

    - by CC
    Hi all, I'm working on a piece of code to split files. I want to split a flat file (that's OK, it is working fine) and an XML file. The idea is to split based on a number of files: I have a file, and I want to split it into x files (x is a parameter). I'm doing the split by taking the size of the file and dividing it by the number of files to split into. Then my solution was to use a BufferedReader and to use it like: while ((n = reader.read(buffer, 0, buffer.length)) != -1) { The main problem is that I cannot just split the XML file anywhere; I have to split it on a block boundary delimited by a start XML tag and an end XML tag: <start tag> bla bla xml stuff </end tag> So I cannot cut a block in the middle. If, when I'm in the middle of a block, the size of my new file is greater than my maximum, I have to read until the end of the tag and only then start the next file. The problem is that there are all sorts of cases, and it is a bit difficult to search for the end tag: the block may read text up to the middle of the end tag; the block may read text up to the end of the end tag with no characters after it; and so on, all while looping and reading the next block. Sometimes the end of one block concatenated with the start of the next one forms the end XML tag. I hope you get the idea. My question is: does anyone have an algorithm that does this more accurately and treats all the special cases? The idea is to split the file as quickly as possible. Thanks a lot.
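
    One way around the boundary cases is to stop splitting on raw byte counts and instead stream complete elements, then deal whole records out to the output files. A minimal sketch (Python for brevity; the tag-name parameters and the contiguous-chunk policy are assumptions, not from the question):

        import xml.etree.ElementTree as ET

        def split_xml(path, n_files, record_tag, root_tag):
            # Stream the document and collect whole <record_tag> elements,
            # so a record is never cut in the middle.
            records = []
            for _, elem in ET.iterparse(path, events=("end",)):
                if elem.tag == record_tag:
                    records.append(ET.tostring(elem, encoding="unicode"))
                    elem.clear()
            # Deal the records out in contiguous, near-equal chunks.
            per_file = -(-len(records) // n_files)  # ceiling division
            for i in range(n_files):
                chunk = records[i * per_file:(i + 1) * per_file]
                if not chunk:
                    break
                with open("part_%d.xml" % i, "w", encoding="utf-8") as out:
                    out.write("<%s>\n" % root_tag)
                    out.writelines(chunk)
                    out.write("</%s>\n" % root_tag)
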

    Read the article

  • Reuse Client java Socket in a Java Server

    - by user1394983
    I'm developing a Java server to control an Android online game. Is it possible to save the client socket from myserversocket.accept() in a variable in a Client class? This would be very useful because then the server can communicate with a client whenever the server wants, not only when the client contacts the server. My actual code is: import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.net.ServerSocket; import java.net.Socket; import java.util.ArrayList; import java.util.UUID; import sal.app.shared.Packet; public class Server { private ArrayList<GameSession> games = new ArrayList<GameSession>(); private ArrayList<Client> pendent_clients = new ArrayList<Client>(); private Packet read_packet= new Packet(); private Packet sent_packet = new Packet(); private Socket clientSocket = null; public static void main(String[] args) throws ClassNotFoundException{ ServerSocket serverSocket = null; //DataInputStream dataInputStream = null; //DataOutputStream dataOutputStream = null; ObjectOutputStream oos=null; ObjectInputStream ois=null; Server myServer = new Server(); try { serverSocket = new ServerSocket(7777); System.out.println("Listening :7777"); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } while(true){ try { myServer.clientSocket = new Socket(); myServer.clientSocket = serverSocket.accept(); myServer.read_packet = new Packet(); myServer.sent_packet = new Packet(); oos = new ObjectOutputStream(myServer.clientSocket.getOutputStream()); ois = new ObjectInputStream(myServer.clientSocket.getInputStream()); //dataInputStream = new DataInputStream(clientSocket.getInputStream()); //dataOutputStream = new DataOutputStream(clientSocket.getOutputStream()); //System.out.println("ip: " + clientSocket.getInetAddress()); //System.out.println("message: " + ois.read()); //dataOutputStream.writeUTF("Hello!"); /*while ((myServer.read_packet = (Packet) ois.readObject()) != null) { myServer.handlePacket(myServer.read_packet); break; }*/ myServer.read_packet=(Packet) ois.readObject(); myServer.handlePacket(myServer.read_packet); //oos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } finally{ if( myServer.clientSocket!= null){ /*try { //myServer.clientSocket.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); }*/ } /*if( ois!= null){ try { ois.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if( oos!= null){ try { oos.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } }*/ } } } public void handlePacket(Packet hp) throws IOException { if(hp.getOpCode() == 1) { registPlayer(hp); } } public void registPlayer(Packet p) throws IOException { Client registClient = new Client(this.clientSocket); this.pendent_clients.add(registClient); if(pendent_clients.size() == 2) { initAGame(); } else { ObjectOutputStream out=null; Packet to_send = new Packet(); to_send.setOpCode(4); out = new ObjectOutputStream(registClient.getClientSocket().getOutputStream()); out.writeObject(to_send); } } public void initAGame() throws IOException { Client c1 = pendent_clients.get(0); Client c2 = pendent_clients.get(1); Packet to_send = new Packet(); ObjectOutputStream out=null; GameSession incomingGame = new GameSession(c1,c2); games.add(incomingGame); to_send.setGameId(incomingGame.getGameId()); to_send.setOpCode(5); out = new ObjectOutputStream(c1.getClientSocket().getOutputStream()); out.writeObject(to_send); out =
    new ObjectOutputStream(c2.getClientSocket().getOutputStream()); out.writeObject(to_send); pendent_clients.clear(); } public Client getClientById(UUID given_id) { for(GameSession gs: games) { if(gs.getClient1().getClientId().equals(given_id)) { return gs.getClient1(); } else if(gs.getClient2().getClientId().equals(given_id)) { return gs.getClient2(); } } return null; } } With this code I got these errors:
    java.net.SocketException: Broken pipe
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
        at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1847)
        at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1756)
        at java.io.ObjectOutputStream.writeNonProxyDesc(ObjectOutputStream.java:1257)
        at java.io.ObjectOutputStream.writeClassDesc(ObjectOutputStream.java:1211)
        at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1395)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
        at java.io.ObjectOutputStream.writeFatalException(ObjectOutputStream.java:1547)
        at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:333)
        at Server.initAGame(Server.java:146)
        at Server.registPlayer(Server.java:120)
        at Server.handlePacket(Server.java:106)
        at Server.main(Server.java:63)
    This error occurs when the second client connects and the server tries to send a Packet to the previous client 1 in initAGame(), in this code: out = new ObjectOutputStream(c1.getClientSocket().getOutputStream()); out.writeObject(to_send); My Android code is this: package sal.app; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException; import java.io.ObjectInputStream; import java.io.ObjectOutputStream; import java.net.Socket; import java.net.UnknownHostException; import sal.app.logic.DataBaseManager; import sal.app.shared.Packet; import android.app.Activity; import android.os.Bundle; import android.view.Window; import android.view.WindowManager; public class MultiPlayerWaitActivity extends Activity{ private DataBaseManager db; public void onCreate(Bundle savedInstanceState) { super.requestWindowFeature(Window.FEATURE_NO_TITLE); super.getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,WindowManager.LayoutParams.FLAG_FULLSCREEN); super.onCreate(savedInstanceState); setContentView(R.layout.multiwaitlayout); db=DataBaseManager.getSalDatabase(this); db.teste(); try { db.createDataBase(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } Socket socket = null; ObjectOutputStream outputStream = null; ObjectInputStream inputStream = null; //System.out.println("dadadad"); try { socket = new Socket("192.168.1.4", 7777); //Game = new MultiPlayerGame(new ServerManager("192.168.1.66"),new Session(), new Player("")); outputStream = new ObjectOutputStream(socket.getOutputStream()); inputStream = new ObjectInputStream(socket.getInputStream()); //dataOutputStream.writeUTF(textOut.getText().toString()); //textIn.setText(dataInputStream.readUTF()); Packet p = new Packet(); Packet r = new Packet(); p.setOpCode(1); outputStream.writeObject(p); /*try { r=(Packet)inputStream.readObject(); } catch (ClassNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); }*/ //while(true){ //dataInputStream = new DataInputStream(clientSocket.getInputStream()); //dataOutputStream = new
DataOutputStream(clientSocket.getOutputStream()); //System.out.println("ip: " + clientSocket.getInetAddress()); //System.out.println("message: " + ois.read()); //dataOutputStream.writeUTF("Hello!"); /*while ((r= (Packet) inputStream.readObject()) != null) { handPacket(r); break; }*/ r=(Packet) inputStream.readObject(); handPacket(r); //oos.close(); //} /*System.out.println(r.getOpCode()); if(r.getOpCode() == 5) { this.finish(); }*/ } catch (UnknownHostException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } /*finally{ if (socket != null){ try { socket.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if (outputStream != null){ try { outputStream.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } if (inputStream != null){ try { inputStream.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } }*/ //catch (ClassNotFoundException e) { // TODO Auto-generated catch block //e.printStackTrace(); //} catch (ClassNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } } public void handPacket(Packet hp) { if(hp.getOpCode() == 5) { this.finish(); } this.finish(); } } Regards

    Read the article

  • How to create custom pages (Drupal 6.x)

    - by jc70
    In the template.php file I inserted the code below. I found a tutorial online that gives the code, but I'm confused about how to get it to work. I copied the code below and inserted it into the template.php from the theme HTML5_base. I duplicated the page.tpl.php file and created custom pages -- page-gallery.tpl.php and page-articles.tpl.php. I inserted some text into the files just to see that I've navigated to the pages with the changes. It looks like Drupal is not recognizing page-gallery.tpl.php and page-articles.tpl.php. In the template.php there are the following functions: html5_base_preprocess_page() html5_base_preprocess_node() html5_base_preprocess_block() In the tutorial it uses these functions: phptemplate_preprocess_page() phptemplate_preprocess_block() phptemplate_preprocess_node() function phptemplate_preprocess_page(&$vars) { //code block from the Drupal handbook //the path module is required and must be activated if(module_exists('path')) { //gets the "clean" URL of the current page $alias = drupal_get_path_alias($_GET['q']); $suggestions = array(); $template_filename = 'page'; foreach(explode('/', $alias) as $path_part) { $template_filename = $template_filename.'-'.$path_part; $suggestions[] = $template_filename; } $vars['template_files'] = $suggestions; } } function phptemplate_preprocess_node(&$vars) { //default template suggestions for all nodes $vars['template_files'] = array(); $vars['template_files'][] = 'node'; //individual node being displayed if($vars['page']) { $vars['template_files'][] = 'node-page'; $vars['template_files'][] = 'node-'.$vars['node']->type.'-page'; $vars['template_files'][] = 'node-'.$vars['node']->nid.'-page'; } //multiple nodes being displayed on one page in either teaser //or full view else { //template suggestions for nodes in general $vars['template_files'][] = 'node-'.$vars['node']->type; $vars['template_files'][] = 'node-'.$vars['node']->nid; //template suggestions for nodes in teaser view //more granular control if($vars['teaser']) { $vars['template_files'][] = 'node-'.$vars['node']->type.'-teaser'; $vars['template_files'][] = 'node-'.$vars['node']->nid.'-teaser'; } } } function phptemplate_preprocess_block(&$vars) { //the "cleaned-up" block title to be used for suggestion file name $subject = str_replace(" ", "-", strtolower($vars['block']->subject)); $vars['template_files'] = array('block', 'block-'.$vars['block']->delta, 'block-'.$subject); }

    Read the article

  • Issue with transparent texture on 3D primitive, XNA 4.0

    - by Bevin
    I need to draw a large set of cubes, all with (possibly) unique textures on each side. Some of the textures also have parts of transparency. The cubes that are behind ones with transparent textures should show through the transparent texture. However, it seems that the order in which I draw the cubes decides if the transparency works or not, which is something I want to avoid. Look here: cubeEffect.CurrentTechnique = cubeEffect.Techniques["Textured"]; Block[] cubes = new Block[4]; cubes[0] = new Block(BlockType.leaves, new Vector3(0, 0, 3)); cubes[1] = new Block(BlockType.dirt, new Vector3(0, 1, 3)); cubes[2] = new Block(BlockType.log, new Vector3(0, 0, 4)); cubes[3] = new Block(BlockType.gold, new Vector3(0, 1, 4)); foreach(Block b in cubes) { b.shape.RenderShape(GraphicsDevice, cubeEffect); } This is the code in the Draw method. It produces this result: As you can see, the textures behind the leaf cube are not visible on the other side. When i reverse index 3 and 0 on in the array, I get this: It is clear that the order of drawing is affecting the cubes. I suspect it may have to do with the blend mode, but I have no idea where to start with that.
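
    The usual approach to order-dependent transparency is to draw opaque geometry first, then draw the transparent cubes sorted back-to-front from the camera each frame (often with depth writes disabled for the transparent pass). A minimal sketch of that ordering (Python-style pseudocode for illustration; b.position, b.transparent and draw() are stand-ins for the XNA equivalents):

        def draw_cubes(cubes, camera_pos, draw):
            # Opaque cubes first (any order), then transparent cubes
            # sorted farthest-to-nearest so blending composes correctly.
            def dist2(b):
                dx = b.position[0] - camera_pos[0]
                dy = b.position[1] - camera_pos[1]
                dz = b.position[2] - camera_pos[2]
                return dx * dx + dy * dy + dz * dz
            for b in (c for c in cubes if not c.transparent):
                draw(b)
            for b in sorted((c for c in cubes if c.transparent),
                            key=dist2, reverse=True):
                draw(b)
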

    Read the article

  • iPhone SDK: TextView, Keyboard in Landscape mode

    - by Arnold
    Hello. How do I make sure that the textview is shown and the keyboard is not obscuring the textview while in landscape? Using UICatalog I created a TextViewController, which works. In it there are two methods for calling the keyboard and making sure that the textView is positioned above the keyboard. This just works great in Portrait mode. I got the Landscape mode working, but the textView is still being put to the top of the iPhone to compensate for the keyboard in portrait mode. I changed the methods for showing the keyboard. Below is the code for these methods (I will just show the code for showing the keyboard, since the hide code is the reverse): - (void)keyboardWillShow:(NSNotification *)aNotification { UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation]; if (orientation == UIInterfaceOrientationPortrait) { // the keyboard is showing so resize the table's height CGRect keyboardRect = [[[aNotification userInfo] objectForKey:UIKeyboardBoundsUserInfoKey] CGRectValue]; NSTimeInterval animationDuration = [[[aNotification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]; CGRect frame = self.view.frame; frame.size.height -= keyboardRect.size.height; [UIView beginAnimations:@"ResizeForKeyboard" context:nil]; [UIView setAnimationDuration:animationDuration]; self.view.frame = frame; [UIView commitAnimations]; } else if (orientation == UIInterfaceOrientationLandscapeLeft) { NSLog(@"Left"); // remove later CGRect keyboardRect = [[[aNotification userInfo] objectForKey:UIKeyboardBoundsUserInfoKey] CGRectValue]; NSTimeInterval animationDuration = [[[aNotification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]; CGRect frame = self.view.frame; frame.size.width -= keyboardRect.size.height; [UIView beginAnimations:@"ResizeForKeyboard" context:nil]; [UIView setAnimationDuration:animationDuration]; self.view.frame = frame; [UIView commitAnimations]; } else if (orientation == UIInterfaceOrientationLandscapeRight){ NSLog(@"Right"); // remove later CGRect keyboardRect = [[[aNotification userInfo] objectForKey:UIKeyboardBoundsUserInfoKey] CGRectValue]; NSTimeInterval animationDuration = [[[aNotification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]; CGRect frame = self.view.frame; frame.size.width -= keyboardRect.size.width; [UIView beginAnimations:@"ResizeForKeyboard" context:nil]; [UIView setAnimationDuration:animationDuration]; self.view.frame = frame; [UIView commitAnimations]; } } I know that I have to change the line frame.size.height -= keyboardRect.size.height, but I do not seem to get it working. I tried frame.size.width -= keyboardRect.size.height; that did not work. Losing the keyboardRect and frame altogether works, but of course then the keyboard obscures the textview.

    Read the article

  • subprocess.Popen doesn't work when args is a sequence

    - by pero
    I'm having a problem with subprocess.Popen when the args parameter is given as a sequence. For example: import subprocess maildir = "/home/support/Maildir" This works (it prints the correct size of the /home/support/Maildir directory): size = subprocess.Popen(["du -s -b " + maildir], shell=True, stdout=subprocess.PIPE).communicate()[0].split()[0] print size But this doesn't work (try it): size = subprocess.Popen(["du", "-s -b", maildir], shell=True, stdout=subprocess.PIPE).communicate()[0].split()[0] print size What's wrong?
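
    For context on why the second form fails: with shell=True on POSIX, only the first element of a sequence is taken as the shell command; any remaining elements become arguments to the shell itself, not to du. A sketch of the two spellings that behave as intended:

        import subprocess

        maildir = "/home/support/Maildir"

        # Either pass one string to the shell...
        size = subprocess.Popen("du -s -b " + maildir, shell=True,
                                stdout=subprocess.PIPE).communicate()[0].split()[0]

        # ...or pass each token as its own element without a shell. Note
        # that "-s -b" as a single element would reach du as one malformed
        # argument, which is why it must be split into "-s" and "-b".
        size = subprocess.Popen(["du", "-s", "-b", maildir],
                                stdout=subprocess.PIPE).communicate()[0].split()[0]
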

    Read the article

  • For my project I have a problem with the report

    - by pink rose
    public class stack { /* class declaration and fields implied by the code below */ int size; int[] Array; int top = -1; public stack(int size) { this.size=size; Array = new int[size]; } public void push(int j) { if (top < size) { Array[++top] = j; } } public int pop() { return Array[top--]; } public int top() { return Array[top]; } public boolean isEmpty() { return (top == -1); } } import javax.swing.JOptionPane; public class menu { private static stack s; private static int numbers[]; public static void main(String args[]) { start(); } public static void start() { int i = Integer.parseInt(JOptionPane.showInputDialog("1. size of array\n2. data entry\n3. display content\n4. exit")); switch (i) { case 1: setSize(); break; case 2: addElement(); break; case 3: showElements(); break; case 4: exit(); } } public static void setSize() { int size = Integer.parseInt(JOptionPane.showInputDialog("Please Enter The Size")); s = new stack(size); numbers = new int[10]; start(); } public static void addElement() { for(int x=0;x<s.size;x++) { int e = Integer.parseInt(JOptionPane.showInputDialog("Please Enter The Element")); numbers[e]++; s.push(e); } start(); } public static void showElements() { String result = ""; int temp; while (!s.isEmpty()) { temp = s.pop(); if (numbers[temp] == 1) { result = temp+result; } } JOptionPane.showMessageDialog(null, result); start(); } public static void exit() { System.exit(0); } } This is my project. I have finished it, but I have a problem with a question in my report: the Conclusion. It should summarize the state of the project and indicate which part of the project is working and which part is not working or has limitations. You may also provide your suggestions and comments on this project. I don't have any idea what I can answer.

    Read the article

  • How to quickly acquire and process real time screen output

    - by Akusete
    I am trying to write a program to play a full screen PC game for fun (as an experiment in Computer Vision and Artificial Intelligence). For this experiment I am assuming the game has no underlying API for AI players (nor is the source available), so I intend to process the visual information rendered by the game on the screen. The game runs in full screen mode on a win32 system (direct-X I assume). Currently I am using the win32 functions #include <windows.h> #include <cvaux.h> class Screen { public: HWND windowHandle; HDC windowContext; HBITMAP buffer; HDC bufferContext; CvSize size; uchar* bytes; int channels; Screen () { windowHandle = GetDesktopWindow(); windowContext = GetWindowDC (windowHandle); size = cvSize (GetDeviceCaps (windowContext, HORZRES), GetDeviceCaps (windowContext, VERTRES)); buffer = CreateCompatibleBitmap (windowContext, size.width, size.height); bufferContext = CreateCompatibleDC (windowContext); SelectObject (bufferContext, buffer); channels = 4; bytes = new uchar[size.width * size.height * channels]; } ~Screen () { ReleaseDC(windowHandle, windowContext); DeleteDC(bufferContext); DeleteObject(buffer); delete[] bytes; } void CaptureScreen (IplImage* img) { BitBlt(bufferContext, 0, 0, size.width, size.height, windowContext, 0, 0, SRCCOPY); int n = size.width * size.height; int imgChannels = img->nChannels; GetBitmapBits (buffer, n * channels, bytes); uchar* src = bytes; uchar* dest = (uchar*) img->imageData; uchar* end = dest + n * imgChannels; while (dest < end) { dest[0] = src[0]; dest[1] = src[1]; dest[2] = src[2]; dest += imgChannels; src += channels; } } The rate at which I can process frames using this approach is much too slow. Is there a better way to acquire screen frames?
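
    One incidental observation: the per-pixel BGRA-to-BGR copy at the end is a natural candidate for a bulk array operation. A rough sketch of the same repack with numpy (Python for illustration only; it assumes the captured bytes are already in memory):

        import numpy as np

        def repack_bgra_to_bgr(raw_bytes, width, height):
            # View the capture as height x width x 4 (BGRA) and drop the
            # alpha channel in one vectorized copy instead of a pixel loop.
            frame = np.frombuffer(raw_bytes, dtype=np.uint8)
            frame = frame.reshape(height, width, 4)
            return np.ascontiguousarray(frame[:, :, :3])  # H x W x 3 (BGR)
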

    Read the article

  • How do I resize an image file to user-selected sizes

    - by shuxer
    Hi, I have an image upload form; the user attaches an image file and selects a size to which the uploaded image file should be resized (200KB, 500KB, 1MB, 5MB, Original). Then my script needs to resize the image file based on the user's chosen size, but I'm not sure how to implement this feature. For example, the user uploads an image of 1MB, and if the user selects 200KB, then my script should save it at 200KB. Does anyone know about, or have experience with, a similar task? Thanks for your reply in advance.
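
    One common approach is to re-encode at progressively lower JPEG quality until the output fits under the requested byte budget. A minimal sketch with Pillow (the function name and quality schedule are illustrative assumptions, not from any particular upload framework):

        from io import BytesIO
        from PIL import Image

        def encode_to_target(path, target_bytes):
            img = Image.open(path).convert("RGB")
            buf = BytesIO()
            for quality in range(95, 10, -5):   # walk the quality down
                buf = BytesIO()
                img.save(buf, format="JPEG", quality=quality)
                if buf.tell() <= target_bytes:
                    break
            return buf.getvalue()               # best effort at lowest quality

        data = encode_to_target("upload.jpg", 200 * 1024)  # user chose 200KB
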

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
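
    For readers following the record-shifting scheme described above, it looks roughly like this sketch (Python for illustration only; under the 16-bit DOS C runtime the same idea would be spelled with fseek/fread/fwrite, and, as the question notes, there is no portable truncate there):

        import os

        def shift_tail(f, start, delta, chunk=4096):
            # Shift everything from byte offset `start` to EOF by `delta`
            # bytes (positive = toward EOF). `f` is opened in "r+b" mode.
            f.seek(0, os.SEEK_END)
            end = f.tell()
            if delta > 0:
                # Growing record: copy chunks back-to-front so nothing
                # is overwritten before it has been read.
                pos = end
                while pos > start:
                    n = min(chunk, pos - start)
                    pos -= n
                    f.seek(pos)
                    data = f.read(n)
                    f.seek(pos + delta)
                    f.write(data)
            elif delta < 0:
                # Shrinking record: copy front-to-back, then cut the tail
                # (this truncate step is exactly what plain DOS lacks).
                pos = start
                while pos < end:
                    f.seek(pos)
                    data = f.read(min(chunk, end - pos))
                    f.seek(pos + delta)
                    f.write(data)
                    pos += len(data)
                f.truncate(end + delta)
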

    Read the article

  • Inline-SVG not rendering when generated by JS

    - by Lucas Gasenzer
    I want to implement some visual statistics into a jQuery mobile page. If I embed the following snippet, it will show me the same results as if I embedded it from a separate *.svg file. <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" height="115" width="100%"> <rect x="0%" y="0" fill="#8cc63f" width="19.2%" height="100" /> <text x="10%" y="115" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">A</text> <text x="10%" y="15" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">100</text> <rect x="20.2%" y="50" fill="#8cc63f" width="19.2%" height="50" /> <text x="30.2%" y="115" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">B</text> <text x="30.2%" y="65" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">50</text> <rect x="40.4%" y="90" fill="#8cc63f" width="19.2%" height="10" /> <text x="50.4%" y="115" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">C</text> <text x="50.4%" y="85" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">10</text> <rect x="60.6%" y="78" fill="#8cc63f" width="19.2%" height="22" /> <text x="70.6%" y="115" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">D</text> <text x="70.6%" y="73" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">22</text> <rect x="80.8%" y="40" fill="#8cc63f" width="19.2%" height="60" /> <text x="90.8%" y="115" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">E</text> <text x="90.8%" y="55" font-family="helvetica, sans-serif" font-size="10" style="text-anchor:middle;">60</text> </svg> Now, because these statistics obviously change for each site, I generate code like the one above using JavaScript. The HTML source code looks the same, but the SVG will not be shown. Instead it looks like this: A 100 B 50 C 10 D 22 E60 (so really just a line of text). Am I missing something? Thank you for your help!

    Read the article

  • keyboard hiding my textview

    - by Risma
    Hi guys, I have a simple app. It consists of 2 textviews, 1 UIView as a CoreText subclass, and 1 scrollview; the other parts are subviews of the scrollview. I use this scrollview because I need to scroll the textviews and the UIView at the same time. I can already scroll all of them together, but the problem is that the keyboard hides some lines in the textview. I have to change the frame of the scrollview when the keyboard appears, but it still does not help. This is my code: UIScrollView *scrollView; UIView *viewTextView; UITextView *lineNumberTextView; UITextView *codeTextView; -(void) viewWillAppear:(BOOL)animated{ [super viewWillAppear:animated]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardWillAppear:) name:UIKeyboardWillShowNotification object:nil]; [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(keyboardWillDisappear:) name:UIKeyboardWillHideNotification object:nil]; self.scrollView.frame = CGRectMake(0, 88, self.codeTextView.frame.size.width, self.codeTextView.frame.size.height); scrollView.contentSize = CGSizeMake(self.view.frame.size.width, viewTextView.frame.size.height); [scrollView addSubview:viewTextView]; CGAffineTransform translationCoreText = CGAffineTransformMakeTranslation(60, 7); [viewTextView setTransform:translationCoreText]; [scrollView addSubview:lineNumberTextView]; [self.scrollView setScrollEnabled:YES]; [self.codeTextView setScrollEnabled:NO]; } -(void)keyboardWillAppear:(NSNotification *)notification { [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:[[[notification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]]; CGRect keyboardEndingUncorrectedFrame = [[[notification userInfo] objectForKey:UIKeyboardFrameEndUserInfoKey ] CGRectValue]; CGRect keyboardEndingFrame = [self.view convertRect:keyboardEndingUncorrectedFrame fromView:nil]; self.scrollView.frame = CGRectMake(0, 88, self.codeTextView.frame.size.width, self.codeTextView.frame.size.height - keyboardEndingFrame.size.height); [UIView commitAnimations]; } -(void)keyboardWillDisappear:(NSNotification *) notification { [UIView beginAnimations:nil context:NULL]; [UIView setAnimationDuration:[[[notification userInfo] objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue]]; CGRect keyboardEndingUncorrectedFrame = [[[notification userInfo] objectForKey:UIKeyboardFrameEndUserInfoKey] CGRectValue]; CGRect keyboardEndingFrame = [self.view convertRect:keyboardEndingUncorrectedFrame fromView:nil]; self.scrollView.frame = CGRectMake(0, 88, self.codeTextView.frame.size.width, self.codeTextView.frame.size.height + keyboardEndingFrame.size.height); [UIView commitAnimations]; } Can somebody help me, please?

    Read the article

  • OpenGL Shader Compile Error

    - by Tomas Cokis
    I'm having a bit of a problem with my code for compiling shaders, namely they both register as failed compiles and no log is received. This is the shader compiling code: /* Make the shader */ Uint size; GLchar* file; loadFileRaw(filePath, file, &size); const char * pFile = file; const GLint pSize = size; newCashe.shader = glCreateShader(shaderType); glShaderSource(newCashe.shader, 1, &pFile, &pSize); glCompileShader(newCashe.shader); GLint shaderCompiled; glGetShaderiv(newCashe.shader, GL_COMPILE_STATUS, &shaderCompiled); if(shaderCompiled == GL_FALSE) { ReportFiler->makeReport("ShaderCasher.cpp", "loadShader()", "Shader did not compile", "The shader " + filePath + " failed to compile, reporting the error - " + OpenGLServices::getShaderLog(newCashe.shader)); } And these are the support functions: bool loadFileRaw(string fileName, char* data, Uint* size) { if (fileName != "") { FILE *file = fopen(fileName.c_str(), "rt"); if (file != NULL) { fseek(file, 0, SEEK_END); *size = ftell(file); rewind(file); if (*size > 0) { data = (char*)malloc(sizeof(char) * (*size + 1)); *size = fread(data, sizeof(char), *size, file); data[*size] = '\0'; } fclose(file); } } return data; } string OpenGLServices::getShaderLog(GLuint obj) { int infologLength = 0; int charsWritten = 0; char *infoLog; glGetShaderiv(obj, GL_INFO_LOG_LENGTH,&infologLength); if (infologLength > 0) { infoLog = (char *)malloc(infologLength); glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog); string log = infoLog; free(infoLog); return log; } return "<Blank Log>"; } and the shaders I'm loading: void main(void) { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); } void main(void) { gl_Position = ftransform(); } In short I get From: ShaderCasher.cpp, In: loadShader(), Subject: Shader did not compile Message: The shader Data/Shaders/Standard/standard.vs failed to compile, reporting the error - <Blank Log> for every shader I compile I've tried replacing the file reading with just a hard coded string but I get the same error so there must be something wrong with how I'm compiling them. I have run and compiled example programs with shaders, so I doubt my drivers are the issue, but in any case I'm on a Nvidia 8600m GT. Can anyone help?

    Read the article

  • overloading new/delete problem

    - by hidayat
    This is my scenario: I'm trying to overload new and delete globally. I have written my allocator class in a file called allocator.h, and what I am trying to achieve is that if a file includes this header file, my version of new and delete should be used. So in the header file "allocator.h" I have declared the two functions: extern void* operator new(std::size_t size); extern void operator delete(void *p, std::size_t size); In the same header file I have a class that does all the allocator stuff: class SmallObjAllocator { ... }; I want to call this class from the new and delete functions, and I would like the class to be static, so I have done this: template<unsigned dummy> struct My_SmallObjectAllocatorImpl { static SmallObjAllocator myAlloc; }; template<unsigned dummy> SmallObjAllocator My_SmallObjectAllocatorImpl<dummy>::myAlloc(DEFAULT_CHUNK_SIZE, MAX_OBJ_SIZE); typedef My_SmallObjectAllocatorImpl<0> My_SmallObjectAllocator; and in the cpp file it looks like this: allocator.cc void* operator new(std::size_t size) { std::cout << "using my new" << std::endl; if(size > MAX_OBJ_SIZE) return malloc(size); else return My_SmallObjectAllocator::myAlloc.allocate(size); } void operator delete(void *p, std::size_t size) { if(size > MAX_OBJ_SIZE) free(p); else My_SmallObjectAllocator::myAlloc.deallocate(p, size); } The problem arises when I try to call the constructor for the class SmallObjAllocator, which is a static object. For some reason the compiler is calling my overloaded new when initializing it. It then tries to use My_SmallObjectAllocator::myAlloc.deallocate(p, size); which is not defined, so the program crashes. So why is the compiler calling new when I define a static object? And how can I solve it?

    Read the article

  • C++ Vector vs Array (Time)

    - by vsha041
    I have got two programs here, both doing exactly the same task: they just set a boolean array/vector to the value true. The program using a vector takes 27 seconds to run, whereas the program involving an array with 5 times greater size takes less than 1 s. I would like to know the exact reason why there is such a major difference. Are vectors really that inefficient? Program using vectors: #include <iostream> #include <vector> #include <ctime> using namespace std; int main(){ const int size = 2000; time_t start, end; time(&start); vector<bool> v(size); for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - 27 seconds Program using Array: #include <iostream> #include <ctime> using namespace std; int main(){ const int size = 10000; // 5 times more size time_t start, end; time(&start); bool v[size]; for(int i = 0; i < size; i++){ for(int j = 0; j < size; j++){ v[i] = true; } } time(&end); cout<<difftime(end, start)<<" seconds."<<endl; } Runtime - < 1 seconds Platform - Visual Studio 2008 OS - Windows Vista 32 bit SP 1 Processor Intel(R) Pentium(R) Dual CPU T2370 @ 1.73GHz Memory (RAM) 1.00 GB Thanks Amare

    Read the article

  • Voxel terrain rendering with marching cubes

    - by JavaJosh94
    I was working on making procedurally generated terrain using normal cubish voxels (like minecraft) But then I read about marching cubes and decided to convert to using those. I managed to create a working marching cubes class and cycle through the densities and everything in it seemed to be working so I went on to work on actual terrain generation. I'm using XNA (C#) and a ported libnoise library to generate noise for the terrain generator. But instead of rendering smooth terrain I get a 64x64 chunk (I specified 64 but can change it) of seemingly random marching cubes using different triangles. This is the code I'm using to generate a "chunk": public MarchingCube[, ,] getTerrainChunk(int size, float dMultiplyer, int stepsize) { MarchingCube[, ,] temp = new MarchingCube[size / stepsize, size / stepsize, size / stepsize]; for (int x = 0; x < size; x += stepsize) { for (int y = 0; y <size; y += stepsize) { for (int z = 0; z < size; z += stepsize) { float[] densities = {(float)terrain.GetValue(x, y, z)*dMultiplyer, (float)terrain.GetValue(x, y+stepsize, z)*dMultiplyer, (float)terrain.GetValue(x+stepsize, y+stepsize, z)*dMultiplyer, (float)terrain.GetValue(x+stepsize, y, z)*dMultiplyer, (float)terrain.GetValue(x,y,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x,y+stepsize,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x+stepsize,y+stepsize,z+stepsize)*dMultiplyer,(float)terrain.GetValue(x+stepsize,y,z+stepsize)*dMultiplyer }; Vector3[] corners = { new Vector3(x,y,z), new Vector3(x,y+stepsize,z),new Vector3(x+stepsize,y+stepsize,z),new Vector3(x+stepsize,y,z), new Vector3(x,y,z+stepsize), new Vector3(x,y+stepsize,z+stepsize), new Vector3(x+stepsize,y+stepsize,z+stepsize), new Vector3(x+stepsize,y,z+stepsize)}; if (x == 0 && y == 0 && z == 0) { temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners, device); } temp[x / stepsize, y / stepsize, z / stepsize] = new MarchingCube(densities, corners); } } } (terrain is a Perlin Noise generated using libnoise) I'm sure there's probably an easy solution to this but I've been drawing a blank for the past hour. I'm just wondering if the problem is how I'm reading in the data from the noise or if I may be generating the noise wrong? Or maybe the noise is just not good for this kind of generation? If I'm reading it wrong does anyone know the right way? the answers on google were somewhat ambiguous but I'm going to keep searching. Thanks in advance!

    Read the article

  • Write and fprintf for file I/O

    - by Darryl Gove
    fprintf() does buffered I/O, whereas write() does unbuffered I/O. So once write() completes, the data is in the file, whereas with fprintf() it may take a while before the file is updated to reflect the output. This results in a significant performance difference: the unbuffered write() works at disk speed. The following is a program to test this:

        #include <fcntl.h>
        #include <unistd.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/time.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        static double s_time;

        void starttime() {
            s_time = 1.0 * gethrtime();
        }

        void endtime(long its) {
            double e_time = 1.0 * gethrtime();
            printf("Time per iteration %5.2f MB/s\n", (1.0 * its) / (e_time - s_time * 1.0) * 1000);
            s_time = 1.0 * gethrtime();
        }

        #define SIZE 10*1024*1024

        void test_write() {
            starttime();
            int file = open("./test.dat", O_WRONLY | O_CREAT, S_IWGRP | S_IWOTH | S_IWUSR);
            for (int i = 0; i < SIZE; i++) {
                write(file, "a", 1);
            }
            close(file);
            endtime(SIZE);
        }

        void test_fprintf() {
            starttime();
            FILE* file = fopen("./test.dat", "w");
            for (int i = 0; i < SIZE; i++) {
                fprintf(file, "a");
            }
            fclose(file);
            endtime(SIZE);
        }

        void test_flush() {
            starttime();
            FILE* file = fopen("./test.dat", "w");
            for (int i = 0; i < SIZE; i++) {
                fprintf(file, "a");
                fflush(file);
            }
            fclose(file);
            endtime(SIZE);
        }

        int main() {
            test_write();
            test_fprintf();
            test_flush();
        }

    Compiling and running, I get 0.2 MB/s for write() and 6 MB/s for fprintf(): a large difference. There are three tests in this example; the third uses fprintf() followed by fflush(), which is equivalent to write() both in performance and in functionality. This leads to the suggestion that fprintf() (and the other buffered I/O functions) are the fastest way of writing to files, and that fflush() should be used to enforce synchronisation of the file contents.
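
    A related knob, not used in the original post: stdio lets you supply the buffer yourself with setvbuf(), so the trade-off between syscall count and how promptly bytes reach the file can be tuned explicitly. A minimal sketch, assuming a fully buffered 1 MB buffer suits the workload:

        #include <stdio.h>

        int main(void) {
            FILE *f = fopen("./test.dat", "w");
            if (f == NULL) {
                perror("fopen");
                return 1;
            }

            // Give stdio a 1 MB fully buffered buffer (_IOFBF); each fputc
            // then accumulates in memory and the OS sees far fewer write
            // system calls.
            static char buf[1024 * 1024];
            if (setvbuf(f, buf, _IOFBF, sizeof buf) != 0) {
                fprintf(stderr, "setvbuf failed\n");
            }

            for (int i = 0; i < 10 * 1024 * 1024; i++) {
                fputc('a', f);
            }

            fflush(f);  // force the buffered bytes out when synchronisation matters
            fclose(f);
            return 0;
        }

    stdio already buffers by default (typically BUFSIZ bytes), so this mainly matters when the default buffer is too small for the workload.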

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new to Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help getting it to work. I guess it is some basic stuff here, and I appreciate any tips etc. I know that it works if I put the display code in myLayer1's init, or if I call [self goHere] from the init in myLayer1, but I want to call goHere directly. I have the following code:

    HelloWorld.m:

        #import "HelloWorldLayer.h"
        #import "myLayer1.h"

        // HelloWorldLayer implementation
        @implementation HelloWorldLayer

        +(CCScene *) scene
        {
            // 'scene' is an autorelease object.
            CCScene *scene = [CCScene node];

            // 'layer' is an autorelease object.
            HelloWorldLayer *layer = [HelloWorldLayer node];
            myLayer1 *layer1 = [myLayer1 node];

            // add the layers as children to the scene
            [scene addChild: layer];
            [scene addChild: layer1];

            // return the scene
            return scene;
        }

        // on "init" you need to initialize your instance
        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init]) ) {
                // create and initialize a label
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];

                // ask the director for the window size
                CGSize size = [[CCDirector sharedDirector] winSize];

                // position the label on the center of the screen
                label.position = ccp( size.width / 2, size.height / 2 );

                // add the label as a child to this layer
                [self addChild: label];

                myLayer1 *a1 = [myLayer1 new];
                [a1 goHere];
                [a1 release];
            }
            return self;
        }
        @end

    myLayer1.m:

        #import "myLayer1.h"

        @implementation myLayer1

        -(void) goHere
        {
            NSLog(@">>>>goHere<<<<");
            CGSize size = [[CCDirector sharedDirector] winSize];
            CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
            vv.position = ccp( size.width / 2, size.height / 2 );
            [self addChild:vv z:3];
        }

        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init]) ) {
            }
            return self;
        }
        @end

    Read the article

  • RAID1: can't replace faulty spare (marked again as 'faulty spare' within seconds)

    - by user212475
    I have a problem that I cannot solve: our file server runs Xubuntu and three RAID1 arrays. One of them, consisting of sdb and sdc, has had a problem since Monday: sdb was marked as faulty by mdadm for unknown reasons. I used --remove to remove it from the RAID and then --add to add it back. All was fine and re-syncing started, but it never got above 0%, and after a few seconds sdb was again marked as 'faulty spare' (leaving the RAID degraded, but clean). So I saved the first 512 bytes of the old sdb to a file, bought a new HDD of the same size (4 TB), shut down the computer, replaced sdb physically, switched the computer back on, and wrote the 512 bytes back to the new drive so it would have the same partition info as the old drive (both are the same type, from the same company). But the new drive shows the same behaviour as the old one: I can add it, re-syncing starts, and after a few seconds it is marked as 'faulty spare'. Here is exactly what I did:

        mdadm --remove /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:56:13 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0

                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2424

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc

        mdadm --add /dev/md/1 /dev/sdb

    mdadm --detail /dev/md/1 gives me:

                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:57:49 2013
                  State : clean, degraded, recovering
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

         Rebuild Status : 0% complete

                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2431

            Number   Major   Minor   RaidDevice State
               2       8       16        0      faulty spare rebuilding   /dev/sdb
               1       8       32        1      active sync   /dev/sdc

    and after a few seconds:

        /dev/md/1:
                Version : 1.2
          Creation Time : Sat Jun 8 22:32:05 2013
             Raid Level : raid1
             Array Size : 3906887360 (3725.90 GiB 4000.65 GB)
          Used Dev Size : 3906887360 (3725.90 GiB 4000.65 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent

            Update Time : Thu Nov 7 06:57:50 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0

                   Name : File-Server:1 (local to host File-Server)
                   UUID : 44ed561f:b733e946:e69820f4:aba9b223
                 Events : 2436

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       32        1      active sync   /dev/sdc
               2       8       16        -      faulty spare   /dev/sdb

    The same behaviour occurs if I zero the superblock (mdadm --zero-superblock /dev/sdb) before adding sdb. I run all commands as root, and the system holds three more 4 TB drives, i.e. the mainboard can handle them. The old hard drive was checked for errors using badblocks, and all is fine. Does anybody have any idea what the problem is?

    Read the article
