Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.


  • video streaming over http in blackberry

    - by ysnky
    While I was searching for a way to play video over HTTP, I found the article at this URL: http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/Streaming_media_-_Start_to_finish.html?nodeid=2456737&vernum=0

    I can run it by adding ";deviceside=true" to the end of the URL. It works fine in the JDE 4.5 simulator, where it fetches 3GP videos from my local server; I tested with 580 KB files and they play fine. But when I fetch the same file from my real server (not local), I have problems with big files (e.g. 580 KB): it plays 180 KB files (though sometimes not even those), but it never plays the 580 KB file. I also deployed the application to my 9000 device; it sometimes plays the small file (180 KB) but never plays the big one (580 KB). Why does it play when the file is on my local server but not in the real world? I have been stuck on this for days. Hope you can help me.

    Also, the code at the URL below does not work; the only code I have found that works is the one above:
    blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/How_To_-_Play_video_within_a_BlackBerry_smartphone_application.html?nodeid=1383173&vernum=0

    By the way, there is no resize(long param) method on the CircularByteBuffer class, so I commented out the relevant line (buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));) as shown below:

        public void increaseBufferCapacity(int percent) {
            if (percent < 0) {
                log(0, "FAILED! SP.setBufferCapacity() - " + percent);
                throw new IllegalArgumentException("Increase factor must be positive.");
            }
            synchronized (readLock) {
                synchronized (connectionLock) {
                    synchronized (userSeekLock) {
                        synchronized (mediaIStream) {
                            log(0, "SP.setBufferCapacity() - " + percent);
                            // buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));
                            this.bufferCapacity = buffer.getSize();
                        }
                    }
                }
            }
        }

    Thanks in advance.

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the boost.interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer(unsigned long capacity) {
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate();   // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for (int i = 0; i < this->capacity() - 1; i++) {
                BufferNode* p = this->allocator->allocate();   // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n-th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method, would it work? If not, what's a better way to keep the nodes - creating a linked list?

    My test harness is the following:

        // Define an STL-compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a buffer that uses the previous STL-like allocator so that it allocates
        // its values from the segment.
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[]) {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with given name and size.
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize the shared-memory STL-compatible allocator.
            const ShmemAllocator alloc_inst(segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst.
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
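
    Not part of the original question - a hedged guess at the mismatch, with a minimal sketch: segment.construct<MyBuf>("MyBuffer")(100, alloc_inst) passes two constructor arguments, while the skeleton constructor above accepts only a capacity. One possible shape is a constructor that also takes the allocator (rebound from the element type to the node type) and allocates the nodes as one block rather than assuming that separate allocations land contiguously:

        #include <boost/interprocess/managed_shared_memory.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>

        template <typename ItemType, class Allocator>
        class SharedMemoryBuffer {
        public:
            struct BufferNode { ItemType item; };

            // Rebind the caller's allocator (an allocator of ItemType/int) to BufferNode.
            typedef typename Allocator::template rebind<BufferNode>::other NodeAllocator;

            SharedMemoryBuffer(unsigned long capacity, const Allocator& alloc)
                : m_capacity(capacity), m_alloc(alloc) {
                // One allocate(n) call: the n nodes of this block are contiguous,
                // which separate allocate() calls would not guarantee.
                m_start_ptr = m_alloc.allocate(m_capacity);
            }

            ~SharedMemoryBuffer() { m_alloc.deallocate(m_start_ptr, m_capacity); }

        private:
            unsigned long m_capacity;
            NodeAllocator m_alloc;
            typename NodeAllocator::pointer m_start_ptr;   // an offset_ptr, not a raw pointer
        };

    With a two-argument constructor of that shape, the existing segment.construct<MyBuf>("MyBuffer")(100, alloc_inst) call at least has a matching signature to bind to.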

    Read the article

  • How do I import and call unmanaged C dll with ansi string "char *" pointer string from VB.net?

    - by Warren P
    I have written my own function, which in C would be declared like this, using standard Win32 calling conventions:

        int Thing(char * command, char * buffer, int * BufSize);

    I have the following amount of VB figured out, which should import the DLL and call this function, wrapping it up so that it is easy to call Thing("CommandHere", GetDataBackHere):

        Imports Microsoft.VisualBasic
        Imports System.Runtime.InteropServices
        Imports System
        Imports System.Text

        Namespace dllInvocationSpace
            Public Class dllInvoker

                ' Tried attributes but could not make it build:
                ' <DllImport("Thing1.dll", False, CallingConvention.Cdecl, CharSet.Ansi, "Baton", True, True, False, True)>
                Declare Ansi Function Thing Lib "Thing1.dll" (ByVal Command As String, ByRef Buffer As String, ByRef BufferLength As Integer)

                Shared Function dllCall(ByVal Command As String, ByRef Results As String) As Integer
                    Dim Buffer As StringBuilder = New StringBuilder(65536)
                    Dim retCode As Integer
                    Dim bufsz As Integer
                    bufsz = 65536
                    retCode = Thing(Command, Buffer, bufsz)
                    Results = Buffer
                    Return retCode
                End Function

            End Class
        End Namespace

    The current code doesn't build: although I think I should be able to create a "buffer" that the C DLL can write data back into by using a StringBuilder, I haven't got it quite right ("Value of type 'System.Text.StringBuilder' cannot be converted to 'String'"). I have looked all over the newsgroups and forums and cannot find an example where the C DLL needs to pass between 1 and 64 KB of data back (char *buffer, int bufferlen) to Visual Basic .NET.

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks,

    We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses the glibc libraries for memory allocation. The Linux PC has 4 GB of physical memory; swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator new). The buffer size varies depending on a number of parameters but is typically around 1.2 GB and can be up to about 1.9 GB. When the request has completed, the buffer is released using delete.

    This works fine for several consecutive requests that allocate buffers of the same size, or when a request allocates a smaller buffer than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard.

    The problem arises when a request requires a buffer larger than the previous one. In this case, operator new throws an exception. It's as if the memory that was freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, the second request for a larger buffer size succeeds; i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping-table-size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is released to the system.

    BTW - I'm not sure if it's relevant to our problem, but the server uses pthreads that get created and destroyed on each processing request.

    Cheers, Brian.
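
    Not part of the original post - a minimal sketch of the mallopt tuning mentioned above, assuming glibc (malloc.h). Lowering M_MMAP_THRESHOLD makes very large allocations come from mmap(), so free()/delete returns them to the kernel immediately instead of leaving a hole in the heap that a later, larger request cannot use:

        #include <malloc.h>   // glibc-specific: mallopt, M_MMAP_THRESHOLD, M_TRIM_THRESHOLD
        #include <new>
        #include <cstdio>

        int main() {
            // Serve allocations above 1 MiB with mmap() instead of growing the heap;
            // glibc's operator new goes through malloc, so this affects 'new' too.
            mallopt(M_MMAP_THRESHOLD, 1024 * 1024);
            mallopt(M_TRIM_THRESHOLD, 1024 * 1024);   // also trim the heap aggressively

            try {
                char* buf = new char[1200u * 1024 * 1024];   // ~1.2 GB request
                // ... process the request ...
                delete[] buf;

                buf = new char[1900u * 1024 * 1024];         // larger follow-up request
                delete[] buf;
            } catch (const std::bad_alloc&) {
                std::puts("allocation failed");
            }
            return 0;
        }

    Note that in a 32-bit process the larger request can still fail from plain address-space exhaustion, which no allocator tuning can fix.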

    Read the article

  • Detect TCP connection close when playing Flash video

    - by JoJo
    On the Flash client side, how do I detect when the server purposely closes the TCP connection to its video stream? I'll need to take action when this occurs - maybe attempt to restart the video or display an error message. Currently, the connection closing and the connection being slow look the same to me: the NetStream object dispatches a NetStream.Play.Stop event in both cases. When the connection is slow, it usually recovers by itself within seconds. I want to take action only when the connection is closed, not when it is slow.

    Here is what my general setup looks like. It's the basic NetConnection-NetStream-Video setup:

        this.vidConnection = new NetConnection();
        this.vidConnection.addEventListener(AsyncErrorEvent.ASYNC_ERROR, this.connectionAsyncError);
        this.vidConnection.addEventListener(IOErrorEvent.IO_ERROR, this.connectionIoError);
        this.vidConnection.addEventListener(NetStatusEvent.NET_STATUS, this.connectionNetStatus);
        this.vidConnection.connect(null);

        this.vidStream = new NetStream(this.vidConnection);
        this.vidStream.addEventListener(AsyncErrorEvent.ASYNC_ERROR, this.streamAsyncError);
        this.vidStream.addEventListener(IOErrorEvent.IO_ERROR, this.streamIoError);
        this.vidStream.addEventListener(NetStatusEvent.NET_STATUS, this.streamNetStatus);

        this.vid.attachNetStream(this.vidStream);

    None of the error events fire when the server closes the TCP connection or when the connection freezes up. Only the NetStream.Play.Stop event fires. Here is a trace of what happens from initially playing the video to the TCP connection closing:

        connection net status = NetConnection.Connect.Success
        playStream(http://192.168.0.44/flv/4d29104a9aefa)
        NetStream.Play.Start
        NetStream.Buffer.Flush
        NetStream.Buffer.Full
        NetStream.Buffer.Empty
        checkDimensions 0 0
        onMetaData
        NetStream.Buffer.Full
        NetStream.Buffer.Flush
        checkDimensions 960 544
        NetStream.Buffer.Empty
        NetStream.Buffer.Flush
        NetStream.Play.Stop

    Read the article

  • How do I set the LRECL in C#.NET?

    - by donde
    I have been trying to FTP a DTL file from .NET to what I believe is an AS/400. The error being reported back to me is "One or more lines have been truncated", and the admin says the file is coming over with 256 lines that have variable-length columns. I found this explanation online:

        We have to establish defaults because no specifics about the file exist. The default RECFM is V and LRECL is 256. This means that SAS will scan the input record looking for the CR & LF to tell us that we are at the EOR. If the marker isn't found within the limit of the LRECL, SAS discards the data from the LRECL value to the end of the record and adds a message to the Log that "One or more lines have been truncated".

    So I need to set the LRECL. How do I do this in C#/.NET?

        FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpfullpath);
        ftp.Credentials = new NetworkCredential(user, pwd);
        ftp.KeepAlive = false;
        ftp.UseBinary = false;
        ftp.Method = WebRequestMethods.Ftp.UploadFile;

        FileStream fs = File.OpenRead(inputfilepath + ftpfileName);
        byte[] buffer = new byte[fs.Length];
        fs.Read(buffer, 0, buffer.Length);
        fs.Close();

        Stream ftpstream = ftp.GetRequestStream();
        int i = 0;
        int intBlock = 1786;
        int intBuffLeft = buffer.Length;
        while (i < buffer.Length)
        {
            if (intBuffLeft >= 1786)
            {
                ftpstream.Write(buffer, i, intBlock);
            }
            else
            {
                ftpstream.Write(buffer, i, intBuffLeft);
            }
            i += intBlock;
            intBuffLeft -= 1786;
        }
        ftpstream.Close();

    Read the article

  • Read whole ASCII file into C++ std::string

    - by Arrieta
    Hello,

    I need to read a whole file into memory and place it in a C++ std::string. If I were to read it into a char array, the answer would be very simple:

        std::ifstream t;
        int length;
        char* buffer;

        t.open("file.txt");            // open input file
        t.seekg(0, std::ios::end);     // go to the end
        length = t.tellg();            // report location (this is the length)
        t.seekg(0, std::ios::beg);     // go back to the beginning
        buffer = new char[length];     // allocate memory for a buffer of appropriate dimension
        t.read(buffer, length);        // read the whole file into the buffer
        t.close();                     // close file handle

        // ... do stuff with buffer here ...

    Now, I want to do the exact same thing, but using a std::string instead of a char array. I want to avoid loops, i.e. I don't want to do:

        std::ifstream t;
        t.open("file.txt");
        std::string buffer;
        std::string line;
        while (t) {
            std::getline(t, line);
            // ... append line to buffer and go on
        }
        t.close();

    Any ideas?
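
    Not from the post - a minimal sketch of one loop-free way to do this, constructing the string straight from the stream with istreambuf_iterator (copying rdbuf() into a stringstream is a common alternative):

        #include <fstream>
        #include <iterator>
        #include <string>
        #include <iostream>

        int main() {
            std::ifstream t("file.txt");
            if (!t) { std::cerr << "cannot open file\n"; return 1; }

            // Build the string directly from the stream - no explicit loop,
            // no manual buffer management or delete[].
            std::string buffer((std::istreambuf_iterator<char>(t)),
                                std::istreambuf_iterator<char>());

            std::cout << buffer.size() << " bytes read\n";
            return 0;
        }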

    Read the article

  • Why do I need to close fds when reading and writing to the pipe?

    - by valentimsousa
    However, what if one of my processes needs to continuously write to the pipe while the other needs to read? The usual example seems to work only for one write and one read; I need multiple reads and writes.

        void executarComandoURJTAG(int newSock) {
            int input[2], output[2], estado, d;
            pid_t pid;
            char buffer[256];
            char linha[1024];

            pipe(input);
            pipe(output);

            pid = fork();

            if (pid == 0) {   // child
                close(0);
                close(1);
                close(2);
                dup2(input[0], 0);
                dup2(output[1], 1);
                dup2(output[1], 2);
                close(input[1]);
                close(output[0]);
                execlp("jtag", "jtag", NULL);
            } else {          // parent
                close(input[0]);
                close(output[1]);
                do {
                    read(newSock, linha, 1024);

                    /* Write the buffer to the pipe */
                    write(input[1], linha, strlen(linha));
                    close(input[1]);

                    while ((d = read(output[0], buffer, 255))) {
                        //buffer[d] = '\0';
                        write(newSock, buffer, strlen(buffer));
                        puts(buffer);
                    }
                    write(newSock, "END", 4);
                } while (strcmp(linha, "quit") != 0);
            }
        }
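
    Not from the question - a hedged sketch (written as C++ for consistency with the other examples on this page) of keeping one pipe open across many writes: the parent closes its write end only once, after the last write, since calling close() inside the loop would make every later write fail with EBADF or EPIPE.

        #include <unistd.h>
        #include <sys/wait.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            int input[2];
            pipe(input);                       // parent writes input[1], child reads input[0]

            pid_t pid = fork();
            if (pid == 0) {                    // child: copy its stdin to stdout
                close(input[1]);
                dup2(input[0], 0);
                close(input[0]);
                execlp("cat", "cat", (char*)nullptr);
                _exit(1);
            }

            close(input[0]);                   // parent keeps only the write end
            const char* lines[] = { "first\n", "second\n", "third\n" };
            for (const char* line : lines) {
                write(input[1], line, strlen(line));   // many writes on the same fd
            }
            close(input[1]);                   // EOF for the child only when finished
            waitpid(pid, nullptr, 0);
            return 0;
        }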

    Read the article

  • Saving information in the IO System

    - by djTeller
    Hi kernel gurus,

    I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios: 1) allow one write access to the /proc file and many read accesses to it; 2) the module should keep a buffer with the contents of the last successful write, and each write should be matched by a read from every reader.

    Consider scenario 2: a writer wrote something and there are two readers (A and B). A reads the contents of the buffer and then tries to read again; in this case it should go onto a wait queue and wait for the next message - it should not get the same buffer again. So I need to keep a map of all the PIDs that have already read the current buffer, and if they try to read again while the buffer has not changed, they should block until there is a new buffer.

    I'm trying to figure out whether there is a way to save that information without a map. I heard there are some redundant fields inside the I/O system that I can use to flag a process that has already read the current buffer. Can someone give me a tip on where I should look for that field? How can I save information about the current process without keeping a "map" of PIDs and buffers? Thanks!

    Read the article

  • ORA-06502: PL/SQL: numeric or value error: character string buffer too small with Oracle aggregate f

    - by Tunde
    Good day gurus,

    I have a script that populates tables on a regular basis; it crashed and gave the above error. The strange thing is that it had been running for close to 3 months on the production system with no problems and suddenly crashed last week. There have not been any changes to the tables as far as I know. Has anyone encountered something like this before? I believe it has something to do with the aggregate functions I'm using, but it worked initially. Please find below the part of the script, developed into a procedure, that I reckon gives the error.

        CREATE OR REPLACE PROCEDURE V1 IS
        --DECLARE
          v_a VARCHAR2(4000);
          v_b VARCHAR2(4000);
          v_c VARCHAR2(4000);
          v_d VARCHAR2(4000);
          v_e VARCHAR2(4000);
          v_f VARCHAR2(4000);
          v_g VARCHAR2(4000);
          v_h VARCHAR2(4000);
          v_i VARCHAR2(4000);
          v_j VARCHAR2(4000);
          v_k VARCHAR2(4000);
          v_l VARCHAR2(4000);
          v_m VARCHAR2(4000);
          v_n NUMBER(10);
          v_o VARCHAR2(4000);
        -- Procedure that populates DEMO table
        BEGIN
          -- Delete all from the DEMO table
          DELETE FROM DEMO;

          -- Populate fields in DEMO from DEMOV1
          INSERT INTO DEMO(ID, D_ID, CTR_ID, C_ID, DT_NAM, TP, BYR, ENY, ONG, SUMM, DTW, REV, LD, MD, STAT, CRD)
          SELECT ID, D_ID, CTR_ID, C_ID, DT_NAM, TP,
                 TO_NUMBER(TO_CHAR(BYR,'YYYY')),
                 TO_NUMBER(TO_CHAR(NVL(ENY,SYSDATE),'YYYY')),
                 CASE WHEN ENY IS NULL THEN 'Y' ELSE 'N' END,
                 SUMMARY, DTW, REV, LD, MD, '1', SYSDATE
          FROM DEMOV1;

          -- Loop through DEMO table
          FOR j IN (SELECT ID, CTR_ID, C_ID FROM DEMO) LOOP

            SELECT semic_concat(TXTDESC) INTO v_a FROM GEOT WHERE ID = j.ID;

            SELECT COUNT(*) INTO v_n
            FROM MERP M, PROJ P
            WHERE M.MID = P.COD AND ID = j.ID AND PROAC IS NULL;

            IF (v_n > 0) THEN
              SELECT semic_concat(PRO) INTO v_b
              FROM MERP M, PROJ P
              WHERE M.MID = P.COD AND ID = j.ID;
            ELSE
              SELECT semic_concat(PRO || '(' || PROAC || ')') INTO v_b
              FROM MERP M, PROJ P
              WHERE M.MID = P.COD AND ID = j.ID;
            END IF;

            SELECT semic_concat(VOCNAME('P02',COD)) INTO v_c FROM PAR WHERE ID = j.ID;
            SELECT semic_concat(VOCNAME('L05',COD)) INTO v_d FROM INST WHERE ID = j.ID;
            SELECT semic_concat(NVL(AUTHOR,'Anon') ||' ('||to_char(PUB,'YYYY')||') '||TITLE||', '||EDT) INTO v_e FROM REFE WHERE ID = j.ID;
            SELECT semic_concat(NAM) INTO v_f FROM EDM E, EDO EO WHERE E.EDMID = EO.EDOID AND ID = j.ID;
            SELECT semic_concat(VOCNAME('L08', COD)) INTO v_g FROM AVA WHERE ID = j.ID;

            SELECT or_concat(NAM) INTO v_o FROM CON WHERE ID = j.ID AND NAM = 'Unknown';
            IF (v_o = 'Unknown') THEN
              SELECT or_concat(JOBTITLE || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID;
            ELSE
              SELECT or_concat(NAM || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
            IF (v_i = ',') THEN
              v_i := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
            IF (v_j = ',') THEN
              v_j := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
            IF (v_k = ',') THEN
              v_k := null;
            ELSE
              SELECT commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
            IF (v_l = ',') THEN
              v_l := null;
            ELSE
              SELECT commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID;
            END IF;

            SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
            IF (v_m = ',') THEN
              v_m := null;
            ELSE
              SELECT commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID;
            END IF;

            -- Update DEMO table
            UPDATE DEMO
            SET GEOC  = v_a,
                PRO   = v_b,
                PAR   = v_c,
                INS   = v_d,
                REFER = v_e,
                ORGR  = v_f,
                AVAY  = v_g,
                CON   = v_h,
                DTH   = v_i,
                INST  = v_j,
                SA    = v_k,
                CC    = v_l,
                EDPR  = v_m,
                CTR   = (SELECT NAM FROM EDM WHERE EDMID = j.CTR_ID),
                COLL  = (SELECT NAM FROM EDM WHERE EDMID = j.C_ID)
            WHERE ID = j.ID;

          END LOOP;
        END V1;
        /

    The custom aggregate functions are commaencap_concat (encapsulates with commas), or_concat (concatenates with an 'or') and semic_concat (concatenates with a semicolon). The remaining tables used are all linked to the main table DEMO. I have checked the column sizes and there seems to be no problem. I tried executing the SELECT statements alone and they give the same error without populating the tables. Any clues? Many thanks for your anticipated support.

    Read the article

  • Groovy Grails, How do you stream or buffer a large file in a Controller's response?

    - by Julian Noye
    Hi guys,

    I have a controller that makes a connection to a URL to retrieve a CSV file. I am able to send the file in the response using the following code, and this works fine:

        def fileURL = "www.mysite.com/input.csv"
        def thisUrl = new URL(fileURL);
        def connection = thisUrl.openConnection();
        def output = connection.content.text;

        response.setHeader "Content-disposition", "attachment; filename=${'output.csv'}"
        response.contentType = 'text/csv'
        response.outputStream << output
        response.outputStream.flush()

    However, I don't think this method is appropriate for a large file, as the whole file is loaded into the controller's memory. I want to be able to read the file chunk by chunk and write it to the response chunk by chunk. Any ideas?

    Read the article

  • WSASend() with more than one buffer - could complete incomplete?

    - by Poni
    Say I post the following WSASend call (Windows I/O completion ports without callback functions):

        void send_data() {
            WSABUF wsaBuff[2];
            wsaBuff[0].len = 20;
            wsaBuff[1].len = 25;
            WSASend(sock, &wsaBuff[0], 2, ......);
        }

    When I get the "write_done" notification from the completion port, is it possible that wsaBuff[1] will be sent completely (25 bytes) yet wsaBuff[0] will be only partially sent (say 7 bytes)?
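
    Not from the question - whatever the answer on completion semantics is, a hedged sketch of the defensive handling: compare the byte count reported by the completion against the sum of the WSABUF lengths, and repost the unsent tail if it ever comes up short. The helper and its names are hypothetical, and it assumes the OVERLAPPED passed in has been reset for reuse:

        #include <winsock2.h>
        #include <windows.h>

        // Hypothetical helper: given the WSABUF array of the original WSASend and the
        // byte count reported by GetQueuedCompletionStatus, post another WSASend for
        // whatever was not sent yet.
        void resume_partial_send(SOCKET sock, WSABUF* bufs, DWORD bufCount,
                                 DWORD bytesSent, WSAOVERLAPPED* ov) {
            DWORD i = 0;
            while (i < bufCount && bytesSent >= bufs[i].len) {   // skip fully sent buffers
                bytesSent -= bufs[i].len;
                ++i;
            }
            if (i == bufCount) return;            // everything went out, nothing to do

            bufs[i].buf += bytesSent;             // advance past the partially sent bytes
            bufs[i].len -= bytesSent;

            DWORD ignored = 0;
            WSASend(sock, &bufs[i], bufCount - i, &ignored, 0, ov, NULL);
        }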

    Read the article

  • How to get the data buffer of screen in screen stack in Android?

    - by Glory Jiang
    I want to develop an Activity that allows the user to see the screens/activities of all running tasks and then select one of them to switch to the foreground. As far as I know, HTC Sense already seems to have an implementation of this for the Home screen: it displays thumbnails of all Home panels on one screen. Does anybody know how to access the screen stack in Android? Thanks.

    Read the article

  • How to replace a region in emacs with yank buffer contents?

    - by Mad Wombat
    When I use Vim or most modeless editors (Eclipse, NetBeans, etc.) I frequently do the following: if I have similar text blocks and I need to change them all, I change one, copy it (or use a non-deleting yank), select the next block I need, and paste the changed version over it. If I do the same thing in Emacs (select a region and paste with C-y), it doesn't replace the region; it just pastes at the cursor position. What is the way to do this in Emacs?

    Read the article

  • VST plugin : using FFT on audio input buffer with arbitrary size, how ?

    - by Led
    I'm getting interested in programming a VST plugin, and I have a basic knowledge of audio DSP and FFTs. I'd like to use VST.NET, and I'm wondering how to implement an FFT-based effect. The process code looks like:

        public override void Process(VstAudioBuffer[] inChannels, VstAudioBuffer[] outChannels)

    If I understand correctly, normally the FFT would be applied to the input, some processing would be done on the FFT'd data, and then an inverse FFT would create the processed sound buffer. But since the FFT works on a specified buffer size that will most probably be different from the (arbitrary) number of input/output samples, how would you handle this?
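
    Not from the question, and written in C++ rather than the C# that VST.NET uses, to stay consistent with the other sketches on this page - the language-agnostic idea is to queue incoming samples and run the (inverse) FFT in fixed-size hops, paying roughly one block of latency. The FFT call itself is left as a placeholder:

        #include <vector>
        #include <deque>
        #include <cstddef>

        // Decouple the host's arbitrary block size from a fixed FFT size by queueing
        // input and processing whole fftSize-sample blocks as they become available.
        class BlockProcessor {
        public:
            explicit BlockProcessor(std::size_t fftSize)
                : fftSize_(fftSize), pending_(fftSize, 0.0f) {}  // prime with silence = latency

            void process(const float* in, float* out, std::size_t n) {
                inQueue_.insert(inQueue_.end(), in, in + n);

                while (inQueue_.size() >= fftSize_) {
                    std::vector<float> block(inQueue_.begin(), inQueue_.begin() + fftSize_);
                    inQueue_.erase(inQueue_.begin(), inQueue_.begin() + fftSize_);
                    // FFT -> spectral processing -> inverse FFT would happen here.
                    pending_.insert(pending_.end(), block.begin(), block.end());
                }

                for (std::size_t i = 0; i < n; ++i) {          // hand back exactly n samples
                    out[i] = pending_.empty() ? 0.0f : pending_.front();
                    if (!pending_.empty()) pending_.pop_front();
                }
            }

        private:
            std::size_t fftSize_;
            std::deque<float> inQueue_;
            std::deque<float> pending_;
        };

    A real spectral effect would usually also window each block and overlap-add the results rather than processing back-to-back hops.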

    Read the article

  • How do I build a python string from a raw (binary) ctype buffer?

    - by fcrazy
    I'm playing with Python and ctypes and I can't figure out how to resolve this problem. I call a C function which fills a buffer with raw binary data. My code looks like this:

        class Client():
            def __init__(self):
                self.__BUFSIZE = 1024*1024
                self.__buf = ctypes.create_string_buffer(self.__BUFSIZE)
                self.client = ctypes.cdll.LoadLibrary(r"I:\bin\client.dll")

            def do_something(self):
                len_written = self.client.fill_raw_buffer(self.__buf, self.__BUFSIZE)
                my_string = repr(self.__buf.value)
                print my_string

    The problem is that I'm receiving binary data (containing 0x00 bytes) and it gets truncated when I try to build my_string. How can I build my_string if self.__buf contains null bytes (0x00)? Any idea is welcome. Thanks

    Read the article

  • In OCaml, how can I create an out_channel which writes to a string/buffer instead of a file on disk

    - by Tianyi Cui
    I have a function of type in_channel -> out_channel -> unit which will output something to an out_channel. Now I'd like to get its output as a string. Creating a temporary file to write to and read back seems ugly, so how can I do that? Is there any other method of creating an out_channel besides the Pervasives.open_out family? Actually, this function implements a REPL. What I really need is to test it programmatically, so I'd like to first wrap it into a function of type string -> string. For creating the in_channel it seems I can use Scanf.Scanning.from_string, but I don't know how to create the out_channel parameter.

    Read the article

  • c++: at what point should I start using "new char[N]" vs a static buffer "char[Nmax]"

    - by dan
    My question is about C++. Suppose I write a function to return a list of items to the caller. Each item has two logical fields: 1) an int ID, and 2) some data whose size may vary, let's say from 4 bytes up to 16 KB. So my question is whether to use a data structure like:

        struct item {
            int  field1;
            char field2[MAX_LEN];
        };

    OR, rather, to allocate field2 from the heap and require the caller to destroy it when done:

        struct item {
            int   field1;
            char *field2;   // new char[N] -- destroy[] when done!
        };

    Since the max size of field #2 is large, it makes sense that this would be allocated from the heap, right? So once I know the size N, I call field2 = new char[N] and populate it. Now, is this horribly inefficient? Is it worse in cases where N is always small, i.e. suppose I have 10000 items that have N=4?
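
    Not from the question - a hedged sketch of the usual middle ground: let a std::vector own the variable-sized payload, so each item holds exactly N bytes (no MAX_LEN waste for the N=4 case, no 16 KB fixed field) and the caller never has to remember a delete[]:

        #include <vector>
        #include <cstring>
        #include <iostream>

        struct Item {
            int field1;
            std::vector<char> field2;   // sized exactly to N, freed automatically
        };

        int main() {
            std::vector<Item> items;

            // Simulate items with very different payload sizes.
            const char* small_payload = "tiny";
            Item a;
            a.field1 = 1;
            a.field2.assign(small_payload, small_payload + std::strlen(small_payload));
            items.push_back(a);

            Item b;
            b.field1 = 2;
            b.field2.resize(16 * 1024, 'x');   // 16 KB item
            items.push_back(b);

            for (const Item& it : items)
                std::cout << "id " << it.field1 << ": " << it.field2.size() << " bytes\n";
            return 0;
        }

    The per-item heap allocation still exists, but ownership and cleanup are automatic, which is usually the bigger win than the raw allocation cost.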

    Read the article

  • How do you create a cbuffer or global variable that is gpu modifiable?

    - by bobobobo
    I'm implementing tone mapping in a pixel shader, for HDR lighting. The vertex shader outputs vertices with colors. I need to find the max color and save it in a global. However, when I try to write to the global in my HLSL code:

        // clamp the max color below by this color
        clamp( maxColor, output.color, float4( 1e6,1e6,1e6,1e6 ) ) ;

    I see:

        error X3025: global variables are implicitly constant, enable compatibility mode to allow modification

    What is the correct way to declare a shader global in D3D11 that the vertex shader can write to and the pixel shader can read? I realize this is a bit tough, since the vertex shaders are supposed to run in parallel, and introducing a shader global that they all write to means a lock.

    Read the article

  • Geometry instancing in OpenGL ES 2.0

    - by seahorse
    I am planning to do geometry instancing in OpenGL ES 2.0. Basically I plan to render the same geometry (a chair) maybe 1000 times in my scene. What is the best way to do this in OpenGL ES 2.0?

    I am considering passing the model-view mat4 as an attribute. Since attributes are per-vertex data, do I need to pass this same mat4 three times, once for each vertex of the same triangle (since the model-view matrix remains constant across the vertices of a triangle)? That would amount to a lot of extra data sent to the GPU (2 extra vertices * 16 floats * number of triangles). Or should I be sending the mat4 only once per triangle? But how is that possible using attributes, since attributes are defined as "per vertex" data? What is the best and most efficient way to do instancing in OpenGL ES 2.0?
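
    Not from the question - a hedged sketch of the common fallback, since core OpenGL ES 2.0 has no glDrawArraysInstanced: keep the chair vertices in one VBO and set the model-view matrix as a uniform once per instance, drawing in a loop, so nothing is duplicated per vertex. The attribute and uniform names here are assumptions:

        #include <GLES2/gl2.h>
        #include <vector>

        // One small draw call per instance, with the per-instance matrix as a uniform
        // (set once per chair, not repeated for every vertex of every triangle).
        void drawChairs(GLuint program, GLuint chairVbo, GLsizei vertexCount,
                        const std::vector<float>& modelViewMatrices /* 16 floats each */) {
            glUseProgram(program);
            glBindBuffer(GL_ARRAY_BUFFER, chairVbo);

            GLint posLoc = glGetAttribLocation(program, "a_position");   // assumed attribute name
            glEnableVertexAttribArray((GLuint)posLoc);
            glVertexAttribPointer((GLuint)posLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);

            GLint mvLoc = glGetUniformLocation(program, "u_modelView");  // assumed uniform name

            for (std::size_t i = 0; i + 16 <= modelViewMatrices.size(); i += 16) {
                glUniformMatrix4fv(mvLoc, 1, GL_FALSE, &modelViewMatrices[i]);
                glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            }
        }

    Where the ANGLE_instanced_arrays or EXT_instanced_arrays extension is available, true instancing with a per-instance attribute divisor avoids the per-instance draw-call overhead.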

    Read the article

  • Drawing simple geometric figures with DrawUserPrimitives?

    - by Navy Seal
    I'm trying to draw a simple triangle based on an array of vertices. I've been searching for a tutorial and I found a simple example on Riemer's, but I couldn't get it to work. I think it was made for XNA 3, and it seems there were some changes in XNA 4. Using this example: http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series1/The_first_triangle.php I get this error:

        Additional information: The current vertex declaration does not include all the elements required by the current vertex shader. TextureCoordinate0 is missing.

    I'm not English, so I'm having some trouble understanding everything. From what I understand the error is confusing, because I'm trying to draw a color-based triangle, not a texture-based one, and it shouldn't need a texture. Also, I saw some articles about dynamic shadows and lights, and I would like to know whether this is the kind of code used to do that, with some tweaks like culling, because I'm wondering whether this code is too heavy for real-time performance.

    Read the article

  • Outline Shader Effect for Orthogonal Geometry in XNA

    - by Griffin
    I just recently started learning the art of shading, but I can't give an outline width to 2D, concave geometry when restricted to a single vertex/pixel shader technique (thanks to XNA). The shape I need to outline has smooth per-vertex coloring, as well as opacity. The outline, which has smooth per-vertex coloring, variable width, and opacity, cannot interfere with the original shape's colors. A pixel-depth border detection algorithm won't work because pixel depth isn't a Shader Model 3.0 semantic. Expanding the geometry / redrawing won't work because it interferes with the original shape's colors. I'm wondering if I can do something with the stencil/depth buffer outside of the shader functions, since I have access to that through the graphics device, but I don't believe I'm able to manipulate actual values. How might I do this?

    Read the article
