Search Results

Search found 3754 results on 151 pages for 'vertex buffer'.

Page 28/151 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • Is "Turn Off Windows write-cache buffer flushing" safe on a laptop?

    - by Earlz
    My laptop's internal hard drive is a bit slow. I looked at the drive properties and there are two options: [X] Enable write caching on the device, and [ ] Turn off Windows write-cache buffer flushing on the device. As you can see, the first option is already checked, but the second isn't. I've heard the second option can really speed things up, but it also sounds risky. Is it safe to enable on a laptop that is rarely off AC power (but still has a battery as well)?

    Read the article

  • Data Structures: What are some common examples of problems where "buffers" come into action?

    - by Dark Templar
    I was just wondering if there were some "standard" examples that everyone uses as a basis for explaining the nature of a problem that requires the use of a buffer. What are some well-known problems in the real world that can see great benefits from using a buffer? Also, a little background or explanation as to why the problem benefits from using a buffer, and how the buffer would be implemented, would be insightful for understanding the concept!
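    A standard illustration here is the bounded ring (circular) buffer that sits between a producer and a consumer, e.g. keyboard input, audio capture, or network packets arriving faster than they can be processed; the buffer absorbs bursts and decouples the two rates. Below is a minimal single-threaded C++ sketch, illustrative only; a real producer/consumer buffer adds locking or atomics:

        #include <array>
        #include <cstddef>
        #include <optional>

        // Bounded ring buffer: writes wrap around a fixed-size array.
        template <typename T, std::size_t N>
        class RingBuffer {
            std::array<T, N> data_{};
            std::size_t head_ = 0;   // next slot to write
            std::size_t tail_ = 0;   // next slot to read
            std::size_t count_ = 0;  // elements currently stored
        public:
            bool push(const T& v) {            // producer side
                if (count_ == N) return false; // full: caller waits or drops
                data_[head_] = v;
                head_ = (head_ + 1) % N;
                ++count_;
                return true;
            }
            std::optional<T> pop() {           // consumer side
                if (count_ == 0) return std::nullopt; // empty
                T v = data_[tail_];
                tail_ = (tail_ + 1) % N;
                --count_;
                return v;
            }
        };

    The benefit is exactly the one asked about: the producer and consumer run at different, bursty rates, and the buffer turns that mismatch into a bounded amount of memory instead of lost data or blocking.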

    Read the article

  • How can I make Swig correctly wrap a char* buffer that is modified in C as a Java Something-or-other

    - by Ukko
    I am trying to wrap some legacy code for use in Java, and I was quite happy to see that Swig was able to handle the header file: it generates a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this:

        DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse);

    The integer returned by this function is an error code in case it fails. The arguments are: buff, a character buffer; len, the length of the data in the buffer; and curse, another character buffer that contains the result of calling DustyVoodoo. So, you can see where this is going: the result is actually coming back via the third argument. Also, len is confusing, since it may be the length of both buffers; they are always allocated as being the same size in the calling code, but given what DustyVoodoo does I don't think they need be the same. To be safe, both buffers should be the same size in practice, say 512 chars. The C code generated for the binding is as follows:

        SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls,
                                                           jstring jarg1, jint jarg2, jstring jarg3) {
          jint jresult = 0;
          char *arg1 = (char *) 0;
          int arg2;
          char *arg3 = (char *) 0;
          int result;
          (void)jenv;
          (void)jcls;
          arg1 = 0;
          if (jarg1) {
            arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0);
            if (!arg1) return 0;
          }
          arg2 = (int)jarg2;
          arg3 = 0;
          if (jarg3) {
            arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0);
            if (!arg3) return 0;
          }
          result = (int)PemnEncrypt(arg1, arg2, arg3);
          jresult = (jint)result;
          if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1);
          if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3);
          return jresult;
        }

    It is correct for what it does; however, it misses the fact that curse is not just an input: it is altered by the function and should be returned as an output. It also does not know that the Java Strings are really buffers and should be backed by a suitably sized array. I think that Swig can do the right thing here, I just can't figure out from the documentation how to tell Swig what it needs to know. Any typemap masters in the house?
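    For what it's worth, one direction worth checking (an assumption here, not verified against this header or any particular SWIG version) is SWIG's bundled cstring.i library, whose %cstring_bounded_mutable macro generates typemaps for exactly this pattern: a char* argument that the C function mutates and the caller reads back, bounded to a fixed size. In the SWIG interface (.i) file it might look like:

        %include "cstring.i"

        /* Treat 'curse' as a mutable, bounded output buffer of 512 chars
           (hypothetical size, matching the question's own guess). */
        %cstring_bounded_mutable(char *curse, 512);

        DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse);

    Whether cstring.i covers the Java backend in your SWIG version needs checking; if it doesn't, the same effect can be had with a hand-written %typemap that allocates a fixed-size buffer before the call and copies it back into the Java-side object afterwards.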

    Read the article

  • Prim's MST algorithm implementation with Java

    - by user1290164
    I'm trying to write a program that'll find the MST of a given undirected weighted graph with Kruskal's and Prim's algorithms. I've successfully implemented Kruskal's algorithm, but I'm having trouble with Prim's. To be more precise, I can't figure out how to actually build the Prim function so that it iterates through all the vertices in the graph. I'm getting some IndexOutOfBoundsException errors during program execution. I'm not sure how much information is needed for others to get the idea of what I have done so far, but hopefully there won't be too much useless information. This is what I have so far: I have a Graph, an Edge and a Vertex class. The Vertex class is mostly just information storage that contains the name (number) of the vertex. The Edge class can create a new Edge that gets the parameters (Vertex start, Vertex end, int edgeWeight). The class has methods to return the usual info like start vertex, end vertex and the weight. The Graph class reads data from a text file and adds new Edges to an ArrayList. The text file also tells us how many vertices the graph has, and that gets stored too. In the Graph class, I have a Prim() method that's supposed to calculate the MST:

        public ArrayList<Edge> Prim(Graph G) {
            ArrayList<Edge> edges = G.graph; // Copies the ArrayList with all edges in it.
            ArrayList<Edge> MST = new ArrayList<Edge>();
            Random rnd = new Random();
            Vertex startingVertex = edges.get(rnd.nextInt(G.returnVertexCount())).returnStartingVertex(); // This is just to randomize the starting vertex.
            // This is supposed to be the main loop to find the MST, but this is probably horribly wrong..
            while (MST.size() < returnVertexCount()) {
                Edge e = findClosestNeighbour(startingVertex);
                MST.add(e);
                visited.add(e.returnStartingVertex());
                visited.add(e.returnEndingVertex());
                edges.remove(e);
            }
            return MST;
        }

    The method findClosestNeighbour() looks like this:

        public Edge findClosestNeighbour(Vertex v) {
            ArrayList<Edge> neighbours = new ArrayList<Edge>();
            ArrayList<Edge> edges = graph;
            for (int i = 0; i < edges.size() - 1; ++i) {
                if (edges.get(i).endPoint() == s.returnVertexID() && !visited(edges.get(i).returnEndingVertex())) {
                    neighbours.add(edges.get(i));
                }
            }
            return neighbours.get(0); // This is the minimum weight edge in the list.
        }

    ArrayList<Vertex> visited and ArrayList<Edges> graph get constructed when creating a new graph. The visited() method is simply a boolean check to see if the ArrayList visited contains the Vertex we're thinking about moving to. I tested findClosestNeighbour() independently and it seemed to be working, but if someone finds something wrong with it then that feedback is welcome also. Mainly, though, as I mentioned, my problem is with actually building the main loop in the Prim() method, and if there's any additional info needed I'm happy to provide it. Thank you.

    Edit: To clarify my train of thought with the Prim() method: what I want to do is first randomize the starting point in the graph. After that, I will find the closest neighbour to that starting point. Then we'll add the edge connecting those two points to the MST, and also add the vertices to the visited list for checking later, so that we won't form any loops in the graph.

    Here's the error that gets thrown:

        Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
            at java.util.ArrayList.rangeCheck(Unknown Source)
            at java.util.ArrayList.get(Unknown Source)
            at Graph.findClosestNeighbour(graph.java:203)
            at Graph.Prim(graph.java:179)
            at MST.main(MST.java:49)

    Line 203 is return neighbours.get(0); in findClosestNeighbour(). Line 179 is Edge e = findClosestNeighbour(startingVertex); in Prim().
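    For comparison, the textbook shape of Prim's algorithm grows the tree from the whole visited set, not from a single wandering vertex: repeatedly take the cheapest edge that leaves the set, and stop after V - 1 edges (note the loop above compares MST.size(), an edge count, against the vertex count). A hedged sketch in C++ with a priority queue (hypothetical Edge type; an undirected graph stores each edge in both adjacency lists):

        #include <queue>
        #include <vector>

        struct Edge { int to; int weight; };

        // Prim's MST over an adjacency list; returns the total tree weight.
        int prim(const std::vector<std::vector<Edge>>& adj, int start) {
            int n = adj.size();
            std::vector<bool> visited(n, false);
            auto cmp = [](const Edge& a, const Edge& b) { return a.weight > b.weight; };
            std::priority_queue<Edge, std::vector<Edge>, decltype(cmp)> pq(cmp);
            visited[start] = true;
            for (const Edge& e : adj[start]) pq.push(e);
            int total = 0, inTree = 1;
            while (inTree < n && !pq.empty()) {
                Edge e = pq.top(); pq.pop();
                if (visited[e.to]) continue;   // edge leads back into the tree: skip
                visited[e.to] = true;          // cheapest edge leaving the visited set
                ++inTree;
                total += e.weight;
                for (const Edge& next : adj[e.to]) pq.push(next); // grow the frontier
            }
            return total;
        }

    The same structure carries over to the Java code above: the queue holds candidate edges out of every visited vertex, not just the most recent one, which is also what prevents findClosestNeighbour() from coming up empty and throwing on get(0).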

    Read the article

  • creating a Menu from SQLite values in Java

    - by shanahobo86
    I am trying to create a ListMenu using data from an SQLite database to define the name of each MenuItem. So in a class called menu.java I have defined the array String classes[] = {}; which should hold each menu item name. In a DBAdapter class I created a function so the user can insert info into a table (this all works fine, btw):

        public long insertContact(String name, String code, String location,
                String comments, int days, int start, int end, String type) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_NAME, name);
            initialValues.put(KEY_CODE, code);
            initialValues.put(KEY_LOCATION, location);
            initialValues.put(KEY_COMMENTS, comments);
            initialValues.put(KEY_DAYS, days);
            initialValues.put(KEY_START, start);
            initialValues.put(KEY_END, end);
            initialValues.put(KEY_TYPE, type);
            return db.insert(DATABASE_TABLE, null, initialValues);
        }

    It is the Strings inserted into KEY_NAME that I need to populate that String array with. Does anyone know if this is possible? Thanks so much for the help guys. If I implement the function suggested by Sam/Mango the program crashes; am I using it incorrectly, or is the error due to the unknown size of the array?

        DBAdapter db = new DBAdapter(this);
        String classes [] = db.getClasses();

    Edit: I should mention that if I manually define the array:

        String classes [] = {"test1", "test2", "test3", etc};

    it works fine. The error is a NullPointerException. Here's the logcat (sorry about the formatting). I hadn't initialized with db = helper.getReadableDatabase(); in the getClasses() function, but unfortunately fixing that didn't solve the problem.

        11-11 22:53:39.117: D/dalvikvm(17856): Late-enabling CheckJNI
        11-11 22:53:39.297: D/TextLayoutCache(17856): Using debug level: 0 - Debug Enabled: 0
        11-11 22:53:39.337: D/libEGL(17856): loaded /system/lib/egl/libGLES_android.so
        11-11 22:53:39.337: D/libEGL(17856): loaded /system/lib/egl/libEGL_adreno200.so
        11-11 22:53:39.357: D/libEGL(17856): loaded /system/lib/egl/libGLESv1_CM_adreno200.so
        11-11 22:53:39.357: D/libEGL(17856): loaded /system/lib/egl/libGLESv2_adreno200.so
        11-11 22:53:39.387: I/Adreno200-EGLSUB(17856): <ConfigWindowMatch:2078>: Format RGBA_8888.
        11-11 22:53:39.407: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x5c66d000 size:36593664 offset:32825344 fd:65
        11-11 22:53:39.417: E/(17856): Can't open file for reading
        11-11 22:53:39.417: E/(17856): Can't open file for reading
        11-11 22:53:39.417: D/OpenGLRenderer(17856): Enabling debug mode 0
        11-11 22:53:39.477: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x5ecd3000 size:40361984 offset:36593664 fd:68
        11-11 22:53:40.507: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x61451000 size:7254016 offset:3485696 fd:71
        11-11 22:53:41.077: I/Adreno200-EGLSUB(17856): <ConfigWindowMatch:2078>: Format RGBA_8888.
        11-11 22:53:41.077: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x61c4c000 size:7725056 offset:7254016 fd:74
        11-11 22:53:41.097: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x623aa000 size:8196096 offset:7725056 fd:80
        11-11 22:53:41.937: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x62b7b000 size:8667136 offset:8196096 fd:83
        11-11 22:53:41.977: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x61c4c000 size:7725056 offset:7254016
        11-11 22:53:41.977: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x623aa000 size:8196096 offset:7725056
        11-11 22:53:41.977: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x62b7b000 size:8667136 offset:8196096
        11-11 22:53:42.167: I/Adreno200-EGLSUB(17856): <ConfigWindowMatch:2078>: Format RGBA_8888.
        11-11 22:53:42.177: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x61c5d000 size:17084416 offset:13316096 fd:74
        11-11 22:53:42.317: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x63853000 size:20852736 offset:17084416 fd:80
        11-11 22:53:42.357: D/OpenGLRenderer(17856): Flushing caches (mode 0)
        11-11 22:53:42.357: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x5c66d000 size:36593664 offset:32825344
        11-11 22:53:42.357: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x5ecd3000 size:40361984 offset:36593664
        11-11 22:53:42.367: D/memalloc(17856): /dev/pmem: Unmapping buffer base:0x61451000 size:7254016 offset:3485696
        11-11 22:53:42.757: D/memalloc(17856): /dev/pmem: Mapped buffer base:0x5c56d000 size:24621056 offset:20852736 fd:65
        11-11 22:53:44.247: D/AndroidRuntime(17856): Shutting down VM
        11-11 22:53:44.247: W/dalvikvm(17856): threadid=1: thread exiting with uncaught exception (group=0x40ac3210)
        11-11 22:53:44.257: E/AndroidRuntime(17856): FATAL EXCEPTION: main
        11-11 22:53:44.257: E/AndroidRuntime(17856): java.lang.RuntimeException: Unable to instantiate activity ComponentInfo{niall.shannon.timetable/niall.shannon.timetable.menu}: java.lang.NullPointerException
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1891)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1992)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.access$600(ActivityThread.java:127)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1158)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.os.Handler.dispatchMessage(Handler.java:99)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.os.Looper.loop(Looper.java:137)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.main(ActivityThread.java:4441)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.reflect.Method.invokeNative(Native Method)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.reflect.Method.invoke(Method.java:511)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:823)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:590)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at dalvik.system.NativeStart.main(Native Method)
        11-11 22:53:44.257: E/AndroidRuntime(17856): Caused by: java.lang.NullPointerException
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.content.ContextWrapper.openOrCreateDatabase(ContextWrapper.java:221)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:157)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at niall.shannon.timetable.DBAdapter.getClasses(DBAdapter.java:151)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at niall.shannon.timetable.menu.<init>(menu.java:15)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.Class.newInstanceImpl(Native Method)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at java.lang.Class.newInstance(Class.java:1319)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.Instrumentation.newActivity(Instrumentation.java:1023)
        11-11 22:53:44.257: E/AndroidRuntime(17856): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1882)
        11-11 22:53:44.257: E/AndroidRuntime(17856): ... 11 more
        11-11 22:53:46.527: I/Process(17856): Sending signal. PID: 17856 SIG: 9

    Read the article

  • C++ Microsoft SAPI: How to set Windows text-to-speech output to a memory buffer?

    - by Vladimir
    Hi all, I have been trying to figure out how to "speak" a text into a memory buffer using Windows SAPI 5.1 but so far no success, even though it seems it should be quite simple. There is an example of streaming the synthesized speech into a .wav file, but no examples of how to stream it to a memory buffer. In the end I need to have the synthesized speech in a char* array in 16 kHz 16-bit little-endian PCM format. Currently I create a temp .wav file, redirect speech output there, then read it, but it seems to be a rather stupid solution. Anyone knows how to do that? Thanks!
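    One route that matches the documented SAPI interfaces (a sketch, not compiled here; error handling elided) is to bind an ISpStream to a growable memory IStream via ISpStream::SetBaseStream, point the voice at it, and read the PCM bytes out of the HGLOBAL behind the stream afterwards:

        #include <atlbase.h>
        #include <sapi.h>
        #include <sphelper.h>   // CSpStreamFormat helper from the SAPI SDK

        HRESULT hr = S_OK;
        CComPtr<ISpVoice>  cpVoice;
        CComPtr<ISpStream> cpSpStream;
        CComPtr<IStream>   cpMemStream;

        hr = cpVoice.CoCreateInstance(CLSID_SpVoice);
        hr = ::CreateStreamOnHGlobal(NULL, TRUE, &cpMemStream);  // growable memory stream

        CSpStreamFormat fmt(SPSF_16kHz16BitMono, &hr);           // 16 kHz 16-bit mono PCM
        hr = cpSpStream.CoCreateInstance(CLSID_SpStream);
        hr = cpSpStream->SetBaseStream(cpMemStream, fmt.FormatId(), fmt.WaveFormatExPtr());

        hr = cpVoice->SetOutput(cpSpStream, TRUE);
        hr = cpVoice->Speak(L"Hello world", SPF_DEFAULT, NULL);

        // GetHGlobalFromStream + GlobalLock then expose the raw little-endian
        // PCM bytes, ready to be copied into a char* array.

    This keeps everything in memory, so no temp .wav file or file reread is needed.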

    Read the article

  • Graph Tour with Uniform Cost Search in Java

    - by user324817
    Hi. I'm new to this site, so hopefully you guys don't mind helping a nub. Anyway, I've been asked to write code to find the shortest cost of a graph tour on a particular graph, whose details are read in from file. The graph is shown below: http://img339.imageshack.us/img339/8907/graphr.jpg This is for an Artificial Intelligence class, so I'm expected to use a decent search method (brute force has been allowed, but not for full marks). I've been reading, and I think that what I'm looking for is an A* search with a constant heuristic value, which I believe is a uniform cost search. I'm having trouble wrapping my head around how to apply this in Java. Basically, here's what I have:

        Vertex class -
            ArrayList<Edge> adjacencies;
            String name;
            int costToThis;

        Edge class -
            final Vertex target;
            public final int weight;

    Now at the moment, I'm struggling to work out how to apply the uniform cost notion to my desired goal path. Basically I have to start on a particular node, visit all other nodes, and end on that same node, with the lowest cost. As I understand it, I could use a PriorityQueue to store all of my travelled paths, but I can't wrap my head around how I show the goal state as the starting node with all other nodes visited. Here's what I have so far, which is pretty far off the mark:

        public static void visitNode(Vertex vertex) {
            ArrayList<Edge> firstEdges = vertex.getAdjacencies();
            for (Edge e : firstEdges) {
                e.target.costToThis = e.weight + vertex.costToThis;
                queue.add(e.target);
            }
            Vertex next = queue.remove();
            visitNode(next);
        }

    Initially this takes the starting node, then recursively visits the first node in the PriorityQueue (the path with the next lowest cost). My problem is basically: how do I stop my program from following a path specified in the queue if that path is at the goal state? The queue currently stores Vertex objects, but in my mind this isn't going to work, as I can't store whether other vertices have been visited inside a Vertex object. Help is much appreciated! Josh
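    A common way out (a sketch under the assumption that the graph is small) is to make the search state richer than a single vertex: a state is (cost so far, current vertex, set of visited vertices), with the visited set packed into a bitmask, and the goal test is "all vertices visited and back at the start". Uniform cost search then works unchanged, because the priority queue orders states by cost:

        #include <functional>
        #include <queue>
        #include <tuple>
        #include <vector>

        // Uniform-cost search over (cost, vertex, visited-bitmask) states.
        // cost[v][w] is the edge weight (use a large value for missing edges).
        int tourCost(const std::vector<std::vector<int>>& cost, int start) {
            int n = cost.size();
            const int FULL = (1 << n) - 1;
            using State = std::tuple<int, int, int>;  // (g, vertex, mask)
            std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
            pq.push(State(0, start, 1 << start));
            while (!pq.empty()) {
                auto [g, v, mask] = pq.top();
                pq.pop();
                if (mask == FULL && v == start) return g;  // tour closed: done
                for (int w = 0; w < n; ++w) {
                    if (w == v) continue;
                    pq.push(State(g + cost[v][w], w, mask | (1 << w)));
                }
            }
            return -1;  // no tour exists
        }

    A closed set keyed on (vertex, mask) stops states from being re-expanded and keeps the frontier manageable; it also answers the original worry, since "visited" lives in the search state, not in the Vertex object.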

    Read the article

  • How do I write the audio stream to a memory buffer instead of a file using DirectShow?

    - by yngvedh
    Hi, I have made a sample application that constructs a filter graph to capture audio from the microphone and stream it to a file. Is there any filter that allows me to stream to a memory buffer instead? I'm following the approach outlined in an article on MSDN and am currently using the CLSID_FileWriter object to write the audio to file. This works nicely, but I cannot figure out how to write to a memory buffer. Is there such a memory sink filter, or do I have to create it myself? (I would prefer one that is bundled with Windows XP.)
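    The stock filter usually suggested for this is the Sample Grabber (CLSID_SampleGrabber, declared in qedit.h and shipped with Windows XP), wired between the capture filter and a Null Renderer; its callback hands you every media sample while it is still in memory. A hedged sketch of the callback side (IUnknown kept to the bare minimum for a statically allocated callback):

        #include <dshow.h>
        #include <qedit.h>   // ISampleGrabber / ISampleGrabberCB

        // Receives each audio buffer in memory as the graph runs.
        class GrabberCB : public ISampleGrabberCB {
        public:
            // Called with a pointer to the raw bytes of each sample.
            STDMETHODIMP BufferCB(double sampleTime, BYTE* buf, long len) {
                // Append buf[0..len) to your own memory buffer here.
                return S_OK;
            }
            STDMETHODIMP SampleCB(double, IMediaSample*) { return E_NOTIMPL; }
            // Minimal IUnknown for a non-heap callback object (sketch only).
            STDMETHODIMP QueryInterface(REFIID riid, void** ppv) {
                if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown) {
                    *ppv = static_cast<void*>(this);
                    return S_OK;
                }
                return E_NOINTERFACE;
            }
            STDMETHODIMP_(ULONG) AddRef() { return 2; }
            STDMETHODIMP_(ULONG) Release() { return 1; }
        };

        // After building the graph (capture -> grabber -> null renderer):
        //   CComPtr<ISampleGrabber> pGrabber;  static GrabberCB cb;
        //   pGrabber->SetBufferSamples(FALSE);
        //   pGrabber->SetCallback(&cb, 1);   // 1 = deliver via BufferCB

    The Null Renderer (CLSID_NullRenderer) replaces the file writer as the sink, so nothing ever touches disk.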

    Read the article

  • How to do inline paste from system buffer in Vim?

    - by yetapb
    When pasting from the system buffer in a line like

        foo( someVal , <cursor is here>, someVal3);

    If I use "*p I get

        foo( someVal, , someVal3);
        <pasted text>

    If I use "*P I get

        <pasted text>
        foo( someVal, , someVal3);

    but I want

        foo( someVal, <pasted text>, someVal3 );

    How can I get the result I want?

    Edit: If there is a newline in the buffer, as @amardeep suspects, is there a way I can tell vim to ignore it?

    Read the article

  • How do I detect \r\n in a u_char buffer?

    - by aDi Adam
    I am trying to reconstruct HTTP content from packet sniffing in C. Right now I am able to save all the packets in a file, but I want to get rid of the headers in the first packet; they are also being saved, since they are part of the TCP payload. The actual body after the headers starts after a double "crlf", i.e. \r\n\r\n, in the HTTP response. How do I detect \r\n so that I can save only the following part of the buffer in the file? The buffer is of u_char type. I can't figure out the right call; I looked on Google and other places, but I mostly find C# snippets, nothing in C.
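    Since the payload is raw bytes with no NUL terminator, string functions like strstr are unsafe here; a plain length-bounded scan for the four-byte sequence is enough. A small sketch (u_char is just a typedef for unsigned char on most systems):

        #include <stddef.h>

        /* Returns the offset of the first byte after "\r\n\r\n",
           or -1 if the header/body boundary is not in this buffer. */
        long find_body_offset(const unsigned char *buf, size_t len)
        {
            size_t i;
            for (i = 0; i + 3 < len; i++) {
                if (buf[i] == '\r' && buf[i + 1] == '\n' &&
                    buf[i + 2] == '\r' && buf[i + 3] == '\n')
                    return (long)(i + 4);   /* body starts here */
            }
            return -1;
        }

    With the offset in hand, fwrite(buf + off, 1, len - off, fp) saves just the body; everything before the offset is the header block.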

    Read the article

  • Difference between tcp recv buffer and tcp receive window size?

    - by pradeepchhetri
    The following command shows the TCP receive buffer size in bytes:

        $ cat /proc/sys/net/ipv4/tcp_rmem
        4096    87380   4001344

    where the three values signify the min, default and max values respectively. Then I tried to find the TCP window size using the tcpdump command:

        $ sudo tcpdump -n -i eth0 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn and port 80 and host google.com'
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        16:15:41.465037 IP 172.16.31.141.51614 > 74.125.236.73.80: Flags [S], seq 3661804272, win 14600, options [mss 1460,sackOK,TS val 4452053 ecr 0,nop,wscale 6], length 0

    I got the window size to be 14600, which is 10 times the size of the MSS. Can anyone please tell me the relationship between the two?

    Read the article

  • SQL server could not connect: Lacked Sufficient Buffer Space...

    - by chumad
    I recently moved my app to a new server; the app is written in C# against the 3.5 framework. The hardware is faster but the OS is the same (Win Server 2003). No new software is running. On the prior hardware the app would run for months with no problems. Now, in this new install, I get the following error after about 3 days, and the only way to fix it is to reboot:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.)

    I have yet to find a service I can even shut down to make it work. Has anyone hit this before, and do you know a solution?

    Read the article

  • Problem on getting the vendor id and product id of a usb device

    - by new
    Hi... my application gets here and prints this:

        \?\usb#vid_04f2&pid_0111#5&1ba5a77f&0&2#{a5dcbf10-6530-11d2-901f-00c04fb951ed}

    then it goes back into the while loop, where it breaks in the else statement:

        if (GetLastError() == ERROR_INSUFFICIENT_BUFFER) {
            // Change the buffer size.
            if (buffer) LocalFree(buffer);
            buffer = (LPTSTR)LocalAlloc(LPTR, buffersize);
        }
        else {
            qDebug() << "Here it quits the application";
            // Insert error handling here.
            break;
        }

    Any ideas on this? The full source code follows:

        static GUID GUID_DEVINTERFACE_USB_DEVICE = { 0xA5DCBF10L, 0x6530, 0x11D2,
            { 0x90, 0x1F, 0x00, 0xC0, 0x4F, 0xB9, 0x51, 0xED } };

        HANDLE hInfo = SetupDiGetClassDevs(&GUID_DEVINTERFACE_USB_DEVICE, NULL, NULL,
                                           DIGCF_PRESENT | DIGCF_INTERFACEDEVICE);
        if (hInfo == INVALID_HANDLE_VALUE) {
            qDebug() << "invalid";
        }
        else {
            qDebug() << "valid handle";
            SP_DEVINFO_DATA DeviceInfoData;
            DeviceInfoData.cbSize = sizeof(SP_DEVINFO_DATA);
            SP_INTERFACE_DEVICE_DATA Interface_Info;
            Interface_Info.cbSize = sizeof(Interface_Info);
            BYTE Buf[1024];
            DWORD i;
            DWORD InterfaceNumber = 0;
            PSP_DEVICE_INTERFACE_DETAIL_DATA pspdidd = (PSP_DEVICE_INTERFACE_DETAIL_DATA)Buf;
            for (i = 0; SetupDiEnumDeviceInfo(hInfo, i, &DeviceInfoData); i++) {
                DWORD DataT;
                LPTSTR buffer = NULL;
                DWORD buffersize = 0;
                while (!SetupDiGetDeviceRegistryProperty(hInfo, &DeviceInfoData, SPDRP_DEVICEDESC,
                                                         &DataT, (PBYTE)buffer, buffersize, &buffersize)) {
                    if (GetLastError() == ERROR_INSUFFICIENT_BUFFER) {
                        // Change the buffer size.
                        if (buffer) LocalFree(buffer);
                        buffer = (LPTSTR)LocalAlloc(LPTR, buffersize);
                    }
                    else {
                        // Insert error handling here.
                        break;
                    }
                    qDebug() << (TEXT("Device Number %i is: %s\n"), i, buffer);
                    if (buffer) LocalFree(buffer);
                    if (GetLastError() != NO_ERROR && GetLastError() != ERROR_NO_MORE_ITEMS) {
                        // Insert error handling here.
                        qDebug() << "return false";
                    }
                    InterfaceNumber = 0; // this just returns the first one, you can iterate on this
                    if (SetupDiEnumDeviceInterfaces(hInfo, NULL, &GUID_DEVINTERFACE_USB_DEVICE,
                                                    InterfaceNumber, &Interface_Info)) {
                        printf("Got interface");
                        DWORD needed;
                        pspdidd->cbSize = sizeof(*pspdidd);
                        SP_DEVICE_INTERFACE_DETAIL_DATA *pDetData = NULL;
                        DWORD dwDetDataSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA) + 256;
                        pDetData = (SP_DEVICE_INTERFACE_DETAIL_DATA*)malloc(dwDetDataSize);
                        pDetData->cbSize = sizeof(SP_DEVICE_INTERFACE_DETAIL_DATA);
                        SetupDiGetDeviceInterfaceDetail(hInfo, &Interface_Info, pDetData,
                                                        dwDetDataSize, NULL, &DeviceInfoData);
                        qDebug() << pDetData->DevicePath;
                        qDebug() << QString::fromWCharArray(pDetData->DevicePath);
                        free(pDetData);
                    }
                    else {
                        printf("\nNo interface");
                        //ErrorExit((LPTSTR) "SetupDiEnumDeviceInterfaces");
                        if (GetLastError() == ERROR_NO_MORE_ITEMS)
                            printf(", since there are no more items found.");
                        else
                            printf(", unknown reason.");
                    }
                    // Cleanup
                    SetupDiDestroyDeviceInfoList(hInfo);
                }
            }
        }
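    If the goal is just the vendor and product IDs, note that they are already embedded in the device path printed above ("vid_04f2&pid_0111"); a hedged sketch that parses them out (assuming a narrow char path, e.g. after converting from wide):

        #include <cstdio>
        #include <cstring>

        // Extracts VID/PID from a device interface path such as
        // "\\?\usb#vid_04f2&pid_0111#...". Returns false if not found.
        bool parseVidPid(const char* devicePath, unsigned* vid, unsigned* pid)
        {
            const char* p = std::strstr(devicePath, "vid_");
            return p != NULL && std::sscanf(p, "vid_%04x&pid_%04x", vid, pid) == 2;
        }

    Separately, note that the break in the else branch fires for any error other than ERROR_INSUFFICIENT_BUFFER, including ERROR_INVALID_DATA when a device simply lacks the requested property, so logging GetLastError() there is worth doing before assuming the loop is broken.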

    Read the article

  • C# DirectSound - Capture buffers not continuous

    - by Wizche
    Hi, I'm trying to capture raw data from my line-in using DirectSound. My problem is that from one buffer to the next the data are inconsistent; if, for example, I capture a sine wave, I see a jump between my last buffer and the new one. To detect this, I use a graph widget to draw the first 500 elements of the last buffer and the 500 elements of the new one (the original post included a snapshot). I initialized my buffer this way:

        format = new WaveFormat {
            SamplesPerSecond = 44100,
            BitsPerSample = (short)bitpersample,
            Channels = (short)channels,
            FormatTag = WaveFormatTag.Pcm
        };
        format.BlockAlign = (short)(format.Channels * (format.BitsPerSample / 8));
        format.AverageBytesPerSecond = format.SamplesPerSecond * format.BlockAlign;
        _dwNotifySize = Math.Max(4096, format.AverageBytesPerSecond / 8);
        _dwNotifySize -= _dwNotifySize % format.BlockAlign;
        _dwCaptureBufferSize = NUM_BUFFERS * _dwNotifySize;            // my capture buffer
        _dwOutputBufferSize = NUM_BUFFERS * _dwNotifySize / channels;  // my output buffer

    I set my notifications, one at half the buffer and one at the end:

        _resetEvent = new AutoResetEvent(false);
        _notify = new Notify(_dwCapBuffer);
        bpn1 = new BufferPositionNotify();
        bpn1.Offset = ((_dwCapBuffer.Caps.BufferBytes) / 2) - 1;
        bpn1.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle();
        bpn2 = new BufferPositionNotify();
        bpn2.Offset = (_dwCapBuffer.Caps.BufferBytes) - 1;
        bpn2.EventNotifyHandle = _resetEvent.SafeWaitHandle.DangerousGetHandle();
        _notify.SetNotificationPositions(new BufferPositionNotify[] { bpn1, bpn2 });
        observer.updateSamplerStatus("Events listener initialization complete!\r\n");

    And here is how I process the events:

        /* Process thread */
        private void eventReceived()
        {
            int offset = 0;
            _dwCaptureThread = new Thread((ThreadStart)delegate
            {
                _dwCapBuffer.Start(true);
                while (isReady)
                {
                    _resetEvent.WaitOne(); // Notification received
                    /* Read the captured buffer */
                    Array read = _dwCapBuffer.Read(offset, typeof(short), LockFlag.None, _dwOutputBufferSize - 1);
                    observer.updateTextPacket("Buffer: " + count.ToString() + " # "
                        + read.GetValue(read.Length - 1).ToString() + " # "
                        + read.GetValue(0).ToString() + "\r\n");
                    /* Print last/new part of the buffer to the debug graph */
                    short[] graphData = new short[1001];
                    Array.Copy(read, graphData, 1000);
                    db.SetBufferDebug(graphData, 500);
                    observer.updateGraph(db.getBufferDebug());
                    offset = (offset + _dwOutputBufferSize) % _dwCaptureBufferSize;
                    /* Out buffer not used */
                    /*_dwDevBuffer.Write(0, read, LockFlag.EntireBuffer);
                    _dwDevBuffer.SetCurrentPosition(0);
                    _dwDevBuffer.Play(0, BufferPlayFlags.Default);*/
                }
                _dwCapBuffer.Stop();
            });
            _dwCaptureThread.Start();
        }

    Any advice? I'm sure I'm failing somewhere in the event processing, but I can't find where. I developed the same application with the WaveIn API and it worked well. Thanks a lot...

    Read the article

  • Some Async Socket Code - Help with Garbage Collection?

    - by divinci
    Hi all, I think this question is really about my understanding of garbage collection and variable references. But I will go ahead and throw out some code for you to look at.

        // Please note: do not use this code for async sockets; it is just to highlight my question.
        // SocketTransport
        // This is a simple wrapper class that is used as the 'state' object
        // when performing async socket reads/writes.
        public class SocketTransport
        {
            public Socket Socket;
            public byte[] Buffer;

            public SocketTransport(Socket socket, byte[] buffer)
            {
                this.Socket = socket;
                this.Buffer = buffer;
            }
        }

        // Entry point - creates a SocketTransport, then passes it as the state
        // object when asynchronously reading from the socket.
        public void ReadOne(Socket socket)
        {
            SocketTransport socketTransport_One = new SocketTransport(socket, new byte[10]);
            socketTransport_One.Socket.BeginReceive
            (
                socketTransport_One.Buffer,    // Buffer to store data
                0,                             // Buffer offset
                10,                            // Read length
                SocketFlags.None,              // SocketFlags
                new AsyncCallback(OnReadOne),  // Callback when BeginReceive completes
                socketTransport_One            // 'state' object to pass to the callback
            );
        }

        public void OnReadOne(IAsyncResult ar)
        {
            SocketTransport socketTransport_One = ar.AsyncState as SocketTransport;
            ProcessReadOneBuffer(socketTransport_One.Buffer); // Do processing

            // New read: create another! SocketTransport (what happens to first one?)
            SocketTransport socketTransport_Two = new SocketTransport(socket, new byte[10]);
            socketTransport_Two.Socket.BeginReceive
            (
                socketTransport_One.Buffer,
                0,
                10,
                SocketFlags.None,
                new AsyncCallback(OnReadTwo),
                socketTransport_Two
            );
        }

        public void OnReadTwo(IAsyncResult ar)
        {
            SocketTransport socketTransport_Two = ar.AsyncState as SocketTransport;
            ..............

    So my question is: the first SocketTransport to be created (socketTransport_One) has a strong reference to a Socket object (let's call it ~SocketA~). Once the async read is completed, a new SocketTransport object is created (socketTransport_Two), also with a strong reference to ~SocketA~. Q1. Will socketTransport_One be collected by the garbage collector when method OnReadOne exits, even though it still contains a strong reference to ~SocketA~? Thanks all!

    Read the article

  • Working with a string as an array of characters

    - by Malfunction
    I'm having some trouble with a string represented as an array of characters. What I'd like to do, as I would in Java, is the following:

        while (i < chars.length) {
            char ch = chars[i];
            if ((WORD_CHARS.indexOf(ch) >= 0) == punctuation) {
                String token = buffer.toString();
                if (token.length() > 0) {
                    parts.add(token);
                }
                buffer = new StringBuffer();
            }
            buffer.append(ch);
            i++;
        }

    What I'm doing is something like this:

        while (i < strlen(chars)) {
            char ch = chars[i];
            if (([WORD_CHARS rangeOfString:ch] >= 0) == punctuation) {
                NSString *token = buffer.toString();
                if ([token length] > 0) {
                    [parts addObject:token];
                }
                buffer = [NSMutableString string];
            }
            [buffer append(ch)];
            i++;
        }

    I'm not sure how I'm supposed to convert String token = buffer.toString(); to Objective-C, where buffer is an NSMutableString. Also, how do I check this if condition in Objective-C?

        if ((WORD_CHARS.indexOf(ch) >= 0) == punctuation)

    WORD_CHARS is an NSString. I'm also having trouble with appending ch to buffer. Any help is greatly appreciated.

    Read the article

  • Why does setting a geometry shader cause my sprites to vanish?

    - by ChaosDev
    My application has multiple screens with different tasks. Once I set a geometry shader on the device context for my custom terrain, it works and I get the desired results. But then, when I get back to the main menu, all sprites and text disappear. These sprites don't disappear when I use only pixel and vertex shaders. The sprites are being drawn through D3D11, of course, with specified view and projection matrices as well as an input layout, vertex, and pixel shader. I'm trying DeviceContext->ClearState() but it does not help. Any ideas?

        void gGeometry::DrawIndexedWithCustomEffect(gVertexShader* vs, gPixelShader* ps, gGeometryShader* gs = nullptr)
        {
            unsigned int offset = 0;
            auto context = mp_D3D->mp_Context;
            // set topology
            context->IASetPrimitiveTopology(m_Topology);
            // set input layout
            context->IASetInputLayout(mp_inputLayout);
            // set vertex and index buffers
            context->IASetVertexBuffers(0, 1, &mp_VertexBuffer->mp_Buffer, &m_VertexStride, &offset);
            context->IASetIndexBuffer(mp_IndexBuffer->mp_Buffer, mp_IndexBuffer->m_DXGIFormat, 0);
            // send constant buffers to shaders
            context->VSSetConstantBuffers(0, vs->m_CBufferCount, vs->m_CRawBuffers.data());
            context->PSSetConstantBuffers(0, ps->m_CBufferCount, ps->m_CRawBuffers.data());
            if (gs != nullptr)
            {
                context->GSSetConstantBuffers(0, gs->m_CBufferCount, gs->m_CRawBuffers.data());
                context->GSSetShader(gs->mp_D3DGeomShader, 0, 0); // after this call all sprites disappear
            }
            // set shaders
            context->VSSetShader(vs->mp_D3DVertexShader, 0, 0);
            context->PSSetShader(ps->mp_D3DPixelShader, 0, 0);
            // draw
            context->DrawIndexed(m_indexCount, 0, 0);
        }

        // sprites
        void gSpriteDrawer::Draw(gTexture2D* texture, const RECT& dest, const RECT& source,
                                 const Matrix& spriteMatrix, const float& rotation,
                                 Vector2d& position, const Vector2d& origin, const Color& color)
        {
            VertexPositionColorTexture* verticesPtr;
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            unsigned int TriangleVertexStride = sizeof(VertexPositionColorTexture);
            unsigned int offset = 0;
            float halfWidth = (float)dest.right / 2.0f;
            float halfHeight = (float)dest.bottom / 2.0f;
            float z = 0.1f;
            int w = texture->Width();
            int h = texture->Height();
            float tu = (float)source.right / (w);
            float tv = (float)source.bottom / (h);
            float hu = (float)source.left / (w);
            float hv = (float)source.top / (h);
            Vector2d t0 = Vector2d(hu + tu, hv);
            Vector2d t1 = Vector2d(hu + tu, hv + tv);
            Vector2d t2 = Vector2d(hu, hv + tv);
            Vector2d t3 = Vector2d(hu, hv + tv);
            Vector2d t4 = Vector2d(hu, hv);
            Vector2d t5 = Vector2d(hu + tu, hv);
            float ex = (dest.right / 2) + (origin.x);
            float ey = (dest.bottom / 2) + (origin.y);
            Vector4d v4Color = Vector4d(color.r, color.g, color.b, color.a);
            VertexPositionColorTexture vertices[] =
            {
                { Vector3d(dest.right - ex, -ey, z),              v4Color, t0 },
                { Vector3d(dest.right - ex, dest.bottom - ey, z), v4Color, t1 },
                { Vector3d(-ex, dest.bottom - ey, z),             v4Color, t2 },
                { Vector3d(-ex, dest.bottom - ey, z),             v4Color, t3 },
                { Vector3d(-ex, -ey, z),                          v4Color, t4 },
                { Vector3d(dest.right - ex, -ey, z),              v4Color, t5 },
            };
            auto mp_context = mp_D3D->mp_Context;
            // Lock the vertex buffer so it can be written to.
            mp_context->Map(mp_vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            // Get a pointer to the data in the vertex buffer.
            verticesPtr = (VertexPositionColorTexture*)mappedResource.pData;
            // Copy the data into the vertex buffer.
            memcpy(verticesPtr, (void*)vertices, (sizeof(VertexPositionColorTexture) * 6));
            // Unlock the vertex buffer.
            mp_context->Unmap(mp_vertexBuffer, 0);
            // set vertex buffer
            mp_context->IASetVertexBuffers(0, 1, &mp_vertexBuffer, &TriangleVertexStride, &offset);
            // set texture
            mp_context->PSSetShaderResources(0, 1, &texture->mp_SRV);
            // set matrix to shader
            mp_context->UpdateSubresource(mp_matrixBuffer, 0, 0, &spriteMatrix, 0, 0);
            mp_context->VSSetConstantBuffers(0, 1, &mp_matrixBuffer);
            // draw sprite
            mp_context->Draw(6, 0);
        }
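    For what it's worth, D3D11 pipeline state is sticky: the geometry shader bound for the terrain pass stays bound for every later draw, and the sprite vertex shader's output signature will not match its input, which is exactly the kind of silent stage mismatch that makes draws vanish. A likely fix (an assumption based on the code above) is to unbind the stage once the custom pass is done, rather than relying on ClearState:

        // at the end of DrawIndexedWithCustomEffect, after DrawIndexed:
        if (gs != nullptr)
        {
            context->GSSetShader(nullptr, nullptr, 0);  // restore the plain VS->PS path
        }

    The same discipline applies to any stage set only for one pass: whoever binds it unbinds it.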

    Read the article

  • problem with loading in .FBX meshes in DirectX 10

    - by N0xus
    I'm trying to load meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to call in a mesh with only a single line of code in my main game class. However, when I run the program the mesh renders incorrectly (the original post included a screenshot). In the debug output window the following errors keep appearing:

        D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]
        D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]

    The thing is, I've no idea how to fix this. The code I'm using does work; I've simply brought all of it into a new project of mine. There are no build errors, and this only appears when the game is running. The .fx file is as follows:

        float4x4 matWorld;
        float4x4 matView;
        float4x4 matProjection;

        struct VS_INPUT
        {
            float4 Pos:POSITION;
            float2 TexCoord:TEXCOORD;
        };

        struct PS_INPUT
        {
            float4 Pos:SV_POSITION;
            float2 TexCoord:TEXCOORD;
        };

        Texture2D diffuseTexture;
        SamplerState diffuseSampler
        {
            Filter = MIN_MAG_MIP_POINT;
            AddressU = WRAP;
            AddressV = WRAP;
        };

        //
        // Vertex Shader
        //
        PS_INPUT VS( VS_INPUT input )
        {
            PS_INPUT output=(PS_INPUT)0;
            float4x4 viewProjection=mul(matView,matProjection);
            float4x4 worldViewProjection=mul(matWorld,viewProjection);
            output.Pos=mul(input.Pos,worldViewProjection);
            output.TexCoord=input.TexCoord;
            return output;
        }

        //
        // Pixel Shader
        //
        float4 PS(PS_INPUT input ) : SV_Target
        {
            return diffuseTexture.Sample(diffuseSampler,input.TexCoord);
            //return float4(1.0f,1.0f,1.0f,1.0f);
        }

        RasterizerState NoCulling
        {
            FILLMODE=SOLID;
            CULLMODE=NONE;
        };

        technique10 Render
        {
            pass P0
            {
                SetVertexShader( CompileShader( vs_4_0, VS() ) );
                SetGeometryShader( NULL );
                SetPixelShader( CompileShader( ps_4_0, PS() ) );
                SetRasterizerState(NoCulling);
            }
        }

    In my game, the .fx file and model are called and set as follows. Loading in the shader file:

        //Set the shader flags - BMD
        DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
        #if defined( DEBUG ) || defined( _DEBUG )
            dwShaderFlags |= D3D10_SHADER_DEBUG;
        #endif
        ID3D10Blob * pErrorBuffer=NULL;
        if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx"), NULL, NULL,
                "fx_4_0", dwShaderFlags, 0, md3dDevice, NULL, NULL, &m_pEffect,
                &pErrorBuffer, NULL ) ) )
        {
            char * pErrorStr = ( char* )pErrorBuffer->GetBufferPointer();
            //If the creation of the Effect fails then a message box will be shown
            MessageBoxA( NULL, pErrorStr, "Error", MB_OK );
            return false;
        }
        //Get the technique called Render from the effect, we need this for rendering later on
        m_pTechnique=m_pEffect->GetTechniqueByName("Render");
        //Number of elements in the layout
        UINT numElements = TexturedLitVertex::layoutSize;
        //Get the Pass description, we need this to bind the vertex to the pipeline
        D3D10_PASS_DESC PassDesc;
        m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
        //Create Input layout to describe the incoming buffer to the input assembler
        if (FAILED(md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements,
                PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_pVertexLayout ) ) )
        {
            return false;
        }

    Model loading:

        m_pTestRenderable=new CRenderable();
        //m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices);
        m_pModelLoader = new CModelLoader();
        m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice,"armoredrecon.fbx" );
        m_pGameObjectTest = new CGameObject();
        m_pGameObjectTest->setRenderable( m_pTestRenderable );
        // Set primitive topology, how are we going to interpret the vertices in the vertex buffer
        md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
        if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice,
                TEXT( "armoredrecon_diff.png" ), NULL, NULL, &m_pTextureShaderResource, NULL ) ) )
        {
            MessageBox( NULL, TEXT( "Can't load Texture" ), TEXT( "Error" ), MB_OK );
            return false;
        }
        m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource();
        m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource );

    Finally, the draw function code:

        //All drawing will occur between the clear and present
        m_pViewMatrixVariable->SetMatrix( ( float* )m_matView );
        m_pWorldMatrixVariable->SetMatrix( ( float* )m_pGameObjectTest->getWorld() );
        //Get the stride (size) of a vertex, we need this to tell the pipeline the size of one vertex
        UINT stride = m_pTestRenderable->getStride();
        //The offset from start of the buffer to where our vertices are located
        UINT offset = m_pTestRenderable->getOffset();
        ID3D10Buffer * pVB=m_pTestRenderable->getVB();
        //Bind the vertex buffer to the input assembler stage
        md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset );
        md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 );
        //Get the Description of the technique, we need this in order to loop through each pass in the technique
        D3D10_TECHNIQUE_DESC techDesc;
        m_pTechnique->GetDesc( &techDesc );
        //Loop through the passes in the technique
        for( UINT p = 0; p < techDesc.Passes; ++p )
        {
            //Get a pass at current index and apply it
            m_pTechnique->GetPassByIndex( p )->Apply( 0 );
            //Draw call
            md3dDevice->DrawIndexed(m_pTestRenderable->getNumOfIndices(),0,0);
            //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0);
        }

    Is there anything I've clearly done wrong or am missing? I've spent two weeks trying to work out what on earth I've done wrong, to no avail. Any insight a fresh pair of eyes could give on this would be great.
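    Both errors point at the input layout fed to the IA stage rather than at the draw code: the vertex layout coming out of the model loader must line up, semantic for semantic, with VS_INPUT (POSITION plus TEXCOORD). A hedged guess is that TexturedLitVertex::layout carries extra or reordered elements (e.g. a NORMAL) left over from another shader. A layout that would match this particular .fx file looks like the sketch below (offsets assume tightly packed float3 position plus float2 texcoord vertices, which the loader must actually emit):

        D3D10_INPUT_ELEMENT_DESC layout[] =
        {
            { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
              D3D10_INPUT_PER_VERTEX_DATA, 0 },
            { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12,
              D3D10_INPUT_PER_VERTEX_DATA, 0 },
        };
        UINT numElements = sizeof(layout) / sizeof(layout[0]);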

    Read the article

  • Geometry Shader input vertices order

    - by NPS
    MSDN specifies (link) that when using the triangleadj type of input to the GS, it should provide me with 6 vertices in a specific order: the 1st vertex of the triangle being processed, a vertex of an adjacent triangle, the 2nd vertex of the triangle being processed, another vertex of an adjacent triangle, and so on. So if I wanted to create a pass-through shader (i.e. output the same triangle I got on input and nothing else), I should return vertices 0, 2 and 4. Is that correct? Well, apparently it isn't, because when I did just that and ran my app the vertices were flickering (changing positions, disappearing, showing again, or something like that). But when I instead output vertices 0, 1 and 2, the app rendered the mesh correctly. I could provide some code, but it seems like the problem is in the input vertex order, not the code itself. So what order do input vertices to the GS come in?

    Read the article

  • Coherence Warnings in WLS

    - by john.graves(at)oracle.com
    With 11g (10.3.4 WLS), Coherence is now built into many applications. I've been noticing errors in my OSB logs like these:

        ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340643> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): UnicastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>
        ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340650> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): PreferredUnicastUdpSocket failed to set receive buffer size to 1428 packets (1.99MB); actual size is 6%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>
        ####<10/03/2011 10:45:40 AM EST> <Warning> <Coherence> <osb-jeos> <osb_server1> <Logger@324239121 3.6.0.4> <<anonymous>> <> <583c10bfdbd326ba:-8c38159:12e9d02c829:-8000-0000000000000003> <1299714340659> <BEA-000000> <Oracle Coherence 3.6.0.4 (member=n/a): MulticastUdpSocket failed to set receive buffer size to 714 packets (1023KB); actual size is 12%, 89 packets (127KB). Consult your OS documentation regarding increasing the maximum socket buffer size. Proceeding with the actual value may cause sub-optimal performance.>

    I was able to "fix" this on my Ubuntu system by adding the following lines to the /etc/sysctl.conf file:

        # Setup networking for coherence
        # maximum receive socket buffer size, default 131071
        net.core.rmem_max = 2000000
        # maximum send socket buffer size, default 131071
        net.core.wmem_max = 1000000
        # default receive socket buffer size, default 65535
        net.core.rmem_default = 2524287
        # default send socket buffer size, default 65535
        net.core.wmem_default = 2524287

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop card doesn't do displacement mapping nor glVertexAttribDivisor, so I can't test it out; I'm left sharing here: With geomipmapping, the grid at any factor is transposable - if you pass in an offset, say as a uniform, you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom? So I have some questions:

    - Does it work, in theory and in practice?
    - Does anyone do it? Does this technique have a name? Papers, demos, anything I can look at?
    - Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?
    - Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?)
    - Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?
    - Is mipmapping the displacement map going to work, on current and future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (to usually 64 pixels). I'm calculating the size with this formula:

        gl_PointSize = in_point_size * some_factor / distance_to_camera

    to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from those vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better way to organise per-particle data? Should I use buffers or vertex arrays, or is there no difference, given that each frame I have to update all particles' data anyway? About particle simulation: where should it be done, on the CPU or on the vertex processors? Something tells me the mobile's CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without a particle size limitation, for mobile devices, would be appreciated. (Animation of the camera passing through particles should look realistic.)

    Read the article

< Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >