Search Results

Search found 3512 results on 141 pages for 'circular buffer'.

Page 15/141 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery with CG and photographic techniques. Our background art is rendered off-line by a production-grade renderer. To let the runtime imagery interact properly with the background art, I wrote a program that converts the depth output from Mental Ray into a texture, and a pixel shader that draws a quad so that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera angle changes between scenes should be animated. The animation itself is straightforward to produce from our CG models. We intend to encode it with some HD video codec such as H.264.

    The problem is that in order to keep our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Because of the bandwidth involved, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the back end.

    I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion video facilities in the engine yet, but we will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and we'll also need to play two streams simultaneously in lockstep, in some cases with audio on one of them (for the cinematics).

    I'm also wondering whether it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or whether it's better to do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB.

    Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems out there would best suit this use case.
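    To make the trade-off concrete, here is a minimal C++ sketch of the kind of naive packing alluded to above (not the mapping from the linked paper; all names here are illustrative): the 16-bit depth is simply split into a high byte carried in luma and a low byte carried in one chroma channel. It shows why such schemes degrade once the codec subsamples and quantizes the chroma planes, and why the paper's more elaborate, branching mapping exists.

        #include <cstdint>

        struct YCbCr8 { uint8_t y, cb, cr; };

        // Naive encode: coarse depth in luma (survives compression well),
        // fine depth in a chroma channel (mangled by 4:2:0 subsampling and quantization).
        inline YCbCr8 encodeDepthNaive(uint16_t depth)
        {
            YCbCr8 out;
            out.y  = static_cast<uint8_t>(depth >> 8);
            out.cb = static_cast<uint8_t>(depth & 0xFF);
            out.cr = 128;                               // unused, neutral chroma
            return out;
        }

        inline uint16_t decodeDepthNaive(const YCbCr8& c)
        {
            return static_cast<uint16_t>((c.y << 8) | c.cb);
        }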

    Read the article

  • Performing a simple buffer overflow on Mac OS X 10.6

    - by REALFREE
    I'm trying to learn about stack-based overflows and wrote some simple code to exploit the stack. Somehow it doesn't work at all and only shows "Abort trap" on my machine (Mac OS X Leopard). I guess Mac OS X treats the overflow differently; it won't let me overwrite memory through C code. For example: strcpy(buffer, input) // let's say char buffer[6] but input is 7 bytes. On a Linux machine this code successfully overwrites the adjacent stack memory, but it is prevented on Mac OS X (Abort trap). Does anyone know how to perform a simple stack-based overflow on a Mac?
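    For reference, a minimal sketch of the textbook case being described, with a guess at why it aborts (an assumption about the poster's setup, not a confirmed diagnosis): the "Abort trap" typically comes from the compiler's stack protector or the fortified __strcpy_chk detecting the smash, rather than from the overflow itself being impossible. Rebuilding without those guards, e.g. with gcc's -fno-stack-protector and -D_FORTIFY_SOURCE=0, usually lets the copy silently run past the buffer as in older Linux tutorials.

        #include <string.h>

        int main(int argc, char **argv)
        {
            char buffer[6];
            if (argc > 1)
                strcpy(buffer, argv[1]);   /* no bounds check: writes past the
                                              buffer when argv[1] exceeds 5 chars */
            return 0;
        }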

    Read the article

  • Redirecting exec output to a buffer or file

    - by devin
    I'm writing a C program where I fork(), exec(), and wait(). I'd like to take the output of the program I exec'ed and write it to a file or buffer. For example, if I exec ls, I want to write file1 file2 etc. to a buffer/file. I don't think there is a way to read the child's stdout directly, so does that mean I have to use a pipe? Is there a general procedure here that I haven't been able to find?
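    A pipe is indeed the usual mechanism. As a rough sketch (error handling trimmed, the ls command chosen only for illustration): the parent creates a pipe, the child dup2()s the write end onto its stdout before exec, and the parent reads the other end into a buffer or writes it to a file.

        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int fds[2];
            if (pipe(fds) == -1) { perror("pipe"); return 1; }

            pid_t pid = fork();
            if (pid == 0) {                      /* child: ls writes into the pipe */
                dup2(fds[1], STDOUT_FILENO);     /* stdout now points at the pipe  */
                close(fds[0]);
                close(fds[1]);
                execlp("ls", "ls", (char *)NULL);
                _exit(127);                      /* only reached if exec fails     */
            }

            close(fds[1]);                       /* parent: read the child's output */
            char buf[4096];
            ssize_t n;
            while ((n = read(fds[0], buf, sizeof buf - 1)) > 0) {
                buf[n] = '\0';
                fputs(buf, stdout);              /* or fwrite() it to a file        */
            }
            close(fds[0]);
            waitpid(pid, NULL, 0);
            return 0;
        }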

    Read the article

  • Understanding z-buffer formats in DirectX

    - by numerical25
    As I understand it, a z-buffer is just an array that determines which object should be drawn in front of another. Each element in the array represents a pixel and holds a value from 0.0 to 1.0. My question is: if that is all a z-buffer does, then why are some buffers 24-bit, some 32-bit, and some 16-bit?
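    One way to see what the bit count buys (a rough sketch; note that 32-bit depth formats are often floating-point rather than integer, so the integer arithmetic below is only the simplest illustration): the 0.0-1.0 range is stored in N bits, so an N-bit buffer can only distinguish about 2^N depth levels, and more bits mean finer steps and less z-fighting between nearby surfaces.

        #include <cstdio>
        #include <cmath>

        int main()
        {
            const int widths[] = {16, 24, 32};
            for (int bits : widths) {
                double levels = std::ldexp(1.0, bits) - 1.0;   // 2^bits - 1 distinct values
                std::printf("%2d-bit depth buffer: %.0f levels, smallest step ~ %.3g\n",
                            bits, levels, 1.0 / levels);
            }
            return 0;
        }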

    Read the article

  • MFC: Reading entire file to buffer...

    - by deostroll
    I've meddled with some code but I am unable to read the entire file properly... a lot of junk gets appended to the output. How do I fix this?

        // wmfParser.cpp : Defines the entry point for the console application.
        //
        #include "stdafx.h"
        #include "wmfParser.h"
        #include <cstring>

        #ifdef _DEBUG
        #define new DEBUG_NEW
        #endif

        // The one and only application object
        CWinApp theApp;

        using namespace std;

        int _tmain(int argc, TCHAR* argv[], TCHAR* envp[])
        {
            int nRetCode = 0;

            // initialize MFC and print and error on failure
            if (!AfxWinInit(::GetModuleHandle(NULL), NULL, ::GetCommandLine(), 0))
            {
                // TODO: change error code to suit your needs
                _tprintf(_T("Fatal Error: MFC initialization failed\n"));
                nRetCode = 1;
            }
            else
            {
                // TODO: code your application's behavior here.
                CFile file;
                CFileException exp;
                if( !file.Open( _T("c:\\sample.txt"), CFile::modeRead, &exp ) ){
                    exp.ReportError();
                    cout<<'\n';
                    cout<<"Aborting...";
                    system("pause");
                    return 0;
                }

                ULONGLONG dwLength = file.GetLength();
                cout<<"Length of file to read = " << dwLength << '\n';

                /*
                BYTE* buffer;
                buffer=(BYTE*)calloc(dwLength, sizeof(BYTE));
                file.Read(buffer, 25);
                char* str = (char*)buffer;
                cout<<"length of string : " << strlen(str) << '\n';
                cout<<"string from file: " << str << '\n';
                */

                char str[100];
                file.Read(str, sizeof(str));
                cout << "Data : " << str <<'\n';

                file.Close();
                cout<<"File was closed\n";

                //AfxMessageBox(_T("This is a test message box"));
                system("pause");
            }

            return nRetCode;
        }
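    A hedged sketch of one likely culprit: CFile::Read returns the number of bytes actually read, and the buffer above is never NUL-terminated, so streaming it with operator<< keeps printing whatever garbage follows in memory. Sizing the buffer from GetLength() and terminating it at the returned length avoids that (assuming a text file that fits in memory):

        ULONGLONG dwLength = file.GetLength();
        char* buffer = new char[(size_t)dwLength + 1];     // +1 for the terminator
        UINT bytesRead = file.Read(buffer, (UINT)dwLength);
        buffer[bytesRead] = '\0';                          // terminate exactly where the data ends
        cout << "Data : " << buffer << '\n';
        delete [] buffer;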

    Read the article

  • Win32 C/C++ Load Image from memory buffer

    - by Bruno
    I want to load an image (.bmp) file in a Win32 application, but I do not want to use the standard LoadBitmap/LoadImage from the Windows API: I want to load it from a buffer that is already in memory. I can easily load a bitmap directly from a file and draw it on the screen, but this issue has me stuck :( What I'm looking for is a function that works like this: HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height); Thanks.
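    One hedged sketch, assuming the buffer holds a complete .bmp file image (headers included): the BITMAPFILEHEADER/BITMAPINFO are already in the buffer, so the pixel bits can be handed to CreateDIBitmap directly. The signature below differs from the one asked for (it needs an HDC, and width/height come from the header); if the buffer held raw pixels instead, CreateBitmap or CreateDIBSection with a hand-filled BITMAPINFO would be the equivalent route.

        #include <windows.h>

        HBITMAP LoadBitmapFromBuffer(const char* buffer, HDC hdc)
        {
            const BITMAPFILEHEADER* bfh =
                reinterpret_cast<const BITMAPFILEHEADER*>(buffer);
            if (bfh->bfType != 0x4D42)                      // 'BM' signature check
                return NULL;

            const BITMAPINFO* bmi =
                reinterpret_cast<const BITMAPINFO*>(buffer + sizeof(BITMAPFILEHEADER));
            const void* bits = buffer + bfh->bfOffBits;     // pixel data starts here

            return CreateDIBitmap(hdc, &bmi->bmiHeader, CBM_INIT,
                                  bits, bmi, DIB_RGB_COLORS);
        }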

    Read the article

  • How to copy depth buffer to CPU memory in DirectX?

    - by Ashwin
    I have code in OpenGL that uses glReadPixels to copy the depth buffer to a CPU memory buffer:

        glReadPixels(0, 0, w, h, GL_DEPTH_COMPONENT, GL_FLOAT, dbuf);

    How do I achieve the same in DirectX? I have looked at a similar question which gives the solution for copying the RGB buffer. I've tried to write similar code to copy the depth buffer:

        IDirect3DSurface9* d3dSurface;
        d3dDevice->GetDepthStencilSurface(&d3dSurface);

        D3DSURFACE_DESC d3dSurfaceDesc;
        d3dSurface->GetDesc(&d3dSurfaceDesc);

        IDirect3DSurface9* d3dOffSurface;
        d3dDevice->CreateOffscreenPlainSurface(
            d3dSurfaceDesc.Width,
            d3dSurfaceDesc.Height,
            D3DFMT_D32F_LOCKABLE,
            D3DPOOL_SCRATCH,
            &d3dOffSurface,
            NULL);

        // FAILS: D3DERR_INVALIDCALL
        D3DXLoadSurfaceFromSurface(
            d3dOffSurface, NULL, NULL,
            d3dSurface, NULL, NULL,
            D3DX_FILTER_NONE, 0);

        // Copy from offscreen surface to CPU memory ...

    The code fails on the call to D3DXLoadSurfaceFromSurface. It returns the error value D3DERR_INVALIDCALL. What is wrong with my code?
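    For context, a hedged sketch of the workaround often suggested for D3D9 (an approach to try, not a confirmed fix): ordinary depth-stencil surfaces generally cannot be copied or locked, but a lockable depth format can be read back directly when the hardware supports it. The idea is to render with a D3DFMT_D32F_LOCKABLE (or D3DFMT_D16_LOCKABLE) surface as the device's z-buffer and then LockRect it; width and height below stand in for the back-buffer size.

        IDirect3DSurface9* lockableDepth = NULL;
        d3dDevice->CreateDepthStencilSurface(
            width, height, D3DFMT_D32F_LOCKABLE,
            D3DMULTISAMPLE_NONE, 0,              // lockable depth cannot be multisampled
            FALSE, &lockableDepth, NULL);
        d3dDevice->SetDepthStencilSurface(lockableDepth);

        // ... render the frame ...

        D3DLOCKED_RECT lr;
        if (SUCCEEDED(lockableDepth->LockRect(&lr, NULL, D3DLOCK_READONLY))) {
            // lr.pBits points at rows of 32-bit float depths, lr.Pitch bytes apart
            lockableDepth->UnlockRect();
        }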

    Read the article

  • Clarification On Write-Caching Policy, Its Underlying Options And How It Applies To Hard Drives And Solid-State Drives

    - by Boris_yo
    In the last week, after doing more research on the subject, I have been wondering about what I have been neglecting all these years: understanding write-caching policy, which I always left on its default setting. Write-caching policy improves write performance and consists of write-back caching and write-cache buffer flushing. This is how I understand it all, but correct me if I have erred somewhere:

    Write-through caching itself is not part of the write-caching policy per se; it is when data is written to both the cache and the storage device, so that if Windows needs that data again later, it is retrieved from the cache and not from the storage device. That means only read performance is improved, since there is no need to wait for the storage device to read the required data again. Because the data is still written to the storage device, write performance isn't improved, and there is no risk of data loss or corruption in case of power failure or system crash; only the data in the cache is lost. This option seems to be enabled by default and is recommended for removable devices, with no need to use the "Safely Remove Hardware" function on the user's part.

    Write-back caching is similar to the above, but without immediately writing the data to the storage device; data is periodically released from the cache and written to the storage device when it is idle. In my opinion this option improves both read and write performance, but it presents a risk if a power failure or system crash occurs: not only could data that was eventually to be written to the storage device be lost, but it could also cause file inconsistencies or a corrupted file system. Write-back caching cannot be enabled together with write-through caching, and it is not recommended if no backup power supply is available.

    Write-cache buffer flushing, I reckon, is similar to write-back caching but enables immediate release and writing of data from the cache to the storage device right before a power outage occurs, though I don't know whether it also applies to the occasional system crash. This option seems to complement the write-back cache, reducing or potentially eliminating the risk of data loss or file-system corruption.

    I have questions about the relevance of the last two options to today's modern SSDs, in order to get the best performance with the least wear: I know that traditional hard drives come with onboard cache (I wonder what type of cache that is), but do SSDs also come with cache? Assuming they do, is this cache faster than their NAND flash and system RAM, and is it worth taking the risk of utilizing it by enabling write-back caching? I read somewhere that a storage device's cache is generally faster than RAM, but I want to be sure. Additionally, I read that write caching should be enabled, since data that is to be written later to NAND flash is kept for a while in the cache; provided there is data that gets modified a lot before finally being written, holding this data and releasing it periodically reduces the number of writes to the SSD, thereby reducing wear. Now, regarding write-cache buffer flushing, I heard that SSD controllers are so fast by themselves that enabling this option is not required, because they manage flushing. However, once again, I don't know whether SSDs have their own onboard cache and whether or not it is faster than their NAND flash and system RAM, because if it is, keeping this option enabled would make sense.
    Recently I posted a question about an issue with my Intel 330 SSD 120GB, which was the main reason to do this deeper research, suspecting that the write-caching policy was the culprit behind the SSD's freezing issue, on the assumption that data being released is what causes the freezes. Currently I have the write cache enabled and write-cache buffer flushing disabled, because I believe the SSD controller's management of write-cache flushing and Windows' write-cache buffer flushing are conflicting with each other. Since I want to troubleshoot in small steps to finally determine the source of the issue, I have decided to start with the write-caching policy, then move on to drivers, switch to AHCI later on, and finally disable DIPM (device-initiated power management) through a registry modification, thanks to @TomWijsman

    Read the article

  • [SOLVED] How to create an FBO with a stencil buffer in OpenGL ES 2.0?

    - by Alphones
    I need a stencil buffer on the 3GS to render a planar shadow; polygon offset doesn't work perfectly and still has z-fighting problems, so I use the stencil buffer to make the shadow correct. It works on the win32 gles2 emulator, but not on the iPhone. After I added a post effect to the whole scene, the stencil buffer stopped working even on the win32 gles2 emulator. I tried to attach a stencil buffer to the FBO, but the screen turns black. Here's my code:

        glGenRenderbuffers(1, &dbo);      // depth buffer
        glBindRenderbuffer(GL_RENDERBUFFER, dbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24_OES, widthGL, heightGL);

        glGenRenderbuffers(1, &sbo);      // stencil buffer
        glBindRenderbuffer(GL_RENDERBUFFER, sbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, widthGL, heightGL);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, dbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, sbo); // this makes the whole screen black

    The eglContext is created with STENCIL_SIZE=8, and it works without RTT. I tried changing the RenderbufferStorage formats for both the depth buffer and the stencil buffer, but none of them works. Is there anything I have missed? Does the stencil buffer have to be packed with the depth buffer? (I cannot find things like GL_DEPTH24_STENCIL8 ...)
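    A hedged sketch of the packed depth+stencil route that iOS GLES2 drivers generally expect (assuming the OES_packed_depth_stencil extension is present, which is worth checking via glGetString(GL_EXTENSIONS)): a single GL_DEPTH24_STENCIL8_OES renderbuffer is attached to both the depth and the stencil attachment points, instead of two separate renderbuffers.

        GLuint dsbo = 0;
        glGenRenderbuffers(1, &dsbo);
        glBindRenderbuffer(GL_RENDERBUFFER, dsbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, widthGL, heightGL);

        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,   GL_RENDERBUFFER, dsbo);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, dsbo);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // the format combination isn't supported on this device/emulator
        }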

    Read the article

  • PHP output buffer settings ignored by server

    - by Ecom Evolution
    I have been trying to flush the output of certain scripts to the browser on demand, but they do not work on our production server. For instance, I tried running the "Phoca Changing Collation tool" (find it on Google) and I don't see any output until the script finishes executing. I've tried immediately flushing the buffer in other scripts that work fine on any server but this one, using the following code: echo "something"; ob_flush(); flush(); Setting "ob_implicit_flush(1);" doesn't help either.

    The server is Apache 2.2.21 with PHP 5.2.17 running on Linux. You can see our php.ini file here if that will help: http://www.smallfiles.org/download/1123/php.ini.html

    This isn't the only problem we are having with the server ignoring in-script directives. The server also ignores timeout code such as: ini_set('max_execution_time', 900*60); AND set_time_limit(86400); The script always times out at the php.ini default. It doesn't seem to matter whether the script is executed from IE or Firefox.

    I tried "ini_set('zlib.output_compression_level', 'Off');" and checked that it is "Off" in the php.ini file. The code "apache_setenv('no-gzip', 1);" causes a fatal error, so I tried uploading a .htaccess file with the "mod_gzip_on No" directive. Neither helps. I tried running Apache as fcgi and suphp, but got the same results. The server is NOT in safe mode. Pulling my hair out!

    Read the article

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu server). We have been facing a high recv-q problem for quite some time: every few hours the application hangs and stops reading data from the socket. In the thread dump we found the stack trace below.

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:150)
            at java.net.SocketInputStream.read(SocketInputStream.java:121)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
            - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
            at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
            at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
            at org.smpp.Receiver.receiveAsync(Receiver.java:351)
            at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
            at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
            at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently we have been facing issues with the recv-q size getting stuck, wherein the receive buffer gets stuck at some point in time. The recv-q size is not decreasing, and as a result we are losing a lot of hits from the other side; our application is not accepting any data.

    Read the article

  • Circular references in TFS Database Edition

    - by Jaco Pretorius
    I'm using TFS Database Edition to script a number of databases. Many of the databases have references between them - for example, a view in database A might do select ... from B..TableX. This works fine as long as database B is also a project in the solution. The problem comes in when I have objects in database A referencing database B and database B referencing objects in database A. It seems like Visual Studio needs to build the projects in order, which is obviously not possible in this case. How do you deal with circular references between database projects in TFS Database Edition?

    Read the article

  • Non-blocking TCP buffer issues.

    - by Poni
    Hi! I think I have a problem. I have two TCP apps connected to each other which use Winsock I/O completion ports to send/receive data (non-blocking sockets). Everything works just fine until there's a data transfer burst, at which point the sender starts sending incorrect/malformed data. I allocate the buffers I'm sending on the stack, and if I understand correctly that's the wrong thing to do, because these buffers should remain untouched until I get the "write complete" notification from the IOCP. Take this for example:

        void some_function()
        {
            char cBuff[1024];
            // filling cBuff with some data
            WSASend(...);   // sending cBuff, non-blocking mode
            // filling cBuff with other data
            WSASend(...);   // again, sending cBuff
            // ..... and so forth!
        }

    If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes. Correct? Now, what strategies can I implement in order to maintain a big pool of such buffers, how should I handle them, and how can I avoid a performance penalty, etc.? And if I am to use such buffers, that means I have to copy the data to be sent from the source buffer into the temporary one; in that case I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I already copied. Are you with me? Please let me know if I wasn't clear.
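    For illustration, a hedged sketch of the per-operation context pattern commonly used with IOCP (names are illustrative, not a specific library's API): each outstanding WSASend gets its own heap-allocated context holding the OVERLAPPED, the WSABUF and a copy of the data, and that context is freed only when the completion for that exact operation is dequeued from the port.

        #include <winsock2.h>
        #include <cstring>

        struct SendContext {
            WSAOVERLAPPED overlapped;   // first member, so it can be recovered from the LPOVERLAPPED
            WSABUF        wsaBuf;
            char          data[1024];
        };

        bool PostSend(SOCKET s, const char* payload, int len)
        {
            SendContext* ctx = new SendContext();
            if (len <= 0 || len > (int)sizeof(ctx->data)) { delete ctx; return false; }

            std::memset(&ctx->overlapped, 0, sizeof(ctx->overlapped));
            std::memcpy(ctx->data, payload, len);
            ctx->wsaBuf.buf = ctx->data;
            ctx->wsaBuf.len = (ULONG)len;

            DWORD sent = 0;
            int rc = WSASend(s, &ctx->wsaBuf, 1, &sent, 0, &ctx->overlapped, NULL);
            if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
                delete ctx;             // immediate failure: no completion will arrive
                return false;
            }
            return true;                // ctx is deleted later, in the completion handler
        }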

    Read the article

  • Circular reference fix?

    - by SXMC
    Hi! I have a Player class in a separate unit, as follows:

        TPlayer = class
        private
          ...
          FWorld: TWorld;
          ...
        public
          ...
        end;

    I also have a World class in a separate unit, as follows:

        TWorld = class
        private
          ...
          FPlayer: TPlayer;
          ...
        public
          ...
        end;

    I have done it this way so that the Player can get data from the world via FWorld, and so that the other objects in the world can get the player data in a similar manner. As you can see, this results in a circular reference (and therefore does not work). I have read that this implies bad code design, but I just can't think of any better way. What would be a better way to do it? Cheers!

    Read the article

  • castle IOC - resolving circular references

    - by Frederik
    Hi, a quick question about my MVP implementation: currently I have the code below, in which both the presenter and the view are resolved via the container. The presenter then calls View.Init to pass itself to the view. I was wondering, however, whether there is a way to let the container fix my circular reference (view - presenter, presenter - view).

        class Presenter : IPresenter
        {
            private View _view;

            public Presenter(IView view, ...)
            {
                _view = view;
                _view.Init(this);
            }
        }

        class View : IView
        {
            private IPresenter _presenter;

            public void Init(IPresenter presenter)
            {
                _presenter = presenter;
            }
        }

    Kind regards, Frederik

    Read the article

  • EVP_PKEY from char buffer in x509 (PKCS7)

    - by sid
    Hi all, I have a DER certificate from which I am retrieving the public key into an unsigned char buffer as follows. Is this the right way to get it?

        pStoredPublicKey = X509_get_pubkey(x509);
        if(pStoredPublicKey == NULL) {
            printf(": publicKey is NULL\n");
        }
        if(pStoredPublicKey->type == EVP_PKEY_RSA) {
            RSA *x = pStoredPublicKey->pkey.rsa;
            bn = x->n;
        }
        else if(pStoredPublicKey->type == EVP_PKEY_DSA) {
        }
        else if(pStoredPublicKey->type == EVP_PKEY_EC) {
        }
        else {
            printf(" : Unkown publicKey\n");
        }

        // extracts the bytes from the public key & converts them into an unsigned char buffer
        buf_len = (size_t) BN_num_bytes(bn);
        key = (unsigned char *)malloc(buf_len);
        n = BN_bn2bin(bn, (unsigned char *) key);
        for (i = 0; i < n; i++) {
            printf("%02x\n", (unsigned char) key[i]);
        }
        keyLen = EVP_PKEY_size(pStoredPublicKey);
        EVP_PKEY_free(pStoredPublicKey);

    With this unsigned char buffer, how do I get back the EVP_PKEY for RSA? Or can I use the following?

        EVP_PKEY *d2i_PublicKey(int type, EVP_PKEY **a, unsigned char **pp, long length);
        int i2d_PublicKey(EVP_PKEY *a, unsigned char **pp);
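    For what it's worth, a hedged sketch of how the two functions quoted above are typically used together: i2d_PublicKey serialises the whole public key to DER bytes, and d2i_PublicKey rebuilds an EVP_PKEY of the stated type from them, which avoids dumping only the RSA modulus with BN_bn2bin (and so losing the public exponent). The non-const pp parameter matches the signature quoted in the question; newer OpenSSL versions declare it const.

        #include <openssl/evp.h>
        #include <vector>

        EVP_PKEY* RoundTripPublicKey(EVP_PKEY* key)
        {
            int len = i2d_PublicKey(key, NULL);        // length query
            if (len <= 0) return NULL;

            std::vector<unsigned char> der(len);
            unsigned char* p = der.data();             // i2d advances this pointer
            i2d_PublicKey(key, &p);

            // ... der now holds the key bytes and could be stored or sent ...

            unsigned char* q = der.data();
            return d2i_PublicKey(EVP_PKEY_RSA, NULL, &q, len);
        }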

    Read the article

  • Typescript + requirejs: How to handle circular dependencies?

    - by Aymeric Gaurat-Apelli
    I am in the process of porting my JS+requirejs code to TypeScript+requirejs. One scenario I haven't found out how to handle is circular dependencies. Require.js returns undefined for modules that also depend on the current one, and to solve this problem you can do:

    MyClass.js:

        define(["Modules/dataModel"], function(dataModel){
            return function(){
                dataModel = require("Modules/dataModel");
                ...
            }
        });

    Now in TypeScript, I have:

    MyClass.ts:

        import dataModel = require("Modules/dataModel");

        class MyClass {
            dataModel: any;
            constructor(){
                this.dataModel = require("Modules/dataModel"); // <- this kind of works but I lose typechecking
                ...
            }
        }

    How can I call require a second time and still keep the type-checking benefits of TypeScript? dataModel is a module { ... }

    Read the article

  • Symbian: clear buffer of RSocket object

    - by Heinz
    Hi, I have to come back once again to sockets in Symbian. The code to set up a connection to a remote server looks as follows:

        TInetAddr serverAddr;
        TUint iPort = 111;
        TRequestStatus iStatus;
        TSockXfrLength len;

        TInt res = iSocketSrv.Connect();
        res = iSocket.Open(iSocketSrv, KAfInet, KSockStream, KProtocolInetTcp);
        res = iSocket.SetOpt(KSoTcpSendWinSize, KSolInetTcp, 0x10000);

        serverAddr.SetPort(iPort);
        serverAddr.SetAddress(INET_ADDR(11,11,179,154));

        iSocket.Connect(serverAddr, iStatus);
        User::WaitForRequest(iStatus);

    Over the iSocket I receive packets of variable size. On very few occasions it happens that such a packet is corrupted. What I would like to do then is clear all the data that is currently in the iSocket buffer and ready to be read. I have not seen any method of RSocket that allows me to clear the contents of the buffer. Does anyone know how to do that? If possible, I would like to avoid using RecvOneOrMore() or a similar recv function to clear the buffer. Thanks

    Read the article

  • Circular Reference for Parent Link on Work Item cannot be resolved by Retry

    - by Atters
    I receive the following error:

        OH-TFS-Connector-0051: Operation failed getCollectionMetaData. Server Error : TF201063: Adding a Parent link to work item 1737 would result in a circular relationship. To create this link, evaluate the existing links, and remove one of the other links in the cycle.

    I have completely flattened out the Work Items in the source project. When retrying the migration, the timestamp is modified on the pending errors, but the issues are not resolved. These Work Items now have no parents or children in the source project, so I'm wondering whether the retry list is no longer valid; however, there doesn't appear to be a way to have it update. I can run the whole migration again, but it takes 5-7 hours just to do the work items, so it would be great if there were a quick fix.

    Read the article

  • Is there a circular hash function?

    - by Phil H
    Thinking about this question on testing string rotation, I wondered: is there such a thing as a circular/cyclic hash function? E.g. h(abcdef) = h(bcdefa) = h(cdefab), etc. Uses for this include scalable algorithms which can check n strings against each other to see which are rotations of others. I suppose the essence of the hash is to extract information which is order-specific but not position-specific. Maybe something that finds a deterministic 'first position', rotates to it and hashes the result? It all seems plausible, but slightly beyond my grasp at the moment; it must be out there already...
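    One hedged sketch of exactly that last idea (canonicalise first, then hash): pick the lexicographically smallest rotation as the deterministic "first position", rotate the string to it, and hash the result, so any two strings that are rotations of one another hash equal. The rotation search below is the simple O(n^2) version; Booth's algorithm would find the same rotation in linear time.

        #include <functional>
        #include <string>

        std::size_t circularHash(const std::string& s)
        {
            const std::size_t n = s.size();
            std::size_t best = 0;
            for (std::size_t i = 1; i < n; ++i) {
                // compare the rotation starting at i with the best one found so far
                for (std::size_t k = 0; k < n; ++k) {
                    char a = s[(i + k) % n], b = s[(best + k) % n];
                    if (a != b) { if (a < b) best = i; break; }
                }
            }
            std::string canonical = s.substr(best) + s.substr(0, best);
            return std::hash<std::string>()(canonical);
        }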

    Read the article

  • C#: How to resolve this circular dependency?

    - by Rosarch
    I have a circular dependency in my code, and I'm not sure how to resolve it. I am developing a game. An NPC has three components, responsible for thinking, sensing, and acting. These components need access to the NPC controller to get access to its model, but the controller needs these components to do anything. Thus, both take each other as arguments in their constructors:

        ISenseNPC sense = new DefaultSenseNPC(controller, worldQueryEngine);
        IThinkNPC think = new DefaultThinkNPC(sense);
        IActNPC act = new DefaultActNPC(combatEngine, sense, controller);
        controller = new ControllerNPC(act, think);

    (The above example has the parameters simplified a bit.) Without act and think, the controller can't do anything, so I don't want to allow it to be initialized without them. The reverse is basically true as well. What should I do?

    ControllerNPC using think and act to update its state in the world:

        public class ControllerNPC
        {
            // ...
            public override void Update(long tick)
            {
                // ...
                act.UpdateFromBehavior(CurrentBehavior, tick);
                CurrentBehavior = think.TransitionState(CurrentBehavior, tick);
            }
            // ...
        }

    DefaultSenseNPC using the controller to determine whether it's colliding with anything:

        public class DefaultSenseNPC
        {
            // ...
            public bool IsCollidingWithTarget()
            {
                return worldQuery.IsColliding(controller, model.Target);
            }
            // ...
        }

    Read the article

  • Circular Dependencies in XML file

    - by user3006081
    My Android XML layout file keeps raising an exception during rendering: "Circular dependencies cannot exist in RelativeLayout". The exception details are logged in Window > Show View > Error Log. I can't figure out why. Here is my code:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            xmlns:tools="http://schemas.android.com/tools"
            android:layout_width="match_parent" android:layout_height="match_parent"
            tools:context="com.amandhapola.ribbit.LoginActivity"
            android:background="@drawable/background_fill" >

            <ImageView android:id="@+id/backgroundImage"
                android:layout_width="match_parent" android:layout_height="match_parent"
                android:layout_alignParentLeft="true" android:layout_alignParentTop="true"
                android:scaleType="fitStart" android:src="@drawable/background"
                android:contentDescription="@string/content_desc_background"/>

            <TextView android:id="@+id/title"
                android:layout_width="wrap_content" android:layout_height="wrap_content"
                android:layout_alignParentTop="true" android:layout_above="@+id/subtitle"
                android:layout_centerHorizontal="true" android:textSize="60sp"
                android:layout_marginTop="32dp" android:textColor="@android:color/white"
                android:textStyle="bold" android:text="@string/app_name" />

            <TextView android:id="@+id/subtitle"
                android:layout_centerHorizontal="true"
                android:layout_width="wrap_content" android:layout_height="wrap_content"
                android:layout_below="@+id/title" android:layout_above="@+id/usernameField"
                android:textSize="13sp" android:textColor="@android:color/white"
                android:textStyle="bold" android:text="@string/subtitle"/>

            <EditText android:id="@+id/usernameField"
                android:layout_width="match_parent" android:layout_height="wrap_content"
                android:layout_above="@+id/passwordField" android:layout_below="@+id/subtitle"
                android:layout_alignParentLeft="true" android:ems="10"
                android:hint="@string/username_hint" />

            <EditText android:id="@+id/passwordField"
                android:layout_width="match_parent" android:layout_height="wrap_content"
                android:layout_below="@+id/usernameField" android:layout_above="@+id/loginButton"
                android:layout_alignParentLeft="true" android:layout_marginBottom="43dp"
                android:ems="10" android:hint="@string/password_hint"
                android:inputType="textPassword" />

            <Button android:id="@+id/loginButton"
                android:layout_width="match_parent" android:layout_height="wrap_content"
                android:layout_below="@+id/passwordField" android:layout_above="@+id/signUpText"
                android:layout_alignParentLeft="true" android:text="@string/login_button_label" />

            <TextView android:id="@+id/signUpText"
                android:layout_width="wrap_content" android:layout_height="wrap_content"
                android:layout_marginTop="12dp" android:layout_centerHorizontal="true"
                android:textColor="@android:color/white" android:layout_below="@+id/loginButton"
                android:text="@string/sign_up_text" />

        </RelativeLayout>

    Read the article
