Search Results

Search found 4936 results on 198 pages for 'unity 2d'.

Page 7/198 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Doing ImageMagick-like stuff in Unity (using a mask to edit a texture)

    - by Codejoy
    There is this tutorial for ImageMagick: http://www.imagemagick.org/Usage/masking/#masks. I was wondering if there is some way to mimic that behavior in Unity (like cutting an image up based on a black mask image that turns the masked parts transparent) and then trim that image in-game. I'm trying to hack around with the webcam feature and reproduce some of the ImageMagick/OpenCV stuff in Unity, but I am sadly unequipped in masks, shaders, etc. when it comes to Unity skill/knowledge. Not even sure where to start.
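    A minimal sketch of one place to start (an illustration, not from the question): on the CPU, copy the pixels once and use the mask's brightness as the output alpha, so black in the mask becomes transparent. This assumes both textures are readable (Read/Write enabled in import settings) and the same size.

        using UnityEngine;

        public static class MaskUtil
        {
            // Assumes source and mask are the same size and readable.
            // Black pixels in the mask become fully transparent in the result.
            public static Texture2D ApplyMask(Texture2D source, Texture2D mask)
            {
                var result = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
                Color[] src = source.GetPixels();
                Color[] msk = mask.GetPixels();
                for (int i = 0; i < src.Length; i++)
                {
                    Color c = src[i];
                    c.a = msk[i].grayscale; // mask brightness drives transparency
                    src[i] = c;
                }
                result.SetPixels(src);
                result.Apply();
                return result;
            }
        }

    For per-frame webcam input this approach would be slow; a shader doing the same per-pixel alpha lookup is the usual next step.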


  • Using Unity – Part 5

    - by nmarun
    In the previous article of the series, I talked about constructor and property (setter) injection. I wanted to cover both arrays and generics in Unity in this post but, seeing how lengthy it got, I've decided to write about generics in the next one. This one concentrates only on arrays. My Product4 class has the following definition:

        public interface IProduct
        {
            string WriteProductDetails();
        }

        public class Product4 : IProduct
        {
            public string Name { get; set; }
            public ILogger[] Loggers { get; set; }

            public Product4(string productName, ILogger[] loggers)
            {
                Name = productName;
                Loggers = loggers;
            }

            public string WriteProductDetails()
            {
                StringBuilder productDetails = new StringBuilder();
                productDetails.AppendFormat("{0}<br/>", Name);
                for (int i = 0; i < Loggers.Count(); i++)
                {
                    productDetails.AppendFormat("{0}<br/>", Loggers[i].WriteLog());
                }
                return productDetails.ToString();
            }
        }

    The key parts are the declaration of the ILogger array property and the constructor, which takes an array of ILogger objects. I've created another class, FakeLogger:

        public class FakeLogger : ILogger
        {
            public string WriteLog()
            {
                return string.Format("Type: {0}", GetType());
            }
        }

    Its implementation is the same as what we had for the FileLogger class. Coming to the web.config file, first add the following aliases. The alias for FakeLogger should make sense right away; ILoggerArray defines an array of ILogger objects. I'll explain below why we need an alias for the System.String data type.

        <typeAlias alias="string" type="System.String, mscorlib" />
        <typeAlias alias="ILoggerArray" type="ProductModel.ILogger[], ProductModel" />
        <typeAlias alias="FakeLogger" type="ProductModel.FakeLogger, ProductModel" />

    Next, create mappings for the FileLogger and FakeLogger classes:

        <type type="ILogger" mapTo="FileLogger" name="logger1">
            <lifetime type="singleton" />
        </type>
        <type type="ILogger" mapTo="FakeLogger" name="logger2">
            <lifetime type="singleton" />
        </type>

    Finally, for the real deal:

        <type type="IProduct" mapTo="Product4" name="ArrayProduct">
            <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration, Version=1.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
                <constructor>
                    <param name="productName" parameterType="string">
                        <value value="Product name from config file" type="string" />
                    </param>
                    <param name="loggers" parameterType="ILoggerArray">
                        <array>
                            <dependency name="logger2" />
                            <dependency name="logger1" />
                        </array>
                    </param>
                </constructor>
            </typeConfig>
        </type>

    Here's where I'm saying that if a type of IProduct is requested to be resolved, map it to type Product4. Furthermore, Product4 has two constructor parameters: a string and an array of type ILogger. You might have observed that the first constructor parameter is named 'productName', which matches the value of the name attribute on the param element. Its parameterType of 'string' maps to 'System.String, mscorlib' through the type alias defined above; this is why we needed an alias for System.String. The setup is similar for the second constructor parameter: the name matches the parameter name (loggers), and it is of type ILoggerArray, which maps to an array of ILogger objects. We've also decided to add two elements to this array when Unity resolves it: an instance of FileLogger and one of FakeLogger. The click event of the button does the following:

        // unityContainer.RegisterType<IProduct, Product4>();
        // IProduct product4 = unityContainer.Resolve<IProduct>();
        IProduct product4 = unityContainer.Resolve<IProduct>("ArrayProduct");
        productDetailsLabel.Text = product4.WriteProductDetails();

    It's worth mentioning the change in the way IProduct is resolved to create an instance of Product4 (note that the name passed to Resolve has to match the name used in the configuration, "ArrayProduct"). You cannot use the regular way (the commented lines) to get an instance of Product4; the reason lies in the behavior of Unity, which Alex Ermakov has brilliantly explained here. The corresponding output of the action is shown in the screenshot accompanying the original post. You have a couple of options when it comes to adding dependency elements in the array node. You can:

    - leave it empty (no dependency elements declared): this just creates an empty array of loggers, so you can check for a non-null condition in your mock classes.
    - add multiple dependency elements with the same name:

        <param name="loggers" parameterType="ILoggerArray">
            <array>
                <dependency name="logger2" />
                <dependency name="logger2" />
            </array>
        </param>

    With this you'll see two instances of FakeLogger in the output. This article shows how Unity allows you to instantiate objects with arrays. Find the code here.
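    The same registration can also be expressed in code rather than configuration. The following is a sketch of my own, not part of the article, and it assumes the Unity 1.2-era injection API (InjectionConstructor, ResolvedArrayParameter):

        using Microsoft.Practices.Unity;

        IUnityContainer container = new UnityContainer();
        container.RegisterType<ILogger, FileLogger>("logger1", new ContainerControlledLifetimeManager());
        container.RegisterType<ILogger, FakeLogger>("logger2", new ContainerControlledLifetimeManager());

        // Mirrors the <constructor> element: a literal string plus an array
        // built from the two named ILogger registrations, in the given order.
        container.RegisterType<IProduct, Product4>("ArrayProduct",
            new InjectionConstructor(
                "Product name from code",
                new ResolvedArrayParameter<ILogger>(
                    new ResolvedParameter<ILogger>("logger2"),
                    new ResolvedParameter<ILogger>("logger1"))));

        IProduct product4 = container.Resolve<IProduct>("ArrayProduct");

    As in the XML version, resolving by the registration name is what returns the fully configured Product4.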


  • Polygons vs. sprites rendering performance in Unity for Windows Phone 8

    - by Géry Arduino
    I'm currently building a Windows Phone 8 game with Unity, with 111 (no more, no less) sprites being updated each frame. I have a strong overhead in the profiler (70% to 90% minimum). I tried the following to get a higher frame rate: I'm running it with minimum quality settings, and I tried disabling and enabling V-Sync. Finally I managed to get 60 FPS, but I still have a large overhead. I believe I should get more than 60 FPS for such a small amount. Moreover, I still have to implement the game logic on top of this, so I'd like some room in my FPS to be able to work. I was wondering if it would be better in terms of performance to use polygons instead of sprites, as sprites are quite new in Unity (that would give me around 222 triangles). Has someone checked the performance differences between sprites and actual mesh renderers in Unity when it comes to phones? If so, what would be the best option in that case? FYI: I'm using the Windows Phone 8 emulator in Visual Studio; I have a compliant computer for that, so it should normally reflect the behavior of a real phone (expecting some differences, but still...). EDIT: To clarify my question, I wonder what is most efficient on Windows Phone 8: sprites or mesh renderers?


  • Emulator PCSX Reloaded - Fullscreen not working on Unity

    - by Leonardo Montenegro
    I have an older PS1 console with a couple of games I bought some years ago. On my PC, I'm using PCSX Reloaded, the best PS1 emulator for Linux I've found so far. But I'm having a little issue on Ubuntu 12.04 Precise. I'm using Unity 3D and trying to run some of my original PS1 games in PCSX Reloaded. Everything works nicely, except for fullscreen: I toggle fullscreen and specify the maximum resolution for my monitor, but in fullscreen mode, neither the left Unity bar nor the top bar gets hidden. I tried changing to other graphics modes like GNOME Classic and GNOME Classic without effects; on both, PCSX shows the bars in fullscreen mode, so it isn't a Unity-specific issue but an emulator problem. It's a bit annoying to play games this way, so basically I'm running games in windowed mode for now. I'm using the default OpenGL graphics plugin in this emulator. I tried changing to the X11 graphics plugin and fullscreen worked, but graphics on the X11 plugin aren't as good as on the OpenGL one. Does anyone know a way to get fullscreen working in PCSX using the OpenGL plugin? Or maybe another graphics plugin with OpenGL support?


  • C# and Unity - Learning to Develop a game by developing the game I want to develop

    - by 97s
    So I am pretty new to C#; I have some Python and JavaScript experience, but nothing substantial. I have read a lot about C# and Unity, and I know they are the tools I want to use. My question is: should I be reading books about C#, or should I just start hacking in Unity and piecing the game together part by part? Right now I am going through the book Head First C#, and it is very good, but I taught myself web design and JavaScript by just creating and hacking until I got the results I wanted, then looking at other people's code to see how they did it and improving my own. The issue is that with the browser I got immediate results and it was all under one roof, whereas developing games is a completely different monster. I am just wondering if my time would be better spent buying a book that uses C# to teach you Unity and doing that instead, or if the time spent on the Head First book is going to be valuable. Thanks a ton; I am having difficulties using my time, and I just want to maximize it as I don't have a lot of free time. Edit: Hopefully this isn't too broad? If it is, I will delete and go elsewhere; just let me know. Thanks.


  • Unity throws SynchronizationLockException while debugging

    - by pjohnson
    I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls that I never saw while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place.

    In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough that it was worth researching.

    Amongst the first and most helpful Google results was, of course, one at Stack Overflow. The first couple of answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain: just go to Debug | Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it.

    Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.


  • How to make Unity 3D work with Bumblebee using the Intel chipset

    - by EboMike
    I have a Sony VAIO S laptop with the dreaded Optimus and finally managed to get Bumblebee to work fully on Ubuntu 12.04, so that I can utilize both the hardware acceleration of the Intel chipset as well as the Nvidia one via optirun and/or bumble-app-settings. However, the desktop effects don't work. But they should; I vaguely remember that they worked for a while before I had Bumblebee installed. This is what I get with the support test:

        :~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "NV-GLX" missing on display ":0".
        OpenGL vendor string:   Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
        OpenGL version string:  1.4 (2.1 Mesa 8.0.2)
        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  no
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no

    First of all, I kind of doubt that the chipset doesn't support VBOs (essentially a standard feature in GL). Neither Xorg.0.log nor Xorg.8.log shows any particular errors. As for the Nvidia drivers: in order to get them to work, I had to install the 304.22 drivers (older ones wouldn't work). They clobbered libglx.so, so I reinstated the xserver-xorg-core libglx.so in its original place, moved Nvidia's libglx.so to an Nvidia-specific folder, and specified that folder in bumblebee.config. That seems to work and shouldn't cause the problem I see here. For fun, I tried to use the Nvidia chipset for Unity, but that didn't fly either:

        ~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   NVIDIA Corporation
        OpenGL renderer string: GeForce GT 640M LE/PCIe/SSE2
        OpenGL version string:  4.2.0 NVIDIA 304.22
        Not software rendered:    yes
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  no
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes
        Unity 3D supported:       no


  • Unity Greeter login screen cuts off login options

    - by ammianus
    I have a pretty newly installed Ubuntu 12.04, using Unity. My external monitor's max resolution is 1920x1080. On the Unity desktop itself everything looks great. I have an NVidia graphics card. When I start my computer and get to the Unity greeter login screen, the display is oddly formatted and the resolution seems off: it looks like a zoomed-in view of the larger 1920x1080 screen, and as such it crops the login options off on the left-hand side of the screen. I can only just see the edge of the password box for the user I want to log in with. I can log in to one account by default by blindly typing the password, but I am unable to switch to other accounts. Is there anything I can do to fix the login screen display so that I can see the normal login options? Note: I first noticed it when I changed my desktop background; the next time I logged in, I saw the issue.


  • White screen with pointer after removing Unity

    - by Sameer Pandit
    I have the same problem, and I am a newbie. I added the repository with

        sudo add-apt-repository ppa:canonical-dx-team/une

    then went to Ubuntu Software Center and installed the "Unity interface of Ubuntu Netbook Edition". After installing, I found a problem with the user interface: it kept flashing whenever the mouse pointed at the side panel, so I decided to remove it. I removed it from Ubuntu Software Center. There were other Unity-related apps installed, but I did not remove them, as I had no idea what they were about. Now I've ended up with a blank white screen with a mouse pointer whenever I log in. I am able to log in using GDM, but the screen is blank white. I also tried these commands:

        sudo apt-get remove gnome-shell
        sudo apt-get remove unity
        sudo restart gdm

    but they did not work at all. I also tried

        sudo dpkg-reconfigure xserver-xorg

    and it too did not work. Note: I do not have any sort of graphics card or video card on my PC. Please help!


  • Distributed Rendering in the UDK and Unity

    - by N0xus
    At the moment I'm looking at getting a game engine to run in a CAVE environment. So far, during my research, I've seen that a lot of people have been able to get both Unity and the Unreal Engine up and running in a CAVE (someone did get CryEngine to work in one, but there is little research data about it). As of yet, I have not cemented my final choice of engine for the next stage of my project. I have experience in both, so the learning curve will be gentle either way. Both engines offer stereoscopic rendering, either already built in via RealD (Unreal) or by doing it yourself (Unity). Both can also make use of other input devices, such as the Kinect. So again, both engines are still on the table. For the last bit of my preliminary research, I was advised to see if either, or both, engines could do distributed rendering. I was advised this because the final game we make could go into a variety of differently sized CAVEs; the one I have access to is roughly 2.4m x 3m cubed and, I have been duly informed, is a "baby" compared to others. So, finally, onto my question: can either the Unreal Engine or the Unity engine allow developers to do distributed rendering, either through built-in features or by creating my own plugin/script?


  • Unable to use Maya animation with scripts when imported to Unity

    - by keshk
    I am testing importing Maya animation into Unity. I set up a simple cylinder with two bones and an IK handle, and made a simple animation where the cylinder bends and returns to a straight position over 24 frames. Following that, I selected everything and baked all the bones and the IK (the animation, by selecting everything in the Graph Editor), and even the cylinder. I saved the scene, then selected all and exported as FBX with Animation and Bake checked. In Unity I imported it, and in the preview I am able to see the animation. When I load the model into the scene and press Play (after assigning the controller), I am able to see the animation too. But now, when I try to script it and control the animation, nothing happens. Just as a test, I tried the following in the Update method:

        if (animation.isPlaying)
            Debug.Log("Animation Works");
        else
            Debug.Log("Animation not working");

    The bool doesn't even return true or false. My animation is called "bend", so just to try it, I did the following, and nothing happens:

        animation.Play("bend");

    Can you please advise, based on my steps: am I missing something? Do I need to add the controller, or is that an unnecessary step? Did I screw up on the Maya part or the Unity part? Thanks for the help.
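    A minimal diagnostic sketch (my own, not from the post): the implicit `animation` property refers to the legacy Animation component, so if the FBX rig isn't imported as Legacy, the clip lands in the newer Animator system and Animation.Play finds nothing. This script makes each failure mode report itself:

        using UnityEngine;

        public class BendTester : MonoBehaviour
        {
            void Start()
            {
                // Fails fast if this object carries no legacy Animation component.
                Animation anim = GetComponent<Animation>();
                if (anim == null) { Debug.LogError("No Animation component on " + name); return; }

                // Fails fast if no clip was imported under the expected name.
                if (anim.GetClip("bend") == null) { Debug.LogError("No clip named 'bend'"); return; }

                anim.Play("bend");
            }
        }

    If the first check fires even though the model animates in Play mode, the script is probably attached to a parent or child of the object that actually carries the component.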


  • Dual monitors and Unity - ATI Radeon cards

    - by michiel
    I have a VAIO laptop with an ATI Radeon video card and an external screen. I used to run dual monitors on Ubuntu 10.10 fine, but recently decided to upgrade to 11.10 via 11.04. I don't think it's the video card or the fglrx driver; it seems to be Unity. When I start up, the laptop screen is normal and the external screen is all white, although I can move my mouse over it. However, the cursor becomes the big X that used to be the cursor in the first versions of X Windows. I can right-click on it, and it brings up the context menu for the desktop. And then, all of a sudden, it shows my desktop background. I can continue to move my mouse over the external screen, and now the cursor is normal (a little white arrow). But I can't do anything any longer (not even the context menu as before), and trying to drag a window to it (which always worked on 10.10) doesn't work. I actually really like Unity; it gives me the most out of my desktop and uses all the space available, which is great. But how can I get my second screen back? I tried Unity 2D, but the result is the same. Edit: I think I stumbled on this bug: https://bugs.launchpad.net/ubuntu/+source/nvidia-settings/+bug/882143


  • Ubuntu 13.10 Unity doesn't load after upgrade

    - by William
    Just upgraded to Ubuntu 13.10, only to find that Unity won't load (login freezes; after doing Ctrl+Alt+F1, logging in, and then doing startx, I get a blank desktop, the mouse pointer, and nothing else). I can right-click, but the only operations that work are "create new file" and "create new folder". For example, "change desktop background" doesn't work, and after a few right-clicks choosing "change desktop background", I get a warning message box: "compiz closed unexpectedly." The guest login works fine. I tried creating a new user, but I experience the same thing with the new user. I tried removing all configuration files from my home directory: same thing. Doing dconf reset -f /org/compiz/ gives an error, "error spawning command line...". Doing unity --reset also gives errors. I tried uninstalling Unity (and Compiz) and reinstalling, but that doesn't help. I tried reconfiguring lightdm; that didn't help either. I don't have any proprietary drivers installed. Once again, the funny thing is that the guest session works fine.


  • I want to be able to use the unity menu with Citrix full screen

    - by porec
    I use Citrix Receiver at work, with both XenApp and XenDesktop, often at the same time. Since the Unity menu still appears at the top anyway, I'd like to be able to use it. Right now I can see it, but it doesn't work: I have to either tab out (double-tapping Alt first) and open another program, or move the mouse to the left and open another program from the Unity launcher, BEFORE I can use the menu at the top. (The launcher on my left side is in auto-hide mode, and I actually like it that way.) For example, I use Spotify for listening to music. It appears in the top menu, but it doesn't react when I click it; I have to move the mouse to the left, open another program, then move to the top and ask it to show Spotify. If I open Spotify from the left launcher, it hangs (since it's hidden, I have to ask for it to be shown, not reopen the whole program). Or, if I want to lock the screen, I have to open another program (e.g. nixnote) before I can lock it. Since the Unity menu is at the top anyway, I don't see why it shouldn't be able to control such things.


  • Unity gizmos vs. referenced game objects

    - by DuckMaestro
    I'm designing a Unity script that I intend to be highly reusable and as easy as possible to set up within the editor. To this end, a number of properties of this script really need some kind of visual representation on screen. It is an unresolved question to me whether the design of the script should require references to placeholder game objects, OR just Vector3's and float's that have associated gizmos drawn for them. Normally a gizmo would be the natural choice, except that Unity gizmos are not directly manipulable (as far as I can tell). Because of this shortcoming, I'm having to consider whether depending on references to placeholder game objects is ultimately the more designer-friendly approach, in spite of the extra setup required and the fact that it might be counter-intuitive when my script makes the placeholder game objects disappear at run-time (which it would). Is there a community standard or preference in this case? Can a Unity-experienced game programmer / designer speak to which approach they feel is more intuitive or more convenient to set up when using a 3rd-party script? Or is this just splitting hairs, as long as I ship an example prefab with my script?
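    For what it's worth, a sketch of a middle ground (names invented for illustration): a gizmo drawn in OnDrawGizmos is display-only, but a small custom editor using Handles makes a plain Vector3 field directly draggable in the scene view, with no placeholder object to hide at run-time:

        // WaypointMarker.cs
        using UnityEngine;

        public class WaypointMarker : MonoBehaviour
        {
            public Vector3 target; // plain data; no placeholder GameObject

            void OnDrawGizmos()
            {
                Gizmos.color = Color.yellow;
                Gizmos.DrawSphere(target, 0.25f); // visible, but not draggable
            }
        }

        // Editor/WaypointMarkerEditor.cs (a separate, editor-only script)
        using UnityEngine;
        using UnityEditor;

        [CustomEditor(typeof(WaypointMarker))]
        public class WaypointMarkerEditor : Editor
        {
            void OnSceneGUI()
            {
                var marker = (WaypointMarker)target;
                EditorGUI.BeginChangeCheck();
                Vector3 p = Handles.PositionHandle(marker.target, Quaternion.identity);
                if (EditorGUI.EndChangeCheck())
                {
                    Undo.RecordObject(marker, "Move Target"); // newer-Unity undo API
                    marker.target = p;
                }
            }
        }

    Shipping the editor script alongside the main one keeps the designer-friendly dragging without the counter-intuitive disappearing objects.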


  • Which techniques to study?

    - by Djentleman
    Just to give you some background info: I'm studying a programming major at a tertiary level and am in my third year, so I'm not a newbie off the street. However, I am still quite new to game programming as a subset of programming. One of my personal projects for next semester is to design and create a 2D platformer game with an emphasis on procedural generation and "neato" effects (think Metroidvania). I've written up a list of some techniques to help me improve my personal skills (using XNA for the time being). The list is as follows:

    QuadTrees: Build a basic program in XNA that moves basic 2D sprites (circles and squares) around a set path at a set speed and changes their colour when they collide. Add functionality to add and delete objects of different sizes (select a direction and speed when adding, and just drag and drop them in).

    Particles: Build a basic program in XNA in which you can select different colours and create particle effects in those colours on screen by clicking and dragging the mouse around (simple particles emerging from where the mouse is clicked). Add functionality to change the number of particles drawn, the speed at which they travel, and when they expire. Possibly implement gravity and wind after part 3 is complete.

    Physics: Build a basic program in XNA with a ball in a set 2D environment, a wind slider, and a gravity slider (which can go negative for reverse gravity). You can click to drag the ball around and release to throw it and, depending on what you do, the ball interacts with the environment. Implement other shapes afterwards.

    Random 2D terrain generation: Build a basic program in XNA that randomly generates terrain (including hills, caves, etc.) created from 2D tiles. Add functionality that draws the tiles from a tileset and places different tiles depending on where they lie on the y-axis (dirt on top, then rock, then lava, etc.).

    Randomised objects: Build a basic program in XNA that, when a button is clicked, displays a randomised item sprite based on parameters (type, colour, etc.) with the images pulled from tilesets. Add the ability to save the item as an object, which stores it in a side pane where it can be selected for viewing.

    Movement: Build a basic program in XNA where you can move an object around in a (tile-based) environment with a camera that pans with it. No gravity. Then implement gravity and wind, and allow the character to jump and fall with some basic platforms.

    So my question is this: are there any other commonly used techniques I should research, and can I get some suggestions as to the effectiveness of the techniques I've chosen to work on (e.g., "don't do the QuadTree project because [insert reason here]", or "do [insert technique here] before you start working on particles because [insert reason here]")? I hope this is clear enough; please let me know if I can clarify anything further!
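    As a concrete starting point for the first item on the list, here is a minimal quadtree sketch (my own illustration, not from the post, using XNA's Rectangle; an item straddling a split boundary gets inserted into every child it touches, so de-duplicate query results before acting on them):

        using System.Collections.Generic;
        using Microsoft.Xna.Framework;

        class QuadTree
        {
            const int Capacity = 4;            // max items before a node splits
            readonly Rectangle bounds;
            readonly List<Rectangle> items = new List<Rectangle>();
            QuadTree[] children;               // null until this node splits

            public QuadTree(Rectangle bounds) { this.bounds = bounds; }

            public void Insert(Rectangle item)
            {
                if (!bounds.Intersects(item)) return;
                if (children == null)
                {
                    if (items.Count < Capacity) { items.Add(item); return; }
                    Split();
                }
                foreach (QuadTree child in children) child.Insert(item);
            }

            void Split()
            {
                int w = bounds.Width / 2, h = bounds.Height / 2;
                children = new[]
                {
                    new QuadTree(new Rectangle(bounds.X,     bounds.Y,     w, h)),
                    new QuadTree(new Rectangle(bounds.X + w, bounds.Y,     w, h)),
                    new QuadTree(new Rectangle(bounds.X,     bounds.Y + h, w, h)),
                    new QuadTree(new Rectangle(bounds.X + w, bounds.Y + h, w, h)),
                };
                foreach (Rectangle existing in items)
                    foreach (QuadTree child in children) child.Insert(existing);
                items.Clear();
            }

            // Gathers items whose bounds overlap 'query'; only these candidates
            // need an exact collision test.
            public void Query(Rectangle query, List<Rectangle> results)
            {
                if (!bounds.Intersects(query)) return;
                foreach (Rectangle item in items)
                    if (item.Intersects(query)) results.Add(item);
                if (children != null)
                    foreach (QuadTree child in children) child.Query(query, results);
            }
        }

    The point of the exercise is the pruning in Query: instead of testing every sprite against every other (O(n^2)), you only test pairs whose tree cells overlap.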


  • Blank desktop after updates today, only unity2d works now [closed]

    - by NewUbuntuUser
    Possible Duplicate: Unity doesn't load, no Launcher, no Dash appears

    I have been using 12.04 (Wubi) for a week now, and this is my first exposure to Linux. Everything was going fine until now, but today, as soon as I updated through the Update Manager, Docky gave a message that compositing is required or something, I got an error window which asked me to report the error, and then the Update Manager asked me to reboot. After rebooting, I just get a blank desktop screen: no launcher, no toolbar. I have to restart with the power key. However, if I log in through Unity 2D, everything is fine except for missing the benefits of the 3D environment. I guess something got messed up after the update, and I can't figure out which program or file caused this mess. I would highly appreciate it if someone could help me out with this, as I really liked working on Ubuntu after Windows 7. Thanks!

    SOLVED -- Thanks @jrg for the link provided; it helped me come out of this mess. Actually, some update of CompizConfig made the Unity plugin inactive and did something to the Animation Add-ons (the ones you use for the burn effect, etc.). What I did was: on the blank desktop, I pressed Ctrl + Alt + T to bring up the terminal and typed ccsm. This brought up the CompizConfig Settings Manager; there I enabled the Unity plugin and rebooted, and everything started working fine. But then, when I started the add-ons plugin again, everything went back to square one. :( This time pressing Ctrl + Alt + T did not help either. Then I tried Ctrl + Alt + F1; it brought up a terminal (or whatever you call it), and I typed unity --reset. It did some resetting, and at the end it showed some compositing done. I pressed Ctrl + Alt + F7 to come back to the desktop, and there it was, my old sweet desktop. It had the animation effects too, like wobbly windows, except for the add-ons like the burn effect. I guess something went wrong with the last update of that add-on plugin, and now whenever I try turning it on, everything goes poof! Six hours wasted, but I guess I learnt something new; not bad for an orthodontist, I guess. :)


  • 2D graphics - why use spritesheets?

    - by Columbo
    I have seen many examples of how to render sprites from a spritesheet, but I haven't grasped why it is the most common way of dealing with sprites in 2D games. I have started out with 2D sprite rendering in the few demo applications I've made by treating each animation frame for any given sprite type as its own texture, and this collection of textures is stored in a dictionary. This seems to work for me and suits my workflow pretty well, as I tend to make my animations as gif/mng files and then extract the frames to individual PNGs. Is there a noticeable performance advantage to rendering from a single sheet rather than from individual textures? With modern hardware that is capable of drawing millions of polygons to the screen a hundred times a second, does it even matter for my 2D games, which just deal with a few dozen 50x100px rectangles? The implementation details of loading a texture into graphics memory and displaying it in XNA seem pretty abstracted; all I know is that textures are bound to the graphics device when they are loaded, and then, during the game loop, the textures get rendered in batches. So it's not clear to me whether my choice affects performance. I suspect that there are some very good reasons most 2D game developers seem to be using them; I just don't understand why.
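    For context, a sketch of the mechanism in question (my illustration, XNA 4.0 API; the sheet layout, frames packed left to right at 50x100 each, is an assumption): SpriteBatch batches consecutive draws that share a texture, and switching textures forces a flush, so frames pulled from one sheet via source rectangles keep the whole animation inside a single batch:

        // Pick frame 'i' out of a horizontal strip of 50x100 frames.
        static Rectangle SourceRect(int i)
        {
            return new Rectangle(i * 50, 0, 50, 100);
        }

        // In Draw(): every sprite drawn from the same sheet shares one batch.
        spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
        spriteBatch.Draw(sheetTexture, position, SourceRect(currentFrame), Color.White);
        spriteBatch.End();

    With one texture per frame, SpriteSortMode.Texture can still group draws by texture, but each distinct texture is its own batch and its own state change. At a few dozen sprites either approach is likely fine; the sheet habit pays off as draw counts grow.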


  • 2D map/plane with nodes overlayed that supports panning, scaling and clicking on nodes

    - by garlicman
    I'm trying my hand at Android development and seem to be running into an invisible ceiling in trying to get what I want accomplished. Basically, I'm trying to create an app that renders a 2D surface map that I can (pinch) zoom and pan. I'll have to place nodes on the surface of the map that scale/zoom and pan in relation to the surface. I started out with a 2D ImageView approach and got as far as pinch zoom, pan, and laying out nodes as relative ImageViews, but all the methods I tried for getting X, Y, W, H for the 2D surface were always off for some reason. Additionally, I was never able to scale the node ImageViews correctly, and as a result never got far enough to try to work out their scaled X, Y offsets. So I decided to go back to 3D rendering. Conceptually, pan/zoom is camera manipulation, so I don't have to mess with how to scale the 2D map or the nodes. But I need a starting point or sample to get me going that's close to what I'm trying to achieve; a sample of a translucent spinning cube isn't helping as much as I need it to. Any tips? Links, insults and sympathy are all welcome!
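    Whichever rendering route wins out, the bookkeeping is the same: one pan/zoom transform shared by the map and every node, so they can never drift apart. A language-agnostic sketch (written in C# here only for consistency with this page's other snippets; the question itself is Android/Java):

        // Screen-space = (world-space - pan) * zoom, and the inverse for input.
        struct Camera2D
        {
            public float PanX, PanY; // world coordinates of the screen's top-left corner
            public float Zoom;       // screen pixels per world unit

            public (float X, float Y) WorldToScreen(float wx, float wy)
                => ((wx - PanX) * Zoom, (wy - PanY) * Zoom);

            public (float X, float Y) ScreenToWorld(float sx, float sy)
                => (sx / Zoom + PanX, sy / Zoom + PanY);
        }

    To hit-test a node on tap, convert the tap position with ScreenToWorld and compare it against the node's fixed world position and radius; since nodes are drawn at WorldToScreen of those same world coordinates, panning and zooming the camera moves and scales them with the map for free.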


  • Unity 3D game idea for a fun and teaching game

    - by rasheeda
    I have been brainstorming for months now on a Unity 3D game to write for my final-year computer science project. I have been learning Unity for some time now, but coming up with a concept is more difficult than I thought. The game has to be really fun and also educational: one that the school and community can benefit from. I am thinking about a third-person game where the player runs around an environment, picks up coins, and earns points. This alone wouldn't earn me points, though, and I have been trying to find new ideas, both of my own and all over the net, but to no avail. Hopefully you guys can help me out with an idea, and with how to make the lecturers appreciate the game and also give kids a reason to want to play it. Thanks.


  • Rotate to a set degree then reverse and repeat in Unity

    - by Ryan
    Thank you for your time. I'm making my first project in Unity: a simple game where touching objects adds points to the player's score. I'd like the objects to have a pleasant back-and-forth swaying animation on the Z axis, nodding to the right 30 degrees, then to the left 30 degrees, on and on. Here's what I've got:

        public class Rotator : MonoBehaviour
        {
            void Update()
            {
                transform.Rotate(new Vector3(0, 0, 12) * Time.deltaTime);
            }
        }

    This gives me a nice slow rotation, but I am clueless how to tell Unity to stop at +30 degrees, reverse to -30 degrees, rotate again to +30, stop and repeat, etc. I'd really appreciate any help. Maybe there is a thread like this that I was not able to find? I assume it will involve some kind of 'if then' function? Thank you, Ryan
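    One common approach (a sketch, not the only way): instead of accumulating relative Rotate calls and checking limits with if/then, set the angle absolutely from a sine wave each frame. It swings between -30 and +30 and reverses smoothly, with no drift:

        using UnityEngine;

        public class Swayer : MonoBehaviour
        {
            public float amplitude = 30f; // degrees either side of upright
            public float speed = 2f;      // how fast the sway cycles

            void Update()
            {
                // Sin runs -1..1, so the Z angle runs -amplitude..+amplitude.
                float angle = amplitude * Mathf.Sin(Time.time * speed);
                transform.rotation = Quaternion.Euler(0f, 0f, angle);
            }
        }

    Mathf.PingPong(Time.time * speed, 60f) - 30f gives a similar back-and-forth with linear motion and sharp turnarounds, if that look is preferred.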


  • Upgrading to Gnome Shell 3.4 in Ubuntu 11.10 broke both Unity and Gnome shell

    - by mac
    I upgraded my GNOME Shell to 3.4 on Ubuntu 11.10 through:

        sudo add-apt-repository ppa:ricotz/testing
        sudo add-apt-repository ppa:gnome3-team/gnome3
        sudo apt-get update && sudo apt-get dist-upgrade
        sudo apt-get install gnome-shell

    But it broke my system. GNOME Shell is completely broken: when I log in, it just shows the desktop wallpaper and nothing else. And importantly, Unity is also broken. Attaching the screenshot. Some main issues:

    1) Two menus are appearing now: the global menu as well as the application menu.
    2) Icons on the top-right panel are appearing weirdly.
    3) My default Ambiance theme also got screwed up; instead of black menus, I am seeing white menus.

    How do I fix them? Do I have an option to revert to the original settings, or will reinstalling Unity/GNOME Shell help? Thanks


  • Unity very slow while Gnome Classic running just fine

    - by Sorin Sbarnea
    I see tons of people complaining about Unity's speed, and I think the problem is not with the video drivers. When I log in to GNOME Classic, the system behaves just fine, but in Unity I can barely use it: windows are hard to move and the terminal is damn slow. Is there any solution, or a bug that I should track?

    Details:

    Ubuntu 11.10
    Two-monitor setup
    Latest Nvidia proprietary drivers (tested with the default ones also, no change)
    6 GB RAM, Xeon @ 2.8
    Nvidia driver 280.13 - Quadro NVS 295 with 8 cores, 256 MB RAM

        lspci | grep VGA
        02:00.0 VGA compatible controller: nVidia Corporation G98 [Quadro NVS 295] (rev a1)

        uname -a
        Linux sorins 3.0.0-16-generic #29-Ubuntu SMP Tue Feb 14 12:48:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux


  • Rendering of 2d water

    - by luke
    Suppose you have a nice way to move your 2D particles in order to simulate a fluid (like water). Any ideas on how to render it? Consider the fact that the game is a 2D game. The perspective is like this (the first image I found): an example of 2D water. The water will be contained in boxes that can be broken in order to let it fall down and interact with other objects. The simplest way that comes to my mind is to use a small image for each particle. I am interested in hearing more ways of rendering water. Thank you.


  • Alt-Tab in Unity 2D makes app icon jiggle instead of switching

    - by itsadok
    I'm using Unity 2D as my desktop, and it works fine most of the time, but every now and then, I try to Alt-Tab to switch to a different application, and instead of switching to that window, the launcher bar opens, and the icon of the application I was trying to switch to starts jiggling. If I try to switch to something else, that icon starts jiggling too. If I click on the icon with the mouse, then it stops jiggling and switches to the requested window. What is this behavior? I haven't seen any mention of jiggling icons in Unity. I don't understand the original purpose of this, and I'd like a way to fix it so that I won't have to use the mouse.

