Search Results

Search found 18794 results on 752 pages for 'graphics design'.

  • Keeping game model and graphics/animation separate but in sync

    - by AJM
    Suppose I'm building a chess game where I want to have animations. Pieces glide to their new squares when moved, and perform attack animations when capturing other pieces. I'm not sure how to effectively separate the data and logic needed for these animations from the actual game model (in the MVC sense). The pieces themselves should ideally not have to worry about their pixel coordinates or current animation frame. At the same time, many changes to the model are effectively driven by animations: a moved piece changes its position after (before?) its sprite is done gliding, and a piece is removed from the board after the capturing piece has finished its attack animation. How would you suggest I manage the game model, the graphics and animations, and their relationships? For example, where would the animations "live"? How would animations be created and managed in response to player moves? How would animations drive updates to the game model, or how would the game model drive animations?
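
    One common arrangement is to let the model commit moves immediately and publish events, while a view-side queue owns every pixel and frame and plays animations in response. A minimal C# sketch follows; the Move type, event names, and per-frame Update loop are all assumptions for illustration, not anything from the question:

        using System;
        using System.Collections.Generic;

        record Move(int FromSquare, int ToSquare, bool IsCapture);

        // Pure game model: knows squares and rules, never pixels or frames.
        class BoardModel
        {
            public event Action<Move> MoveApplied;

            public void ApplyMove(Move move)
            {
                // ...update piece positions, remove captured pieces, switch turns...
                MoveApplied?.Invoke(move);  // the view reacts; the model is already consistent
            }
        }

        // View side: subscribes to model events and owns all animation state.
        class AnimationQueue
        {
            private readonly Queue<PieceAnimation> pending = new();

            public void OnMoveApplied(Move move)
            {
                if (move.IsCapture)
                    pending.Enqueue(new PieceAnimation("attack", move));
                pending.Enqueue(new PieceAnimation("glide", move));
            }

            public void Update(float dt)  // called once per rendered frame
            {
                if (pending.Count == 0) return;
                if (pending.Peek().Advance(dt))
                    pending.Dequeue();    // finished; the next queued animation starts
            }
        }

        class PieceAnimation
        {
            private readonly string kind;
            private readonly Move move;
            private float t;

            public PieceAnimation(string kind, Move move) { this.kind = kind; this.move = move; }

            // Returns true once the animation has played out (here: one second).
            public bool Advance(float dt) { t += dt; return t >= 1f; }
        }

    Wiring is a single line (model.MoveApplied += queue.OnMoveApplied;), and the board stays authoritative while the view merely lags it visually. The opposite scheme, where an animation-completion callback commits the model change, also works, but it couples gameplay timing to frame rate.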

    Read the article

  • NVIDIA Graphics - resolution problems with new 12.04 LTS installation

    - by Daveisuser56810
    I've been trying to install Ubuntu 12.04 LTS on my desktop for most of the day. The desktop uses an NVIDIA GeForce 9800 (GT, I think) graphics card, and I am unable to set the correct resolution (1680x1050) for the display.

    The first problem I had was the "black screen" during install. I overcame this by using the "nomodeset" switch in the install options (once I'd found out how to do that). The second problem, of course, was the black screen following the first reboot. Once again this was overcome with "nomodeset", this time by editing the GRUB boot entry. This gave me a resolution of 1280x768, which the Displays GUI allowed me to change to 1280x720 (appears to fit on screen).

    I then tried to install the NVIDIA drivers: 1) using Additional Drivers, and 2) manually, by downloading the driver and installing it as root. As soon as the NVIDIA drivers are installed, the resolution becomes restricted to 640x480 (max). At this resolution the Ubuntu GUI is not usable, as most windows are larger than the display. Removing the NVIDIA driver and removing the xorg.conf file does not lift this restriction. I have tried most things that I have found and that were vaguely intelligible, but nothing appears to get me closer to a resolution of 1680x1050.

    UPDATE: reinstalled Ubuntu 12.04 and used "nomodeset" in GRUB to restore the resolution to 1280x720, which is at least usable. Will live with this for now.

    Read the article

  • Restart and/or graphics problem in Ubuntu 12.04

    - by kara
    I have been using 12.04 for a couple of months now, with very few problems. The other day I restarted my computer, and though I think it rebooted, the screen stayed black. I could not even get a visual from a live CD. Finally I was able to get it to load, but the resolution has been completely off. The computer thinks I have a laptop screen, when I actually have a ViewSonic VP2330wb, and it detects only two resolutions. And still, I have a problem with rebooting: if the screen locks after I leave it for a while, I can't get a visual back, and when I then force a shutdown, it takes three tries to get a GRUB screen. Then I have to boot in recovery mode, and finally in normal mode, but the resolution is still always off. This is my video card:

        description: VGA compatible controller
        product: 2nd Generation Core Processor Family Integrated Graphics Controller
        vendor: Intel Corporation
        physical id: 2
        bus info: pci@0000:00:02.0
        version: 09
        width: 64 bits
        clock: 33MHz
        capabilities: msi pm vga_controller bus_master cap_list
        configuration: latency=0
        resources: memory:fe000000-fe3fffff memory:d0000000-dfffffff ioport:f000(size=64)

    I am a new Ubuntu user and am at my wits' end. Any help would be greatly appreciated.

    Read the article

  • Hybrid Graphics on Ubuntu 12.04 switching to discrete

    - by cfstras
    I have a Sony Vaio VPCCB-27FX with hybrid graphics. Using vgaswitcheroo lets me switch my discrete card off to save power. But when I want to switch to the discrete card for performance, my system freezes. I already tried logging out and killing X with service lightdm stop, but it still freezes as soon as I echo DIS > switch. Typing blindly, echo IGD > switch returns me to my console, where it reads:

        [  179.555171] i915: switched off

    but it seems the discrete card never gets switched on. Running echo DDIS > switch gives me the following:

        [540....] [drm:atop_op_jump] *ERROR* atombios stuck in loop for more than 5secs aborting
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing CEE2 (len 62, WS 0, PS 0) @ 0xCEFE
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BBF6 (len 1036, WS 4, PS 0) @ 0xBCF3
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BB8C (len 76, WS 0, PS 0) @ 0xBB94
        [541....] [drm:r600_RING_TEST] *ERROR* radeon: ring test failed (scratch(0x8504)=0xFFFFFFFF)
        [541....] [drm:evergreen_resume] *ERROR* evergreen startup failed on resume

    After that, the atombios part repeats a few times, the terminal locks up again, and SysRq+REISUB is my only rescue. Does anybody have an idea how I can switch to my discrete card without the system locking up?

        # uname -srvmpio
        Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        # lsb_release -r
        Description: Ubuntu 12.04 LTS

    Read the article

  • AMD E-450 APU with HD-6320 graphics produces jerky videos

    - by user80424
    I am trying to get smooth video playback on a Lenovo E325 laptop equipped with an AMD E-450 APU. This processor has an ATI HD-6320 GPU integrated. I installed the ATI proprietary driver (Catalyst 12.04) as described here; everything went fine and I got no errors. However, I cannot play HD videos smoothly: almost every second frame is dropped in VLC with hardware acceleration enabled. vainfo shows:

        libva: VA-API version 0.32.0
        Xlib: extension "XFree86-DRI" missing on display ":0".
        libva: va_getDriverName() returns 0
        libva: Trying to open /usr/lib/x86_64-linux-gnu/dri/fglrx_drv_video.so
        libva: va_openDriver() returns 0
        vainfo: VA-API version: 0.32 (libva 1.0.15)
        vainfo: Driver version: Splitted-Desktop Systems XvBA backend for VA-API - 0.7.8
        vainfo: Supported profile and entrypoints
            VAProfileH264High    : VAEntrypointVLD
            VAProfileVC1Advanced : VAEntrypointVLD

    fglrxinfo says:

        display: :0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: AMD Radeon HD 6320 Graphics
        OpenGL version string: 4.2.11631 Compatibility Profile Context

    and fgl_glxgears produces ~250 fps. Why are HD video frames dropped? CPU usage doesn't go above 50% during playback.

    Read the article

  • Ubuntu 14.04 install fails with Via S3 UniChrome Pro graphics

    - by WizardNo.7
    I am trying to install Ubuntu 14.04 on a Fujitsu Siemens Amilo Pro laptop (it's quite old, yes: about a 30GB hard drive and, I think, 192MB of RAM) which currently has Windows XP installed (which I'd like to keep for the time being). I have downloaded the 32-bit desktop ISO and used UNetbootin to create a live USB for this laptop. When I boot from USB, I arrive at the UNetbootin grey-and-blue menu and pick either "Try Ubuntu without installing" or "Install Ubuntu". The menu goes away and an Ubuntu load screen appears, showing UBUNTU and four dots which progressively change between white and orange. At about the second colour-changing cycle, a white underscore symbol appears next to the fourth dot and flickers. There is some leftover text from the kernel boot still visible, but there is no graphical desktop. After this I have to hard reboot or shut down.

        $ lshw -c video
        *-display UNCLAIMED
          description: VGA compatible controller
          product: CN700/P4M800 Pro/P4M800 CE/VN800 Graphics [S3 UniChrome Pro]
          vendor: VIA Technologies, Inc.
          physical id: 0
          bus info: pci@0000:01:00.0
          version: 01
          width: 32 bits
          clock: 66MHz
          capabilities: pm agp agp-2.0 vga_controller bus_master cap_list
          configuration: latency=64 mingnt=2
          resources: memory:f0000000-f3ffffff memory:d1000000-d1ffffff

    Read the article

  • Rendering 8-bit graphics

    - by Matjaz Muhic
    I have a strong programming background, just not in game development. I only made some Pong and Snake in high school, and I did some OpenGL in college. I want to make my own game engine; nothing fancy, just a simple 2D game engine. But because I'm kinda old school and feeling retro, I want the graphics to look like old 8-bit games (Megaman, Contra, Super Mario, ...). So how were the old games made back then? I want the simplest approach. Were they also using assets (images) like newer engines do now? How do you achieve this kind of rendering using OpenGL? Keep in mind: simplest solution. I want to know how it was made back then and how I can replicate that. It doesn't even have to be OpenGL; I can draw on a window canvas. I do want to make it from scratch, basically.
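
    The consoles of that era drew tiles and sprites into a small indexed framebuffer, so one way to replicate the look today is to render into a tiny buffer of palette indices and scale it up without smoothing. A rough C# sketch of that idea follows; the resolution, palette, and the System.Drawing choice are all just assumptions for illustration:

        using System.Drawing;
        using System.Drawing.Drawing2D;

        class RetroFramebuffer
        {
            const int W = 256, H = 240;                // NES-like native resolution
            readonly byte[] pixels = new byte[W * H];  // palette indices, not colours
            readonly Color[] palette =
                { Color.Black, Color.White, Color.Red, Color.Blue };

            public void SetPixel(int x, int y, byte index) => pixels[y * W + x] = index;

            // Expand indices to RGB and blit, scaled, with no interpolation:
            // nearest-neighbour scaling is what gives the chunky-pixel look.
            public void Present(Graphics target, int scale)
            {
                using (var frame = new Bitmap(W, H))
                {
                    for (int y = 0; y < H; y++)
                        for (int x = 0; x < W; x++)
                            frame.SetPixel(x, y, palette[pixels[y * W + x]]);
                    target.InterpolationMode = InterpolationMode.NearestNeighbor;
                    target.PixelOffsetMode = PixelOffsetMode.Half;  // keeps edge pixels crisp
                    target.DrawImage(frame, 0, 0, W * scale, H * scale);
                }
            }
        }

    Bitmap.SetPixel is slow and only fine for a sketch; a real loop would lock the bits, or upload the buffer as an OpenGL texture drawn onto a screen-sized quad with GL_NEAREST filtering, which is the same trick in GPU form.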

    Read the article

  • Objective-C wrapper API design methodology

    - by Wade Williams
    I know there's no one answer to this question, but I'd like to get people's thoughts on how they would approach the situation. I'm writing an Objective-C wrapper for a C library. My goals are:

    1) The wrapper should use Objective-C objects. For example, if the C API defines a parameter such as char *name, the Objective-C API should use name:(NSString *).
    2) The client using the Objective-C wrapper should not have to have knowledge of the inner workings of the C library.

    Speed is not really an issue. That's all easy with simple parameters: it's certainly no problem to take in an NSString and convert it to a C string to pass to the C library. My indecision comes in when complex structures are involved. Let's say you have:

        typedef struct flow {
            long direction;
            long speed;
            long disruption;
            long start;
            long stop;
        } flow_t;

    And then your C API call is:

        void setFlows(flow_t inFlows[4]);

    So, some of the choices are:

    1) expose the flow_t structure to the client and have the Objective-C API take an array of those structures
    2) build an NSArray of four NSDictionaries containing the properties and pass that as a parameter
    3) create an NSArray of four "Flow" objects containing the structure's properties and pass that as a parameter

    My analysis of the approaches:

    Approach 1: Easiest. However, it doesn't meet the design goals.
    Approach 2: For some reason, this seems to me to be the most "Objective-C" way of doing it. However, each element of the NSDictionary would have to be wrapped in an NSNumber. Now it seems like we're doing an awful lot just to pass the equivalent of a struct.
    Approach 3: Seems the cleanest to me from an object-oriented standpoint, and the extra encapsulation could come in handy later. However, like #2, it now seems like we're doing an awful lot (creating an array, creating and initializing objects) just to pass a struct.

    So, the question is, how would you approach this situation? Are there other choices I'm not considering? Are there additional advantages or disadvantages to the approaches I've presented that I'm not considering?

    Read the article

  • C# Design Questions

    - by guazz
    How should I approach unit testing of private methods? I have a class that loads employee data into a database. Here is a sample:

        public class EmployeeFacade
        {
            public Employees EmployeeRepository = new Employees();
            public TaxDatas TaxRepository = new TaxDatas();
            public Accounts AccountRepository = new Accounts();
            // ...and so on for about 20 more repositories etc.

            public bool LoadAllEmployeeData(Employee employee)
            {
                if (employee == null)
                    throw new Exception("...");

                Employees emps = new Employees();
                bool exists = emps.FetchExisting(employee.Id);
                if (!exists)
                {
                    emps.AddNew();
                }
                try
                {
                    emps.Id = employee.Id;
                    emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;
                    // emps.SomeOtherAttribute = ...;
                }
                catch {}
                try { emps.Save(); } catch {}
                try { LoadorUpdateTaxData(employee.Id, employee.TaxData); } catch {}
                try { LoadorUpdateAccountData(employee.Id, employee.AccountData); } catch {}
                // ...etc. for about 20 more other employee objects
                return true;
            }

            private bool LoadorUpdateTaxData(int employeeId, TaxData taxData)
            {
                if (taxData == null)
                    throw new Exception("...");
                // ...same format as above, but using TaxRepository
            }

            private bool LoadorUpdateAccountData(int employeeId, AccountData accountData)
            {
                // ...same format as above, but using AccountRepository
            }
        }

    I am writing an application to take serialised objects (e.g. Employee above) and load the data into the database. I have a few design questions that I would like opinions on:

    A - I am calling this class "EmployeeFacade" because I am (attempting?) to use the facade pattern. Is it good practice to name the pattern in the class name?
    B - Is it good to call the concrete entities of my DAL layer classes "repositories", e.g. "EmployeeRepository"?
    C - Is using the repositories in this way sensible, or should I create a method on the repository itself to take, say, the Employee and then load the data from there, e.g. EmployeeRepository.LoadAllEmployeeData(Employee employee)? I am aiming for cohesive classes, but this would require the repository to have knowledge of the Employee object, which may not be good.
    D - Is there any nice way around not having to check whether an object is null at the beginning of each method?
    E - I have EmployeeRepository, TaxRepository and AccountRepository declared as public for unit testing purposes. These are really private entities, but I need to be able to substitute them with stubs so that they won't write to my database (I overload the Save() method to do nothing). Is there any way around this, or do I have to expose them?
    F - How can I test the private methods, or is this just not done (something tells me it's not)?
    G - "emps.Name = employee.EmployeeDetails.PersonalDetails.Active.Names.FirstName;" breaks the Law of Demeter, but how do I adjust my objects to abide by the law?
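
    For E and F, one widely used answer (a hedged sketch with hypothetical interface names, not code from the post) is to depend on repository interfaces supplied through the constructor; tests then pass in stubs, the fields stay non-public, and the private methods are exercised indirectly through the one public entry point:

        using System;

        public class Employee { public int Id { get; set; } }

        public interface IEmployeeRepository
        {
            bool FetchExisting(int id);
            void AddNew();
            void Save();
        }

        public class EmployeeFacade
        {
            private readonly IEmployeeRepository employees;

            // Production code passes the real repository; tests pass a stub.
            public EmployeeFacade(IEmployeeRepository employees)
            {
                this.employees = employees;
            }

            public bool LoadAllEmployeeData(Employee employee)
            {
                if (employee == null)
                    throw new ArgumentNullException(nameof(employee));
                // ...private helpers are tested through this method alone...
                return true;
            }
        }

    This also softens D: a single guard clause at the public boundary means the private helpers can trust their arguments instead of re-checking for null.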

    Read the article

  • Design question for WinForms (C#) app, using Entity Framework

    - by cdotlister
    I am planning on writing a small home budget application for myself, as a learning exercise. I have built my database (SQL Server) and written a small console application to interact with it and test out scenarios on my database. Once I am happy, my next step will be to start building the application, but I am already wondering what the best/standard design would be. I am planning on using Entity Framework for handling my database entities, then LINQ to SQL/objects for getting the data, all running under a WinForms (for now) application.

    My plan (I've never used EF, and most of my development background is web apps) is to have my database, with Entity Framework in its own project, which has the connection to the database. This project would expose methods such as GetAccount(), GetAccount(int accountId), etc. I'd then have a service project that references my EF project, and on top of that my GUI project, which makes the calls to my service project.

    But I am stuck. Let's say I have a screen that displays a list of account types (Debit, Credit, Loan, ...). Once I have selected one, the next drop-down shows a list of accounts I have that suit that account type. So the OnChange event on the account-type drop-down will make a call to the service layer project, GetAccountTypes(), and I would expect back a List of account types. However, what is that AccountType object? It can't be the AccountType object from my EF project, as my GUI project doesn't have a reference to it. Would I have to have some sort of shared library, shared between my GUI and my service project, with a custom-built AccountType object? The GUI could then expect back a list of these. So my service layer would have a method:

        public List<AccountType> GetAccountTypes()

    That would then make a call to a custom method in my EF project, which would probably be the same as the above method except that it returns a list of EF.Data.AccountType (the Entity Framework generated AccountType object). The method would then have the LINQ code to get the data as I want it. My service layer would get that object, transform it into my custom AccountType object, and return it to the GUI. Does that sound at all like a good plan?
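
    That is essentially the standard DTO arrangement. A minimal sketch of the shape (all names hypothetical, and the EF context stubbed out so the sketch stands alone rather than taken from the post):

        using System.Collections.Generic;
        using System.Linq;

        // Shared library: a plain DTO with no EF dependency,
        // referenced by both the GUI project and the service project.
        public class AccountTypeDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // Stand-ins for the EF project; the real ones would be generated entities
        // and a DbContext-derived class.
        public class AccountType { public int Id { get; set; } public string Name { get; set; } }
        public class BudgetContext : System.IDisposable
        {
            public IQueryable<AccountType> AccountTypes { get; set; }
            public void Dispose() { }
        }

        // Service project: references the EF project and the shared library,
        // and maps EF entities to DTOs so entities never leak to the GUI.
        public class AccountService
        {
            public List<AccountTypeDto> GetAccountTypes()
            {
                using (var db = new BudgetContext())
                {
                    return db.AccountTypes
                             .Select(t => new AccountTypeDto { Id = t.Id, Name = t.Name })
                             .ToList();
                }
            }
        }

    The mapping feels like busywork for one class, but it pays off as soon as the entity model and the screen model start to diverge.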

    Read the article

  • Design pattern question: encapsulation or inheritance

    - by Matt
    Hey all, I have a question I have been toiling over for quite a while. I am building a templating engine with two main classes, Template.php and Tag.php, and a bunch of extension classes like Img.php and String.php. The program works like this: a Template object creates Tag objects. Each Tag object determines which extension class (Img, String, etc.) to implement. The point of the Tag class is to provide helper functions for each extension class, such as wrap('div'), addClass('slideshow'), etc. Each Img or String class is used to render code specific to what is required, so $Img->render() would give something like <img src='blah.jpg' />.

    My question is: should I encapsulate all extension functionality within the Tag object, like so:

        // Tag.php
        function __construct($namespace, $args) {
            // Sort out namespace to determine which extension to call
            $this->extension = new $namespace($this); // Pass in the Tag object so it can be used within the extension
            return $this; // Tag object
        }

        function render() {
            return $this->extension->render();
        }

        // Img.php
        function __construct(Tag $T) {
            $args = $T->getArgs();
            $T->addClass('img');
        }

        function render() {
            return '<img src="blah.jpg" />';
        }

    Usage:

        $T = new Tag("img", array(...));
        $T->render();

    ...or should I create more of an inheritance structure, because "Img is a Tag":

        // Tag.php
        public static function create($namespace, $args) {
            // Sort out namespace to determine which extension to call
            return new $namespace($args);
        }

        // Img.php
        class Img extends Tag {
            function __construct($args) {
                // Determine the namespace, then call the parent constructor
                parent::__construct($namespace, $args);
            }

            function render() {
                return '<img src="blah.jpg" />';
            }
        }

    Usage:

        $Img = Tag::create('img', array(...));
        $Img->render();

    One thing I do need is a common interface for creating custom tags; that is, rather than instantiating Img(...) and then String(...) directly, I need to instantiate each extension through Tag. I know this is a somewhat vague question; I'm hoping some of you have dealt with this in the past and can foresee certain issues with choosing each design pattern. If you have any other suggestions I would love to hear them. Thanks! Matt Mueller

    Read the article

  • JavaScript Resource Management Design Pattern

    - by Adam
    As a web developer, a common problem I find myself tackling is waiting for something to load before doing something else. In particular, I often hide elements (using either display: none; or visibility: hidden;, depending on the situation) while waiting for a background image or a CSS file to load.

    Consider this example from Last.FM. They overlay a semi-transparent PNG over each album art image so that it looks like it's inside a jewel case. They let it load when it loads, so depending on your internet speed you may temporarily see the art image by itself (without the overlay). In this case the album art looks fine without the jewel-case effect. But in similar situations I have found that I don't want the user to see the site's design mangled as resources incrementally load. So, in rare cases I have hidden everything from the user until the whole kit and kaboodle has loaded. But this is often a pain to write out, and may force the user to wait a pretty long time to see anything (besides "loading..." text).

    I can think of (and have used on occasion) some obvious solutions/compromises:

    - Use some inline CSS so that as certain parts of the DOM load and render, they will immediately have the correct size/position/etc.
    - Immediately render the navigation part of the site, so that if the user wants to use the current page purely to get somewhere else, they don't have to wait for the rest to load.
    - Load pixelated images first as placeholders for layout while lazy-loading higher-quality images as replacements.
    - Something quirky, like using a cute animated GIF to distract the user during a "loading..." phase.
    - Show useful information as a reference while loading the full UI (something akin to Gmail's Inbox Preview, etc.).

    (Sorry if my question was basically just asked and answered...) Despite all of these ideas, I still find myself hoping there are better ways of doing some of these things. So I guess what I'm looking for is some inspiration and/or any creative ways of dealing with this problem that you guys may have seen out in the wild.

    Read the article

  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up like so:

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('preferences', Integer),
            Column('allocated_rank', Integer),
            Column('allocated_project', Integer)
        )
        metadata.create_all(engine)
        mapper(Student, students_table)

    Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want, provided I avoid the error cases below. The class it is mapped from is:

        class Student(object):
            def __init__(self, sid, name):
                self.sid = sid
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0

            def __repr__(self):
                return str(self)

            def __str__(self):
                return "%s %s" % (self.sid, self.name)

    Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now if I try to do this:

        for student in students.itervalues():
            session.add(student)
        session.commit()

    it throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column:

        sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 - probably unsupported type.
        u'INSERT INTO studs (sid, name, allocated_rank, allocated_project) VALUES (?, ?, ?, ?, ?, ?, ?)'
        [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)]

    If I go back into my code, I find that when I'm copying the preferences from the given text files, it actually refers to the Project class, which is mapped to a dictionary using the unique project ids (pid) as keys. Thus, as I iterate through each student via their rank and add to the preferences set, it adds not a project id, but the reference to the project object from the projects dictionary:

        students[sid].preferences[int(rank)].add(projects[int(pid)])

    Now this is very useful to me, since I can find out all I want to about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error comes from the project object's print information:

        return "%s %s (Supervisor id: %s)" % (self.proj_id, self.proj_name, self.proj_sup)

    My questions are:

    1. I'm trying to store an object in a database field, aren't I? Would the correct way then be to copy the project information (project id, name, etc.) into its own table, referenced by the unique project id? That way the project id field in the student table can just be an integer id, and when I need more information, I join the tables? And so forth for other tables?
    2. If the above makes sense, then how does one maintain the relationship with a column of information in one table which is a key index on another table?
    3. Does this boil down to a database design problem? Are there any other elegant ways of accomplishing this?

    Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can, whilst attempting to show that I'm trying (key word here, sadly) to understand what could be going wrong.

    Read the article

  • How to design a character animation system?

    - by Andrea Benedetti
    I'm searching for suggestions and resources on possible ways to design a character animation system. I mean a system built on top of the graphics engine (as the graphics engine I use Ogre3D, which provides an animation layer) and in close contact with the logic of the game. It's for a sports title, so the question is not easy.

    Edit: What I'm searching for are suggestions and resources about action state machines (or animation state machines) built on top of the animation pipeline already provided by the graphics engine: a state-driven animation interface for use by virtually all higher-level game code.
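
    For a sense of the shape such a layer can take, here is a minimal C# sketch of an action state machine sitting above an engine's animation layer (the backend interface, state names, and transition rule are all invented for illustration, not taken from Ogre3D):

        using System.Collections.Generic;

        // Thin adapter over whatever the graphics engine's animation layer exposes.
        interface IAnimationBackend
        {
            void Play(string clip, bool loop);
            bool Finished(string clip);
        }

        class ActionStateMachine
        {
            // state -> (clip to play, state to fall back into when the clip ends)
            private readonly Dictionary<string, (string Clip, string Next)> states =
                new Dictionary<string, (string, string)>
                {
                    ["Idle"] = ("idle_loop", "Idle"),
                    ["Run"]  = ("run_cycle", "Run"),
                    ["Kick"] = ("kick_ball", "Idle"),   // one-shot: returns to Idle
                };

            private readonly IAnimationBackend backend;
            private string current = "Idle";

            public ActionStateMachine(IAnimationBackend backend) { this.backend = backend; }

            // Higher-level game code only ever asks for actions by name.
            public void Request(string state)
            {
                if (!states.ContainsKey(state)) return;
                current = state;
                backend.Play(states[state].Clip, loop: states[state].Next == state);
            }

            public void Update()
            {
                var (clip, next) = states[current];
                if (next != current && backend.Finished(clip))
                    Request(next);   // one-shot actions decay to their follow-up state
            }
        }

    Game logic stays in terms of actions ("Kick", "Run") while clip names, blending, and timing live behind the interface, which is what keeps the sports-specific move set from leaking into the rendering code.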

    Read the article

  • What is good practice in .NET system architecture design concerning multiple models and aggregates

    - by BuzzBubba
    I'm designing a larger enterprise architecture and I'm in doubt about how to separate and design the models. There are several points I'd like suggestions on: which models to define, and how to define them. Currently my idea is to define:

    - a core (domain) model
    - repositories to get data into that domain model from a database or other store
    - a business logic model that would contain business logic, validation logic, and more specific versions of the data retrieval methods
    - view models prepared for specifically formatted data output, to be parsed by views of different kinds (web, Silverlight, etc.)

    For the first model I'm puzzled about what to use and how to define it. Should this model's entities contain collections, and in what form? IList, IEnumerable or IQueryable collections? I'm thinking of immutable collections, which IEnumerable is, but I'd like to avoid huge data collections, and I'd like to offer my business logic layer access via LINQ expressions so that query trees get executed at the data level and retrieve only the really required data in situations like retrieving a very specific subset of elements amongst thousands or hundreds of thousands. What if I have an item with several thousand bids? I can't just make an IEnumerable collection of those on the model and then retrieve an item list in some repository method, or even a business model method. Should it be IQueryable, so that I actually pass my queries to the repository all the way from the business logic model layer? Should I just avoid collections in my domain model, or avoid only some collections?

    Should I separate the domain model and the business logic model or integrate them? Data would be dealt with through repositories, which would use domain model classes. Should repositories be used directly, using only classes from the domain model as data containers?

    This is an example of what I had in mind. My domain objects would look like (e.g.):

        public class Item
        {
            public string ItemName { get; set; }
            public int Price { get; set; }
            public bool Available { get; set; }

            private IList<Bid> _bids;

            public IQueryable<Bid> Bids
            {
                get { return _bids.AsQueryable(); }
            }

            public void AddNewBid(Bid newBid)
            {
                _bids.Add(new Bid { /* ... */ });
            }
        }

    where Bid would be defined as a normal class. Repositories would be defined as data-retrieval factories and used to get data into another (business logic) model, which would in turn be used to get data into view models, which would then be rendered by different consumers. I would define IQueryable interfaces for all aggregating collections, to gain flexibility and minimize the data retrieved from the real data store. Or should I make the domain model "anemic", with pure data-store entities, and define all collections for the business logic model?

    One of the most important questions is where to have IQueryable-typed collections: all the way from the repositories to the business model, or not at all, exposing only solid IList and IEnumerable from the repositories and dealing with more specific queries inside the business model, with finer-grained methods for data retrieval within the repositories?

    So, what do you think? Any suggestions?
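
    The IQueryable-versus-IList question can be made concrete with a small sketch (a hypothetical repository interface reusing the Item and Bid classes above, not code from the post): exposing IQueryable keeps queries composable and lets the store execute them, while returning IList fixes the query inside the repository.

        using System.Collections.Generic;
        using System.Linq;

        public interface IItemRepository
        {
            // Composable: the business layer appends Where/Select and the
            // final expression tree runs at the data level.
            IQueryable<Item> Items { get; }

            // Opaque: the query is fixed here; callers get materialised data.
            IList<Item> GetAvailableItems();
        }

        public static class BidQueries
        {
            // Only the fifty matching rows cross the repository boundary.
            public static List<Bid> CheapItemBids(IItemRepository repo)
            {
                return repo.Items
                           .Where(i => i.Available && i.Price < 100)
                           .SelectMany(i => i.Bids)
                           .Take(50)
                           .ToList();
            }
        }

    The usual trade-off: IQueryable leaks persistence concerns upward (a business-layer lambda can now fail at query-translation time), while IList keeps the boundary clean at the cost of one repository method per query shape.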

    Read the article

  • Looking for a better design: A readonly in-memory cache mechanism

    - by Dylan Lin
    Hi all, I have a Category entity (class) which has zero or one parent Category and many child Categories; it's a tree structure. The Category data is stored in an RDBMS, so for better performance I want to load all categories and cache them in memory when launching the application. Our system can have plugins, and we allow plugin authors to access the category tree, but they should not modify the cached items or the tree (I think a non-readonly design might cause some subtle bugs in this scenario); only the system knows when and how to refresh the tree. Here are some demo codes:

        public interface ITreeNode<T> where T : ITreeNode<T>
        {
            // No setter
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        // This class is generated by an O/R mapping tool (e.g. Entity Framework)
        public class Category : EntityObject
        {
            public string Name { get; set; }
        }

        // Because Category is not stateless, I create a cleaner view class for it.
        // This class is the node type of the category tree.
        public class CategoryView : ITreeNode<CategoryView>
        {
            public string Name { get; private set; }

            #region ITreeNode Members

            public CategoryView Parent { get; private set; }

            private List<CategoryView> _childNodes;
            public IEnumerable<CategoryView> ChildNodes
            {
                get { return _childNodes; }
            }

            #endregion

            public static CategoryView CreateFrom(Category category)
            {
                // here I can set the CategoryView.Name property
            }
        }

    So far so good. However, I want to make the ITreeNode interface reusable, and for some other types the tree should not be readonly. We are not able to do this with the readonly ITreeNode above, so I want ITreeNode to be like this:

        public interface ITreeNode<T>
        {
            // has setter
            T Parent { get; set; }

            // use ICollection<T> instead of IEnumerable<T>
            ICollection<T> ChildNodes { get; }
        }

    But if we make ITreeNode writable, then we cannot make the category tree readonly, which is not good. So I'm thinking of doing this:

        public interface ITreeNode<T>
        {
            T Parent { get; }
            IEnumerable<T> ChildNodes { get; }
        }

        public interface IWritableTreeNode<T> : ITreeNode<T>
        {
            new T Parent { get; set; }
            new ICollection<T> ChildNodes { get; }
        }

    Is this good or bad? Are there some better designs? Thanks a lot! :)
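
    One way to make the split-interface design workable (a hedged sketch using the two interfaces above; MenuNode is an invented example type) is to implement the readonly members explicitly, so code handed an ITreeNode<T> sees no mutators at all:

        using System.Collections.Generic;

        public class MenuNode : IWritableTreeNode<MenuNode>
        {
            private readonly List<MenuNode> _children = new List<MenuNode>();

            // Writable surface, visible when the static type is MenuNode
            // or IWritableTreeNode<MenuNode>.
            public MenuNode Parent { get; set; }
            public ICollection<MenuNode> ChildNodes
            {
                get { return _children; }
            }

            // Readonly surface: what an ITreeNode<MenuNode> reference sees.
            MenuNode ITreeNode<MenuNode>.Parent
            {
                get { return Parent; }
            }
            IEnumerable<MenuNode> ITreeNode<MenuNode>.ChildNodes
            {
                get { return _children; }
            }
        }

    A caveat worth noting: a plugin can still cast an ITreeNode<MenuNode> back to IWritableTreeNode<MenuNode>, so for a hard guarantee the cached tree would expose genuinely immutable node objects rather than a readonly view of mutable ones.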

    Read the article

  • How can I switch between the HDMI and DVI outputs of my graphics card?

    - by Owen Melbourne
    I've got an Nvidia GTX 560 Ti card, and my two monitors are currently hooked up to its two DVI ports. It also has a mini-HDMI port, into which I've plugged an HDMI cable (with a mini adapter) leading to my TV across the room. I'm hoping to be able to toggle between the HDMI output and the DVI outputs, but I'm not sure how I'd go about this. Could somebody please point me in the right direction? I'm not really worried about having all three on at the same time, so that isn't a problem, but if that's possible then I'll do that.

    Read the article

  • Can I get dual monitor support (2xDVI) from my (DVI+HDMI) graphics card?

    - by nray
    It seems that pure 2 x DVI dual-head video cards are becoming rare. Most cards feature something like 1 x DVI plus 1 x HDMI plus 1 x VGA, or some other combination of interfaces. The idea seems to be that you can just use an HDMI-to-DVI adapter. One result is that cards are seldom marked "2 x DVI" anymore, but does this mean that they support simultaneous output on all interfaces? Are all cards dual-head these days? Take Asus's Nvidia cards, for example: they routinely have 1 x DVI plus 1 x HDMI instead of 2 x DVI. So my question is, are these equivalent to a dual-head DVI card, or is there some detail required for dual-monitor support? I use dual-monitor stretched desktops for digital signage projects.

    Read the article

  • Can I fully disable my PCIe Video/Graphics Card via BIOS/Software?

    - by Jook
    Because the fan on my HD6700 is quite noisy, I was wondering whether I could fully disable the video card through the BIOS, or even through some software in Windows. Switching to the Intel i7 2600's integrated video already helped with the noise, but it would be great to keep the HD6700 built in while leaving it unpowered/deactivated, so that its fan could stay completely off. Of course I could just remove the video card, but I would like to avoid that. Is there any way? My mainboard is an ASUS P8H67-M Pro, with an Intel i7 2600 and an ATI HD6500.

    Read the article

  • Is it possible to update the Lenovo T500 hybrid graphics driver?

    - by Shiki
    I hope someone has had the same problem before. My laptop comes with two video "cards": an Intel MHD4500 and an ATI Mobility Radeon HD3650. I can switch between them with the "Lenovo Battery Manager" from the taskbar. All good, but Lenovo provides the drivers for this, and the problem is that they are a bit old (both the Intel and the ATI one). For example, I doubt I have any GPU acceleration in Flash, and I experience a problem with a dual-display setup which the new ATI driver would solve. Basically I want to create a new driver (I read a topic where a guy said it's possible, back when I was searching for something else). One needs a special ATI driver-creation utility or something; sorry, I can't recall its name. Thanks in advance. (Ah yes, I forgot: the OS is Windows 7 x64 Ultimate.)

    Read the article

  • How to tell, before buying, if a given graphics card will play Full HD video?

    - by Dominykas Mostauskis
    I am looking for the cheapest video card that would be capable of smooth playback of Full HD (1080p) video on a Full HD screen. An answer by @Mikhail on a related question briefly mentioned that:

        performance of video playback is largely dependent on the video accelerators present [in the card]

    Is this true? Could anyone expand on that? Are there any benchmarks or specifications that could be used to tell whether a given (low-end) card can play Full HD video smoothly? The benchmarks I have encountered are oriented towards computer games, and using them to evaluate video playback performance may be less than optimal, I imagine.

    Read the article

  • Signals and threads - good or bad design decision?

    - by Jens
    I have to write a program that performs highly computationally intensive calculations; it might run for several days. The calculation can easily be separated into different threads without the need for shared data. I want a GUI or a web service that informs me of the current status. My current design uses Boost.Signals2 and Boost.Thread; it compiles and so far works as expected. When a thread finishes one iteration and new data is available, it invokes a signal which is connected to a slot in the GUI class.

    My question(s): Is this combination of signals and threads a wise idea? In another forum somebody advised someone else not to "go down this road". Are there potentially deadly pitfalls nearby that I failed to see? Is my expectation realistic that it will be "easy" to use my GUI class to provide a web interface, or a Qt, a VTK or whatever window? Is there a more clever alternative (like other Boost libs) that I overlooked?

    The following code compiles with g++ -Wall -o main -lboost_thread-mt <filename>.cpp:

        #include <boost/signals2.hpp>
        #include <boost/thread.hpp>
        #include <boost/bind.hpp>
        #include <iostream>
        #include <iterator>
        #include <sstream>
        #include <string>
        #include <vector>

        using std::cout;
        using std::cerr;
        using std::string;

        /**
         * Called when a CalcThread has finished a new bunch of data.
         */
        boost::signals2::signal<void(string)> signal_new_data;

        /**
         * The whole data set will be stored here.
         */
        class DataCollector
        {
            typedef boost::mutex::scoped_lock scoped_lock;
            boost::mutex mutex;

        public:
            /**
             * Called by CalcThreads to store their data.
             */
            void push(const string &s, const string &caller_name)
            {
                scoped_lock lock(mutex);
                _data.push_back(s);
                signal_new_data(caller_name);
            }

            /**
             * Output everything collected so far to stdout.
             */
            void out()
            {
                typedef std::vector<string>::const_iterator iter;
                for (iter i = _data.begin(); i != _data.end(); ++i)
                    cout << " " << *i << "\n";
            }

        private:
            std::vector<string> _data;
        };

        /**
         * Several of these can calculate stuff. No data sharing needed.
         */
        struct CalcThread
        {
            CalcThread(string name, DataCollector &datcol)
                : _name(name), _datcol(datcol)
            {
            }

            /**
             * Expensive algorithms will be implemented here.
             * @param num_results how many data sets this thread calculates.
             */
            void operator()(int num_results)
            {
                for (int i = 1; i <= num_results; ++i)
                {
                    std::stringstream s;
                    s << "[";
                    if (i == num_results)
                        s << "LAST ";
                    s << "DATA " << i << " from thread " << _name << "]";
                    _datcol.push(s.str(), _name);
                }
            }

        private:
            string _name;
            DataCollector &_datcol;
        };

        /**
         * Maybe some VTK or Qt (or both) will be used someday.
         */
        class GuiClass
        {
        public:
            GuiClass(DataCollector &datcol) : _datcol(datcol) {}

            /**
             * Lets the GUI present, or at least count, the data collected so far.
             * @param caller_name the name of the thread whose data is new.
             */
            void slot_data_changed(string caller_name) const
            {
                cout << "GuiClass knows: new data from " << caller_name << std::endl;
            }

        private:
            DataCollector &_datcol;
        };

        int main()
        {
            DataCollector datcol;
            GuiClass mc(datcol);
            signal_new_data.connect(
                boost::bind(&GuiClass::slot_data_changed, &mc, _1));

            CalcThread r1("A", datcol), r2("B", datcol), r3("C", datcol),
                       r4("D", datcol), r5("E", datcol);

            boost::thread t1(r1, 3);
            boost::thread t2(r2, 1);
            boost::thread t3(r3, 2);
            boost::thread t4(r4, 2);
            boost::thread t5(r5, 3);

            t1.join();
            t2.join();
            t3.join();
            t4.join();
            t5.join();

            datcol.out();
            cout << "\nDone" << std::endl;
            return 0;
        }

    Read the article

  • Textures do not render on ATI graphics cards?

    - by Mathias Lykkegaard Lorenzen
    I'm rendering textured quads to an orthographic view in XNA through hardware instancing. On Nvidia graphics cards this all works (tested on three machines); on ATI cards it doesn't work at all (tested on two machines). How come? Culling, perhaps? My orthographic view is set up like this:

        Matrix projection = Matrix.CreateOrthographicOffCenter(0, graphicsDevice.Viewport.Width,
            -graphicsDevice.Viewport.Height, 0, 0, 1);

    and my elements are rendered with the Z-coordinate 0.

    Edit: I just figured out something weird: if I do not call the following SpriteBatch code before doing my textured quad rendering, then it won't work on Nvidia cards either. Could that be due to culling information or something like that?

        Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone);
        ...
        spriteBatch.End();

    Edit 2: Here's the full code for my instancing call.

        public void DrawTextures()
        {
            Batch.Instance.SpriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullNone,
                textureEffect);
            while (texturesToDraw.Count > 0)
            {
                TextureJob texture = texturesToDraw.Dequeue();
                spriteBatch.Draw(texture.Texture, texture.DestinationRectangle, texture.TintingColor);
            }
            spriteBatch.End();

        #if !NOTEXTUREINSTANCING
            // no work to do
            if (positionInBufferTextured > 0)
            {
                device.BlendState = BlendState.Opaque;

                textureEffect.CurrentTechnique = textureEffect.Techniques["Technique1"];
                textureEffect.Parameters["Texture"].SetValue(darkTexture);
                textureEffect.CurrentTechnique.Passes[0].Apply();

                if ((textureInstanceBuffer == null) ||
                    (positionInBufferTextured > textureInstanceBuffer.VertexCount))
                {
                    if (textureInstanceBuffer != null)
                        textureInstanceBuffer.Dispose();

                    textureInstanceBuffer = new DynamicVertexBuffer(device,
                        texturedInstanceVertexDeclaration, positionInBufferTextured,
                        BufferUsage.WriteOnly);
                }

                if (positionInBufferTextured > 0)
                {
                    textureInstanceBuffer.SetData(texturedInstances, 0,
                        positionInBufferTextured, SetDataOptions.Discard);
                }

                device.Indices = textureIndexBuffer;
                device.SetVertexBuffers(textureGeometryBuffer,
                    new VertexBufferBinding(textureInstanceBuffer, 0, 1));
                device.DrawInstancedPrimitives(PrimitiveType.TriangleStrip, 0, 0,
                    textureGeometryBuffer.VertexCount, 0, 2, positionInBufferTextured);

                // now that we've drawn, it's ok to reset positionInBuffer back to zero,
                // and write over any vertices that may have been set previously.
                positionInBufferTextured = 0;
            }
        #endif
        }
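
    A guess at the usual culprit, offered as a hedged sketch rather than a confirmed fix: in XNA 4.0, SpriteBatch.Begin silently rewrites several GraphicsDevice states, and what an uninitialised state contains can differ between vendors' drivers. Resetting every state the instanced draw depends on, immediately before issuing it, removes that dependence on driver defaults:

        using Microsoft.Xna.Framework.Graphics;

        static class DeviceStateReset
        {
            // Call right before DrawInstancedPrimitives so the draw never
            // inherits whatever SpriteBatch (or the driver) left behind.
            public static void Apply(GraphicsDevice device)
            {
                device.BlendState        = BlendState.Opaque;
                device.DepthStencilState = DepthStencilState.Default;
                device.RasterizerState   = RasterizerState.CullNone;  // quads may wind either way
                device.SamplerStates[0]  = SamplerState.LinearClamp;
            }
        }

    The code above already sets BlendState but not RasterizerState or DepthStencilState on the instanced path, which would fit the symptom of it working only after a SpriteBatch.Begin that happens to set them.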

    Read the article
