Search Results

Search found 23925 results on 957 pages for 'multiple render targets'.

Page 30/957

  • Need efficient way to keep enemy from getting hit multiple times by same source

    - by TenFour04
    My game's a simple 2D one, but this probably applies to many types of scenarios. Suppose my player has a sword, or a gun that shoots a projectile that can pass through and hit multiple enemies. While the sword is swinging, there is a duration where I am checking for the sword making contact with any enemy on every frame. But once an enemy is hit by that sword, I don't want him to continue getting hit over and over as the sword follows through. (I do want the sword to continue checking whether it is hitting other enemies.) I've thought of a couple of different approaches (below), but they don't seem like good ones to me. I'm looking for a way that doesn't force cross-referencing (I don't want the enemy to have to send a message back to the sword/projectile), and I'd like to avoid generating/resetting multiple array lists with every attack.

    1. Each time the sword swings it generates a unique id (maybe by just incrementing a global static long). Every enemy keeps a list of ids of swipes or projectiles that have already hit it, so the enemy knows not to get hurt by something multiple times. Downside: every enemy may have a big list to compare against, so projectiles and sword swipes would have to broadcast their end-of-life to all enemies and cause a search-and-remove on every enemy's array list. Seems kind of slow.

    2. Each sword swipe or projectile keeps its own list of enemies that it has already hit, so it knows not to apply damage again. Downsides: I have to generate a new list (probably pull one from a pool and clear it) every time a sword is swung or a projectile shot. Also, this breaks down modularity, because now the sword has to send a message to the enemy, and the enemy has to send a message back to the sword. Seems to me that two-way streets like this are a great way to create very difficult-to-find bugs.
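
    A common way to get the second approach without the two-way messaging is to let each attack own a hash set of the enemies it has already hit: the enemy never has to know about the attack at all, and the set dies with the swing (or is returned to a pool and cleared), so nothing broadcasts end-of-life. A minimal sketch, with Enemy/takeDamage as hypothetical placeholder types:

        import java.util.HashSet;
        import java.util.Set;

        class Enemy {
            void takeDamage(int amount) { /* reduce health, play effects, etc. */ }
        }

        class SwordSwing {
            // One set per swing; identity-based, so no ids need to be generated.
            private final Set<Enemy> alreadyHit = new HashSet<Enemy>();

            // Called every frame for each enemy the blade currently overlaps.
            void onContact(Enemy enemy) {
                // add() returns false if the enemy was already in the set,
                // so each enemy takes damage at most once per swing.
                if (alreadyHit.add(enemy)) {
                    enemy.takeDamage(10);
                }
            }
        }

    When the swing's lifetime ends the set is simply discarded, so there is no search-and-remove pass over enemy lists and no enemy-side cleanup.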

    Read the article

  • Single complex or multiple simple autoload functions [on hold]

    - by Tyson of the Northwest
    Using spl_autoload_register(), should I use a single autoload function that contains all the logic to determine where the include files are, or should I break each include grouping into its own function, with its own logic to include the files for the called class? As the places where include files may reside expand, so too will the logic of a single function. If I break it into multiple functions I can add functions as new groupings are added, but the functions will be copy/pastes of each other with minor alterations.

    Currently I have a tool with a single registered autoload function that picks apart the class name, tries to predict where the file is, and then includes it. Due to naming conventions for the project this has been pretty simple:

        if has namespace
            if in template namespace
                look in Root\Templates
            else
                look in Root\Modules\Namespace
        else
            look in Root\System
        if file exists
            include

    But we are starting to include Interfaces and Traits in our codebase, and it hurts me to include the type of a thing in its name. So instead of a single autoload function that digs through the class name, looks for the file, and has increasingly complex logic to it, we are looking at having multiple autoload functions registered. But each one follows the same pattern, and any time I see that I get paranoid about code copying:

        function systemAutoloadFunc
            logic to create probable filename
            if filename exists in system
                include it and return true
            else
                return false

        function moduleAutoloadFunc
            logic to create probable filename
            if filename exists in modules
                include it and return true
            else
                return false

    Every autoload function will follow that pattern, and the tail of each function (if filename exists, include it and return true, else return false) is going to be identical code. This makes me paranoid about having to update it later across the board if the file_exists/include pattern we are using ever changes. Or is it just that, paranoia, and the multiple functions with some identical code is the best option?
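
    The design point here is language-agnostic: keep one loader body, parameterize the part that varies (the root directory), and register that one body once per grouping, so the exists-then-include tail lives in exactly one place. A rough sketch of that shape, written in Java with stand-in types (includeFile() plays the role of PHP's include; the chain mimics spl_autoload_register's first-loader-wins behavior):

        import java.io.File;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Predicate;

        class LoaderChain {
            private final List<Predicate<String>> loaders = new ArrayList<>();

            // Factory: one body, many registrations; only rootDir varies.
            void register(final String rootDir) {
                loaders.add(className -> {
                    File file = new File(rootDir, className.replace('\\', '/') + ".php");
                    // The shared exists-then-include tail lives here, once.
                    return file.exists() && includeFile(file);
                });
            }

            // First loader that returns true wins, as in an autoload chain.
            boolean load(String className) {
                for (Predicate<String> loader : loaders) {
                    if (loader.test(className)) {
                        return true;
                    }
                }
                return false;
            }

            private boolean includeFile(File file) {
                /* stand-in for PHP's include */
                return true;
            }
        }

    With this shape, if the file_exists/include pattern ever changes, it changes in one method instead of in N near-identical copies.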

    Read the article

  • How should VertexBuffers be used with Multiple Monitors in DirectX 9

    - by Joshua C
    I am currently using DirectX 9 on a machine with two GPUs and three monitors, and I am trying to draw a triangle on each monitor using vertex buffers; a DirectX hello-world with multiple monitors, if you will. I am familiar with some DirectX coding, but new to multiple-monitor DirectX coding. I may be going about this the wrong way, so please do correct me if I'm doing something wrong.

    I have created a Direct3D Device for each enumerated adapter, sharing the same Form handle. This allows me to successfully use all three monitors in full-screen mode:

        For Each Adapter In Direct3D.Adapters
            Dim PresentParameters As New PresentParameters
            'Setup PresentParameters
            PresentParameters.Windowed = False
            PresentParameters.DeviceWindowHandle = MainForm.Handle
            Dim Device As New Device(Direct3D, Adapter.Adapter, DeviceType.Hardware,
                                     PresentParameters.DeviceWindowHandle,
                                     CreateFlags.HardwareVertexProcessing, PresentParameters)
            Device.SetRenderState(RenderState.Lighting, False)
            Devices.Add(Device)
        Next

    I can also draw text to each device successfully, using a different Font for each Device. But when I render a triangle using a different VertexBuffer for each Device, only two monitors display the triangle: one of the two monitors on the shared GPU, plus the monitor on its own GPU, display properly.

        VertexBuffer = New VertexBuffer(Device, 4 * Marshal.SizeOf(GetType(ColoredVertex)),
                                        Usage.WriteOnly, VertexFormat.None, Pool.Managed)
        Dim Verts = VertexBuffer.Lock(0, 0, LockFlags.None)
        Verts.WriteRange({
            New ColoredVertex(-.5, -.5, 1, ForeColor),
            New ColoredVertex(0, .5, 1, ForeColor),
            New ColoredVertex(.5, -.5, 1, ForeColor)
        })
        VertexBuffer.Unlock()
        VertexDeclaration = New VertexDeclaration(Device, {
            New VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position, 0),
            New VertexElement(0, 12, DeclarationType.Color, DeclarationMethod.Default, DeclarationUsage.Color, 0),
            VertexElement.VertexDeclarationEnd
        })

    Render code:

        Device.SetStreamSource(0, VertexBuffer, 0, Marshal.SizeOf(GetType(ColoredVertex)))
        Device.VertexDeclaration = VertexDeclaration
        Device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1)

    I have to assume the fact that two of the monitors share the same physical card comes into play. Should I use multiple buffers on the same card, and if so, how? Or what is the way I should access the VertexBuffer across Devices? Another thought I had: the non-working monitor acts like there are no lights. Is turning off lighting on each device on the same card causing issues somehow?

    Read the article

  • Understanding ASP.NET MVC IView and IView.Render

    - by Harpreet
    I'm trying to devise a method of doing VERY simple ASP.NET MVC plugins, but mostly I'm trying to understand how view rendering works. I've distilled my problem down to this:

        public class CustomView : IView
        {
            public void Render(ViewContext viewContext, TextWriter writer)
            {
                writer.Write( /* string to render */ );
            }
        }

    Now, within that Write method I can render any string to the view, but when I put a line of code in there wrapped with <% %> it renders the code to the view literally, rather than parsing and executing it. I've tried adding things like <%@ Page ... to the beginning of the string and it just renders that literally as well. Among many attempts, I'm currently calling it this way within a controller action:

        ...
        CustomView customView = new CustomView();
        ViewResult result = new ViewResult();
        result.View = customView;
        result.ViewName = "Index.aspx";
        result.MasterName = "";
        return result;

    What am I missing or doing wrong that this won't work? The ViewResult seems to have the WebFormViewEngine in its ViewEngines collection. I just want to understand this, and after stripping it down to what I think should be the minimum, it doesn't behave as I think it should. I'm guessing some other part of the machinery is involved/missing, but I can't figure out what.

    Read the article

  • "The image <name> cannot be displayed because it contains errors" when using pchart Render method

    - by christophe-milard
    Hi, I am trying to use the pChart package (over PHP) to build and directly display graphs/charts. At this time, I am just trying to run their provided example (Example1.php), where I have replaced the final

        $Test->Render("example1.png");

    with

        $Test->Stroke();

    But when I do this, I get "The image cannot be displayed because it contains errors" in the browser. If I leave the original $Test->Render(...), the generated image is OK (but not sent). I have read that there is (or was?) an issue with Mozilla/Firefox browsers regarding images being requested twice and the referer URL, but when I browse the pChart home page, I can use their sandboxes and get the result of my tests directly displayed in my browser (http://pchart.sourceforge.net/demo.php). So there must be a way (or a nice workaround) to send the generated graphs directly to the browser successfully. If your answer is to generate the image (i.e. use Render) and then send it afterwards, please be precise about how to do this (how do I destroy the generated files automatically, permissions, ...). I am new to this, sorry in advance if it's obvious... ;-)

    Read the article

  • Rails 2.3.2 trying to render ERB instead of HAML

    - by c00lryguy
    Rails is suddenly trying to render ERB instead of Haml and I can't figure out why. I've created new Rails projects, reinstalled Haml, and reinstalled Rails. Here are exactly the steps I take when making my application (Rails 2.3.2):

        rails> rails test
        rails> cd test
        rails\test> haml --rails .
        rails\test> ruby script\generate model user email:string password:string
        rails\test> ruby script\generate controller users index
        rails\test> rake db:migrate

    Here's what the UsersController looks like:

        class UsersController < ApplicationController
          def index
            @users = User.all
          end
        end

    My routes:

        ActionController::Routing::Routes.draw do |map|
          map.resources :users
        end

    I now create views\users\index.html.haml:

        %table
          %th(style="text-align: left;")
            %h1 Users
          - for user in @users
            %tr
              %td= user.email
              %td= user.password

    And run the server... I navigate to localhost:3000/users and I get this error message:

        Template is missing
        Missing template users/index.erb in view path app/views

    For some reason Rails is trying to find and render .erb files instead of .haml files. vendor\plugins\haml\init.rb exists, untouched. I've reinstalled Haml (Pretty Penny) multiple times and still get the same results. I've also tried adding config.gem 'haml' to my environment.rb, but this also doesn't work. I can't figure out why Rails suddenly will not render Haml for me.

    Read the article

  • What would be a lightweight way to render a JSP page without an App/Web Server

    - by kolrie
    First, some background: I will have to work on code for a JSP that will demand a lot of code fixing and testing. This JSP will receive a structure of given objects and render it according to a couple of rules.

    What I would like to do is write a "test server" that would read some mock data out of a fixtures file and feed those objects, via a factory, to the JSP in question. The target app server is WebSphere, and I would like to code, change, and code again in order to test the HTML rendering quickly.

    I have done something similar in the past, but the JSP part was just calling a method on a rendering object, so I created an ad hoc HTTP server that would read the fixture files, parse them, and render HTML. All I had to do was run it inside RAD, change the HTML code, and hit F5.

    So the question pretty much comes down to: is there any standalone library or lightweight server (I thought of Jetty) that would take a JSP and, given the correct contexts (Request, Response, Session, etc.), render the proper HTML?
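
    Embedded Jetty can do this; it can serve JSPs from a plain main() method. A minimal sketch, assuming the jetty-server, jetty-webapp and Jetty JSP (apache-jsp) artifacts, plus the JSP initializer they require, are on the classpath, and that a hypothetical src/main/webapp folder holds the JSP under test:

        import org.eclipse.jetty.server.Server;
        import org.eclipse.jetty.webapp.WebAppContext;

        public class JspTestServer {
            public static void main(String[] args) throws Exception {
                Server server = new Server(8080);          // plain HTTP on localhost:8080

                WebAppContext webapp = new WebAppContext();
                webapp.setContextPath("/");
                webapp.setResourceBase("src/main/webapp"); // folder containing the JSPs
                // Mock data can be exposed here, e.g. as a context attribute that
                // the page (or the factory it uses) reads; the path is illustrative.
                webapp.setAttribute("fixtures", "fixtures/mock-data.xml");

                server.setHandler(webapp);
                server.start();
                server.join();                             // block until killed
            }
        }

    With a setup like this the edit-refresh loop stays local: change the JSP, reload the browser, and Jetty's JSP engine typically recompiles the page in place, with no WebSphere deployment in the loop.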

    Read the article

  • JSF2: Re-render all components on page that have a given ID, without absolute paths

    - by tlind
    Is there any way in JSF 2.0/PrimeFaces of re-rendering all components (using the PrimeFaces update="id1 id2..." attribute or the <f:ajax render="..."/> tag) that have a given ID, regardless of whether they are in the same form as the button triggering the AJAX re-render or not? For example, I want my button to re-render all sections on a page that visualize the user's current shopping basket.

    Right now, I always have to specify the absolute path to the components that I want updated, e.g. update=":header:basket :left-sidebar:menu:basket", which is rather impractical if the structure of the page changes (besides, I have not been able to figure out the correct path for one of these components).

    I already tried to implement a custom EL function like this, which traverses the component tree:

        update="#{utilBean.findAllComponentsMatchingId('basket')}"

    but at the time that function is evaluated, apparently not the entire component tree has been set up, as it doesn't contain the components I am looking for. How can I deal with this? There certainly must be an easy way of doing AJAX-based updates of sections of the page that are not part of the current <h:form>? Thanks!
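
    For the tree-traversal part, JSF 2.0's visit API is the usual tool: it walks the entire view and can collect the client IDs of every component whose local id matches. A sketch, with the caveat that it must run late enough that the view is fully built (e.g. from an action or a preRenderView listener rather than during view construction, which may be exactly the timing problem described above):

        import java.util.ArrayList;
        import java.util.List;
        import javax.faces.component.UIComponent;
        import javax.faces.component.visit.VisitCallback;
        import javax.faces.component.visit.VisitContext;
        import javax.faces.component.visit.VisitResult;
        import javax.faces.context.FacesContext;

        public final class ComponentFinder {

            // Returns a space-separated list of absolute client IDs (usable in
            // update="...") for every component whose local id equals the given id.
            public static String clientIdsFor(final String id) {
                final FacesContext ctx = FacesContext.getCurrentInstance();
                final List<String> clientIds = new ArrayList<String>();

                ctx.getViewRoot().visitTree(VisitContext.createVisitContext(ctx),
                    new VisitCallback() {
                        public VisitResult visit(VisitContext context, UIComponent target) {
                            if (id.equals(target.getId())) {
                                clientIds.add(target.getClientId(context.getFacesContext()));
                            }
                            return VisitResult.ACCEPT; // keep walking the whole tree
                        }
                    });

                StringBuilder sb = new StringBuilder();
                for (String clientId : clientIds) {
                    if (sb.length() > 0) sb.append(' ');
                    sb.append(':').append(clientId); // leading ':' makes the path absolute
                }
                return sb.toString();
            }
        }

    The leading colon makes each id absolute, so the result works regardless of which form the triggering button sits in; the class and method names here are illustrative, not part of any framework API.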

    Read the article

  • Rails Layout Rendering with controller condition

    - by Victor Martins
    I'm not sure of the best way to do this. In my application.html.erb I define a header div. My default (root) controller is a homepage controller, and I'd like the header to render some content when I'm on the homepage index, but to render different content inside that header for all other controllers. How can I write a condition in that header div to render different content based on the controller currently being rendered?

    Read the article

  • How to code multiple button navigation with Java activities [migrated]

    - by user1738212
    Question 1: I have 2 activities and I was wondering how to optimize them. I can either create 2 activities with multiple listeners, or create multiple Java files, one per button (onclick listener).

    Question 2: I have tried to create multiple listeners in one Java file but can only get one button to work. What is the syntax for multiple listeners in one Java file?

    Here is my updated code; the issue now is that no matter which button is clicked, it leads to the same page.

        // Activity1.java
        package install.fineline;

        import android.app.Activity;
        import android.content.Context;
        import android.content.Intent;
        import android.os.Bundle;
        import android.widget.Button;
        import android.view.View;
        import android.view.View.OnClickListener;

        public class Activity1 extends Activity2 {

            Button Button1;
            Button Button2;
            Button Button3;
            Button Button4;
            Button Button5;
            Button Button6;

            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.fineline);
                addListenerOnButton();
            }

            public void addListenerOnButton() {
                final Context context = this;

                Button1 = (Button) findViewById(R.id.autobody);
                Button1.setOnClickListener(new OnClickListener() {
                    public void onClick(View arg0) {
                        Intent intent = new Intent(context, Activity1.class);
                        startActivity(intent);
                    }
                });

                Button2 = (Button) findViewById(R.id.glass);
                Button2.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View arg0) {
                        Intent intent = new Intent(context, Activity1.class);
                        startActivity(intent);
                    }
                });

                Button3 = (Button) findViewById(R.id.wheels);
                Button3.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View arg0) {
                        Intent intent = new Intent(context, Activity1.class);
                        startActivity(intent);
                    }
                });

                Button4 = (Button) findViewById(R.id.speedy);
                Button4.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View arg0) {
                        Intent intent = new Intent(context, Activity1.class);
                        startActivity(intent);
                    }
                });

                Button5 = (Button) findViewById(R.id.sevan);
                Button5.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View arg0) {
                        Intent intent = new Intent(context, Activity1.class);
                        startActivity(intent);
                    }
                });

                Button6 = (Button) findViewById(R.id.towing);
                Button6.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View arg0) {
                        Intent intent = new Intent(context, Activity1.class);
                        startActivity(intent);
                    }
                });
            }
        }

        // activity2.java
        package install.fineline;

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.Button;

        public class Activity2 extends Activity {

            Button Button1;
            public void onCreate1(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.autobody);
            }

            Button Button2;
            public void onCreate2(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.glass);
            }

            Button Button3;
            public void onCreate3(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.wheels);
            }

            Button button4;
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.speedy);
            }

            Button Button5;
            public void onCreate5(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.sevan);
            }

            Button Button6;
            public void onCreate6(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.towing);
            }
        }
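
    Two things stand out in the posted code: every listener passes Activity1.class to its Intent, which is why every button opens the same page, and Android only ever invokes onCreate(Bundle), so the numbered onCreate1()..onCreate6() variants in Activity2 never run. A hedged sketch of one common shape, where AutobodyActivity, GlassActivity, etc. are hypothetical separate Activity subclasses, each with its own layout and its own entry in AndroidManifest.xml:

        package install.fineline;

        import android.app.Activity;
        import android.content.Intent;
        import android.os.Bundle;
        import android.view.View;

        public class MainActivity extends Activity {

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.fineline);

                // Six listeners in one file: one helper, six calls,
                // each pointing at a different target Activity.
                wire(R.id.autobody, AutobodyActivity.class);
                wire(R.id.glass, GlassActivity.class);
                wire(R.id.wheels, WheelsActivity.class);
                wire(R.id.speedy, SpeedyActivity.class);
                wire(R.id.sevan, SevanActivity.class);
                wire(R.id.towing, TowingActivity.class);
            }

            private void wire(int buttonId, final Class<? extends Activity> target) {
                findViewById(buttonId).setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        startActivity(new Intent(MainActivity.this, target));
                    }
                });
            }
        }

    The helper keeps all the listeners in one file while the thing that actually varies, the destination Activity, is a parameter rather than copy-pasted boilerplate.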

    Read the article

  • C# Conditional Compilation and framework targets

    - by McKAMEY
    There are a few minor places where code for my project could be drastically improved if the target framework were a newer version. I'd like to be able to better leverage conditional compilation in C# to switch these as needed. Something like:

        #if NET_40
        using FooXX = Foo40;
        #elif NET_35
        using FooXX = Foo35;
        #else
        using FooXX = Foo20;
        #endif

    Do these symbols come for free? Do I need to inject these symbols as part of the project configuration? That seems easy enough to do, since I'll know which framework is being targeted from MSBuild. I think I've seen that the NET_40 symbol isn't defined? If so, could I do this?

        #if !NET_35 && !NET_20
        #define NET_40
        #endif

    Or do I need to define it in the MSBuild command:

        /p:DefineConstants="NET_40"

    Read the article

  • WinForms app config manager is x86 and cannot reference assemblies that target Any CPU

    - by ivos
    Hi, I'm using Win7 x64 and Visual Studio 2010. I created a library/framework targeting Any CPU. Then I created a new WinForms project that uses that framework, leaving the default values of the wizard; I mean, I didn't change anything. When I reference my framework, VS cannot find the assemblies. If I go to the project properties, the project is targeting Any CPU (as expected; I can change it if I want). But if I go to Configuration Manager, the only choice I have for that project is x86, and I guess that is the problem. I tried to add Any CPU as a new target, but I was unable to. Could someone help me? :) Thanks in advance!

    Read the article

  • cmake: Target-specific preprocessor definitions for CUDA targets seem not to work

    - by Nils
    I'm using cmake 2.8.1 on Mac OS X 10.6 with CUDA 3.0. I added a CUDA target which needs BLOCK_SIZE set to some number in order to compile:

        cuda_add_executable(SimpleTestsCUDA
            SimpleTests.cu
            BlockMatrix.cpp
            Matrix.cpp
        )

        set_target_properties(SimpleTestsCUDA PROPERTIES COMPILE_FLAGS -DBLOCK_SIZE=3)

    When running make VERBOSE=1 I noticed that nvcc is invoked without -DBLOCK_SIZE=3, which results in an error, because BLOCK_SIZE is used in the code but defined nowhere. I used the same definition for a CPU target (using add_executable(...)) and there it worked.

    So now the questions: how do I figure out what cmake does with the set_target_properties line if it points to a CUDA target? Googling around hasn't helped so far, and a workaround would be cool.

    Read the article

  • Team Build Reports as "Failed" Even Though All Targets Succeeded

    - by benjy
    Hi, I've written a custom MSBuild script to be used with Team Build, as I am storing PHP in TFS and of course it isn't compiled. My custom script calls the CoreGet target to get the latest version of the files, then copies them, ZIPs them, and FTPs the ZIP archive to a testing server. All of that is working fine. The problem I am having is that despite the build succeeding - see the output in BuildLog.txt:

        Done executing task "BuildStep".
        Done building target "FTP" in project "TFSBuild.proj".
        Done executing task "CallTarget".
        Done building target "EndToEndIteration" in project "TFSBuild.proj".
        Done Building Project "C:\Documents and Settings\tfsservice\Local Settings\Temp\Code\PHP\BuildType\TFSBuild.proj" (EndToEndIteration target(s)).

        Build succeeded.
            0 Warning(s)
            0 Error(s)

    the build still reports as having failed; the build log in Visual Studio shows it as failed (screenshot not reproduced in this listing). Anyone know how I can make it report as having succeeded? Thanks very much in advance, Benjy

    P.S.: Please let me know if it would be helpful to post the whole build script. Thanks!

    Read the article

  • How Java Runtime Maps to Targets

    - by zharvey
    According to the Javadocs for Runtime:

        Every Java application has a single instance of class Runtime that allows the
        application to interface with the environment in which the application is
        running. The current runtime can be obtained from the getRuntime method.
        An application cannot create its own instance of this class.

    My question is: what's their definition of an application? Is each JAR/WAR/EAR considered a standalone application? What about a plain ole' Driver.class class with a main() method? What about JEE containers that house EARs and EJBs?

    I guess I'm trying to understand how many Runtime instances could be up and running inside a complex (JEE) system, and understanding that requires me to understand what specific "things" constitute an "application" in Java terminology. Thanks in advance!
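
    In practice, "application" in that Javadoc means the JVM process: there is exactly one Runtime per running JVM, so everything loaded into the same JVM (for instance, every WAR and EAR in a JEE container that runs as one process) shares a single Runtime, while each separately launched java process gets its own. A quick check:

        public class RuntimeDemo {
            public static void main(String[] args) {
                Runtime a = Runtime.getRuntime();
                Runtime b = Runtime.getRuntime();

                // Same object every time: one Runtime per JVM process.
                System.out.println(a == b); // prints true

                // It describes the process's environment, not any one JAR or class.
                System.out.println(a.availableProcessors());
                System.out.println(a.maxMemory());
            }
        }

    So counting Runtime instances in a complex JEE system reduces to counting JVM processes, not deployed artifacts.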

    Read the article

  • How to set which version of the VC++ runtime Visual Studio 2005 targets

    - by TallGuy
    I have an application that contains a VC++ project (along with C# projects). Previously (i.e. during the last year or so), when a build was done, Visual Studio 2005 appears to have been targeting VC++ runtime version 8.0.50727.762. At least, that is what the Assembly.dll.intermediate.manifest file tells me:

        <?xml version='1.0' encoding='UTF-8' standalone='yes'?>
        <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.762'
                                processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
            </dependentAssembly>
          </dependency>
        </assembly>

    This version number matches the Visual Studio 2005 version number. The application worked fine when deployed to the webserver. The sun was shining, the birds were singing, and all was right with the world.

    Now something has changed. I don't know what - a security patch, an obscure Visual Studio setting, or something else. Now Visual Studio 2005 seems to be targeting the wrong version of the VC++ runtime:

        <?xml version='1.0' encoding='UTF-8' standalone='yes'?>
        <assembly xmlns='urn:schemas-microsoft-com:asm.v1' manifestVersion='1.0'>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type='win32' name='Microsoft.VC80.CRT' version='8.0.50727.4053'
                                processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' />
            </dependentAssembly>
          </dependency>
        </assembly>

    When I deploy the application to the webserver, I get the dreaded "This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem. (Exception from HRESULT: 0x800736B1)" error.

    This problem occurs even when I recompile previous versions of the application. I can absolutely guarantee that nothing at all has changed in the solution - we zip up the entire contents of the solution as part of the build process and archive it. I have unzipped a number of these archives to a temp directory, verified that the previous manifest file refers to 8.0.50727.762, recompiled using exactly the same command at the command line, and then verified that the new manifest file now refers to 8.0.50727.4053.

    I am using Microsoft Visual Studio 2005 Version 8.0.50727.762 (SP.050727-7600) and Microsoft Visual C++ 2005 77646-008-0000007-41610.

    Why would Visual Studio switch to a different version of the VC++ runtime? How do I specify which version it should use? What is going wrong here?

    Read the article

  • asp:Validator in invisible elements + invisible targets

    - by Richard Neil Ilagan
    Somewhat straightforward: will asp:Validators still perform validation when they're in invisible containers? How about if their ControlToValidate target is invisible? For example:

        <asp:Panel id="myPanel" runat="server" visible="false">
            <asp:Textbox id="myTextbox" runat="server" />
            <asp:RequiredFieldValidator id="myRfv" runat="server" controltovalidate="myTextbox" />
        </asp:Panel>

    Above is a Validator in an invisible Panel. Would myRfv still perform validation? How about if myTextbox is invisible instead? I'm asking this because I have very specialized Validators in my ASPX, wherein I also have Panels which are hidden/shown dynamically. While I'm all for disabling the validators themselves, I'm just curious whether they'll automatically disable anyway. Thanks guys! :D

    Read the article

  • HREF link that targets nothing, does not want to use hash or void(0)

    - by Mattis
    I have a link that I want to be able to click to trigger a piece of jQuery code. Currently I have

        <a href="#" id="foo">Link</a>

    and

        $('#foo').click(function(){
            // Do stuff
        });

    which works well. But I have always hated using the hash in this way: the page flickers and the hash is added to the page URL. One alternative is to use

        <a href="javascript:void(0);" id="foo">Link</a>

    but I also dislike seeing that piece of code in the browser status bar; it looks tacky. What I'd rather have is an explanatory JavaScript placeholder that does nothing, like

        <a href="javascript:zoom();" id="foo">Link</a>

    which actually works, but throws a ReferenceError in the JavaScript console since there is no such function. What's the minimum definition of a function that does nothing? Are there any other alternatives? Should I just skip the link and use something like

        <span id="foo" style="cursor:pointer;cursor:hand;">Link</span>

    instead?

    Read the article

  • Make multiple targets in 'all'

    - by Hiett
    I'm trying to build a debug and a release version of a library with a Makefile and copy those libraries to the relevant build directories, e.g.:

        .PHONY: all clean distclean

        all: $(program_NAME_DEBUG)
            $(CP) $(program_NAME_DEBUG) $(BUILD_DIR)/debug/$(program_NAME_DEBUG)
            $(RM) $(program_NAME_DEBUG)
            $(RM) $(program_OBJS)
            $(program_NAME_RELEASE)   # this recipe line is run as a shell command
            $(CP) $(program_NAME_RELEASE) $(BUILD_DIR)/release/$(program_NAME_RELEASE)
            $(RM) $(program_NAME_RELEASE)
            $(RM) $(program_OBJS)

        $(program_NAME_DEBUG): $(program_OBJS)
            $(LINK_DEBUG.c) -shared -Wl,-soname,$(program_NAME_DEBUG) $(program_OBJS) -o $(program_NAME_DEBUG)

        $(program_NAME_RELEASE): $(program_OBJS)
            $(LINK_RELEASE.c) -shared -Wl,-soname,$(program_NAME_RELEASE) $(program_OBJS) -o $(program_NAME_RELEASE)

    The 1st target in all ($(program_NAME_DEBUG)) compiles OK, but the 2nd ($(program_NAME_RELEASE)) produces the following error:

        libGlamdring_rel.so
        make: libGlamdring_rel.so: Command not found
        make: *** [all] Error 127

    (libGlamdring_rel.so is the value of $(program_NAME_RELEASE).) It doesn't seem to be recognising the 2nd target as it does the 1st?

    Read the article

  • How best to implement support for multiple devices in a web application.

    - by Kabeer
    Hello. My client would like a business application to support 'every possible device'. The application in question is essentially a web application, and 'every possible device', I believe, encompasses mobile phones, netbooks, the iPad, other browser-supporting devices, etc. The application is somewhat complex w.r.t. the data it captures and other functions it performs (reporting). If I keep adding complexity to the application, I guess there are more chances of it not working on other devices.

    I'd like to know how web applications conventionally support multiple devices. Are there multiple versions of the presentation layer (like the m.website.com sites I often find dedicated to mobile devices)? Further, if my application is to take advantage of JavaScript or RIA (Flash, Silverlight), then what are the consequences and workarounds?

    Mine is a .NET-based application, and the stack also contains the Ext JS JavaScript library. While I would like to use it for sure, considering that I would be doing a lot of work in JavaScript rather than HTML, this could be a problem. The answer to the above could be descriptive. If there is something already prescribed out there, please share the link(s). Thanks.

    Read the article

  • Set all link targets for a page

    - by zac
    I have links that are dynamically generated and I need to set the target for all of them. How could I do this with JavaScript? I found something that looks like it should work using jQuery:

        $("#myDiv a").attr('target', '_top');

    but I don't want to use a library for this, and I imagine a couple of lines of plain JavaScript would take care of it... I just don't know how to write them.

    Read the article

  • How to configure an NSPopupButton for displaying multiple values in a TableView?

    - by jekmac
    Hi there! I'm using two entities A and B with a many-to-many relationship. Say I have an entity A with attribute aAttrib and a to-many relationship aRelat to another entity B, which has attribute bAttrib and a to-many relationship bRelat back to entity A.

    Now I am building an interface with two tables, one for entity A and another for entity B. The table for entity B has two columns, one for bAttrib and one for the relationship aRelat. The aRelat column should be an NSPopupButtonCell displaying multiple aAttrib values. I'd like to set all the bindings in Interface Builder.

    I have two NSArrayControllers, one for each entity (Object Controller Mode: Entity), each with its Managed Object Context bound to File's Owner. The table column with the PopUpButtonCell has these bindings: Content bound to the Entity A controller with Controller Key arrangedObjects; Content Values bound to Entity A with Model Key Path aAttrib; Selected Object bound to Entity B with Model Key Path bRelat.

    I know this configuration doesn't allow setting multiple values, but I don't know the right one. I'm getting the following message:

        HIToolbox: ignoring exception 'Unacceptable type of value for to-many relationship:
        property = "bRelat"; desired type = NSSet; given type = NSCFString; value = testValue.'
        that raised inside Carbon event dispatch...

    Does anyone have any idea?

    Read the article

  • Azure, don't give me multiple VMs, give me one elastic VM

    - by FransBouma
    Yesterday, Microsoft revealed new major features for Windows Azure (see ScottGu's post). It all looks shiny and great, but after reading most of the material describing the new features, I still find the overall idea behind it flawed: why should I care how many VMs my web app runs on? Isn't that a problem for the Windows Azure engineers/software to solve? And if I need the file system, why can't I simply get a virtual filesystem?

    To illustrate my point, let's use a real example: a product website with a customer system/database and, next to it, a support site with an accompanying database. Both are written in .NET, using ASP.NET, and each uses a SQL Server database. The product website offers files for customers to download; very simple. You have a couple of options to host these websites:

    - Buy a server, place it in a rack at an ISP and run the sites on that server.
    - Use 'shared hosting' with an ISP, which means your sites' appdomains run on the same machine as others', the files are stored there as well, and the databases are hosted on the same server as the other shared databases.
    - Hire a VM, install your OS of choice at an ISP, and host the sites on that VM; basically the same as the first option, except you don't have a physical server.
    - At some cloud vendor, either host the sites 'shared' or in a VM. See above.

    With all of those options, scalability is a problem, even the cloud-based ones, though not for the same reasons:

    - The physical server solution has the obvious problem that if you need more power, you need to buy a bigger server or more servers, which requires you to add replication and other overhead.
    - Shared hosting solutions are almost always capped on memory usage/traffic and database size: if your sites get too big, you have to move out of the shared hosting environment and start over with one of the other solutions.
    - The VM solution, be it a VM at an ISP or 'in the cloud' at e.g. Windows Azure or Amazon, in theory allows scaling out by simply instantiating more VMs. However, that too introduces the same overhead problems as the physical servers: suddenly more than one instance runs your sites.

    If a cloud vendor offers its services in the form of VMs, you won't gain much over having a VM at some ISP: the main problems you have to work around are still there. When you spin up more than one VM, your application must be completely stateless at any moment, including the DB subsystem, because what's in memory in instance 1 might not be in memory in instance 2. This might sound trivial, but it's not. A lot of the websites out there started rather small: they were perfectly runnable on a single machine with normal memory and CPU power. After all, you don't need a big machine to run a website with even thousands of users a day. Moving these sites to a multi-VM environment causes a problem: all the in-memory state they use, all the multi-page transitions they use while keeping state across the transition - they can't do that anymore the way they did on a single machine. State is something of the past; you have to store every byte of state in a DB, a viewstate or a cookie somewhere, so that with the next request all state information is available through the request, as nothing is kept in memory.

    Our example uses a bunch of files in a file system. Using multiple VMs will require that these files move to a cloud storage system which is mounted in each VM, so we don't have to store the files on each VM. This might require different file paths, but that change should be minor. What's perhaps less minor is the maintenance procedure on the new type of cloud storage used: instead of ftp-ing into a VM, you might have to update the files using different ways/tools.

    All in all this makes moving an existing website, written for an environment based around a VM (namely .NET with its CLR), overly cumbersome and problematic: it forces you to refactor your website system to be usable 'in the cloud', which is caused by the limited way in which e.g. Windows Azure offers its cloud services: in blocks of VMs.

    Offer a scalable, flexible VM which extends with my needs

    Instead, cloud vendors should offer me simply one VM. On that VM I run the websites and store my DB and my files. As it's a virtual machine, how this machine is actually run on physical hardware (e.g. partitioned) I don't care about; that's the problem for the cloud vendor to solve. If I need more resources, e.g. I have more traffic to my server, way more visitors per day, the VM stretches, as if I had bought a bigger box. This frees me from the problem which comes with multiple VMs: I don't have any refactoring to do at all. I can simply build my website as if it runs on my local hardware server, upload it to the VM offered by the cloud vendor, install it on the VM, and I'm done.

    "But that might require changes to Windows!" Yes, but Microsoft is Windows. Windows Azure is their service; they can make whatever change to what they offer to make it look like it's Windows. Yet they're stuck, like Amazon, in thinking in VMs, which forces developers to 'think ahead' and gamble on whether they will need to migrate to a cloud with multiple VMs in the future or not. Which comes down to: gamble on whether they should invest time in code/architecture which they might never need. (YAGNI anyone?) So the VM we're talking about: is that a low-level VM which runs a guest OS, or is that VM a different kind of VM?

    The flexible VM: .NET's CLR?

    My example websites are ASP.NET based, which means they run inside a .NET appdomain, on the .NET CLR, which is a VM. The only physical OS resource the sites need is the file system, but this too is accessed through .NET. In short: all the websites see is what .NET allows them to see; the world as the websites know it is what .NET shows them and lets them access. How the .NET appdomain is run physically is the concern of .NET, not mine.

    This begs the question why Windows Azure doesn't offer virtual appdomains, or better, .NET environments which look like one machine but could physically be multiple machines. In such an environment, no change has to be made to the websites to migrate them from a local machine or own server to the cloud to get proper scaling: the .NET VM will simply scale with the need: more memory needed, more CPU power needed, it stretches. What it offers to the application running inside the appdomain simply increases, but is not fragmented: all resources are available to the application. This means that the problem of how to scale is back where it should be: with the cloud vendor.

    "Yeah, great, but what about the databases?" The .NET application communicates with the database server through an ADO.NET provider. Where the database is located is not a problem of the appdomain: the ADO.NET provider has to solve that. In other words: we can host the databases in an environment which presents itself as a single resource, accessible through one connection string without replication overhead on the outside, and use that environment inside the .NET VM as if it were a single DB.

    But what about memory replication and other problems? This environment isn't simple, at least not for the cloud vendor. But it is simple for the customer who wants to run his sites in that cloud: no work needed, no refactoring of existing code. Upload it, run it.

    Perhaps I'm dreaming and what I described above isn't possible. Yet, I think if cloud vendors don't move in that direction, what they're offering isn't interesting: it doesn't solve a problem at all. It simply offers a way to instantiate more VMs with the guest OS of choice, at the cost of me needing to refactor my website code so it can run in the straitjacket form factor dictated by the cloud vendor.

    Let's not kid ourselves here: most of us developers will never build a website which needs a truckload of VMs to run; almost all websites created by developers can run on just a few VMs at most. Yet the most expensive change is right at the start: moving from one to two VMs. As soon as you have refactored your website code to run across multiple VMs, adding another one is just as easy as clicking a mouse button. But that first step, that's the problem here, and as it sits right at the beginning of scaling the website, it's particularly strange that cloud vendors refuse to solve that problem and leave it to the developers. Which makes migrating 'to the cloud' particularly expensive.

    Read the article

  • T420 Triple Head with Optimus

    - by Rolo
    I see that this is apparently possible on a T520 (Triple-head on a Lenovo T520), but I can't find anyone claiming it's possible on a T420. I'm running 12.04 and have Bumblebee installed and working fine, but I can't get the DisplayPort monitor to display anything. The power light flicks on, but things only render on my VGA-output monitor, and Ubuntu's display settings don't detect the third monitor. I'm not concerned with power management, i.e. I'm happy to leave the BIOS set to discrete graphics if that helps. Is this possible? Thanks.

    Read the article

  • Rendering a DOM across multiple displays

    - by meetamit
    I'm building a data-driven animation with HTML and JavaScript to run in a web browser. I would like to display it tiled across three 1080p monitors. This essentially yields a viewport that's 5760px wide and 1080px tall; pretty large. Does anyone have experience setting up something like this? I have many questions below, but any tip would be appreciated:

    1. Is it reasonable to expect a DOM to render into such a large viewport size at close to 60fps?

    2. I might choose to use canvas instead of SVG or HTML, but that would yield a giant canvas. Can a canvas with such high resolution be performant? Of course everything depends on the complexity of the graphics I want to render, but I'm looking to remove that factor from this question, so assume I'm asking about a canvas animation that can run at 60fps at 1920x1080 resolution. Would it run roughly as fast at 3 times the width?

    3. Would three.js and WebGL be a more proper approach at that resolution?

    4. How do you actually cause Chrome or FF to span 3 monitors at full screen? Do I need a 3rd-party solution of any kind?

    Thanks!

    Read the article
