Search Results

Search found 14531 results on 582 pages for 'proxy pass'.


  • EJB Persist On Master Child Relationship

    - by deepak.siddappa(at)oracle.com
    Let us take a scenario where a user wants to persist a master-child relationship. We have two tables, Dept and Emp (from the Scott schema), which have a master-child relationship.

    Model Diagram: In the model diagram, Dept is the master table, Emp is the child table, and Dept is related to Emp by a one-to-n relationship.

    Let's assume we need to make new entries in the Emp table using the EJB persist method. Create an Emp form manually by dropping the fields, where deptno (a foreign key in the Emp table) is dropped as Single Selection -> ADF Select One Choice from the deptFindAll data control. Make sure to bind all field variables in the backing bean.

    Employee Form: Once the Emp form is created, committing the record with the persistEmp() method will persist all the Emp fields into the Emp table except deptno, because deptno is passed as an object reference in the persistEmp method (it is a foreign key reference). So deptno can't be passed to persistEmp directly; instead, deptno should be set explicitly on the Emp object, and then persist will save the deptno to the Emp table.

    The solution below is one workaround for this scenario. Create a method in the session bean for adding Emp records and expose this method in the data control. In the code below, em is an EntityManager member variable of sessionEJBBean:

        private EntityManager em;

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // setting the deptno explicitly
            Dept dept = new Dept();
            dept.setDeptno(deptno);
            // passing the dept object
            emp.setDept(dept);
            // persist the emp object data to the Emp table
            em.persist(emp);
        }

    From the Data Control palette, drop addEmpRecord as an ADF method button. In the Edit Action Binding window, enter the parameter values that are bound in the backing bean. For example, if the deptno text field is bound to the "deptno" variable in the backing bean, then in the EL Expression Builder pass the value as "#{backingbean.deptno.value}".

    Binding:
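
    A lighter alternative worth noting (not from the original post, but standard JPA): EntityManager.getReference can supply a managed proxy for the existing Dept row, so no transient Dept needs to be instantiated just to carry the key. A minimal sketch, assuming the same Emp/Dept entities and the em variable from above:

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // getReference does not hit the database; it returns a lazy proxy
            // carrying the primary key, which is all the FK column needs.
            emp.setDept(em.getReference(Dept.class, deptno));
            em.persist(emp);
        }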

    Read the article

  • How to get initial API right using TDD?

    - by Vytautas Mackonis
    This might be a rather silly question, as I am making my first attempts at TDD. I loved the sense of confidence it brings and the generally better structure of my code, but when I started to apply it to something bigger than one-class toy examples, I ran into difficulties. Suppose you are writing a library of sorts. You know what it has to do, and you know the general way it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to transform a private method into a strategy pattern (and now need to pass a mocked strategy in your tests); perhaps you misplaced a responsibility here and there and need to split an existing class. When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on a method whose signature (and, for that matter, behavior) changed? That is a lot of tests to change once they add up.

    Read the article

  • OpenJDK 6 B24 Available

    - by user9158633
    On November 16, 2011 the source bundle for OpenJDK 6 b24 was published at http://download.java.net/openjdk/jdk6/. The main changes in b24 are the latest round of security updates (e.g. the security changes in the jdk repo) and a few other fixes. For more information see the detailed list of all the changes in OpenJDK 6 b24.

    Test Results: All the jdk regression tests run with make test passed.

        cd jdk6
        make
        make test

    Per Kelly's b23 release blog, the new process is: all the jdk regression tests run with make test should just pass. Over time we will fix the tests that have been excluded, possibly add more tests, and exclude tests that fail to demonstrate stability (with a bug filed against the test). For the current list of excluded tests see the jdk6/jdk/test/ProblemList.txt file: ProblemList.html in b24 | latest ProblemList.txt (in the tip revision).

    Special thanks to Kelly O'Hair for his direction and Dave Katleman for his Release Engineering work.

    Read the article

  • How to refactor and improve this XNA mouse input code?

    - by Andrew Price
    Currently I have something like this:

        public bool IsLeftMouseButtonDown() {
            return currentMouseState.LeftButton == ButtonState.Pressed
                && previousMouseState.LeftButton == ButtonState.Pressed;
        }

        public bool IsLeftMouseButtonPressed() {
            return currentMouseState.LeftButton == ButtonState.Pressed
                && previousMouseState.LeftButton == ButtonState.Released;
        }

        public bool IsLeftMouseButtonUp() {
            return currentMouseState.LeftButton == ButtonState.Released
                && previousMouseState.LeftButton == ButtonState.Released;
        }

        public bool IsLeftMouseButtonReleased() {
            return currentMouseState.LeftButton == ButtonState.Released
                && previousMouseState.LeftButton == ButtonState.Pressed;
        }

    This is fine. In fact, I kind of like it. However, I'd hate to repeat this same code five times (for right, middle, X1, X2). Is there any way to pass the button I want into the function, so I could have something like this?

        public bool IsMouseButtonDown(MouseButton button) {
            return currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonPressed(MouseButton button) {
            return currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonUp(MouseButton button) {
            return !currentMouseState.IsPressed(button) && !previousMouseState.IsPressed(button);
        }

        public bool IsMouseButtonReleased(MouseButton button) {
            return !currentMouseState.IsPressed(button) && previousMouseState.IsPressed(button);
        }

    I suppose I could create some custom enumeration and switch through it in each function, but I'd first like to see if there is a built-in solution or a better way. Thanks!
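
    The pattern being asked about is language-neutral: give each enum member a selector that knows how to read its own button out of a state snapshot. A minimal sketch in Java (all types are hypothetical stand-ins; in C#/XNA the selector would be a Func<MouseState, ButtonState> keyed the same way):

        import java.util.function.Predicate;

        // Hypothetical stand-in for the framework's per-frame mouse snapshot.
        record MouseState(boolean left, boolean middle, boolean right) {}

        // Each member carries the accessor for its own flag.
        enum MouseButton {
            LEFT(MouseState::left), MIDDLE(MouseState::middle), RIGHT(MouseState::right);

            private final Predicate<MouseState> pressed;
            MouseButton(Predicate<MouseState> pressed) { this.pressed = pressed; }
            boolean in(MouseState s) { return pressed.test(s); }
        }

        class MouseInput {
            private MouseState current, previous;

            boolean isDown(MouseButton b)     { return b.in(current) && b.in(previous); }
            boolean isPressed(MouseButton b)  { return b.in(current) && !b.in(previous); }
            boolean isUp(MouseButton b)       { return !b.in(current) && !b.in(previous); }
            boolean isReleased(MouseButton b) { return !b.in(current) && b.in(previous); }
        }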

    Read the article

  • Mutating Programming Language?

    - by MattiasK
    For fun I was thinking about how one could build a programming language that differs from OOP, and came up with this concept. I don't have a strong foundation in computer science, so it might be commonplace without me knowing it (more likely it's just a stupid idea :). I apologize in advance for this somewhat rambling question :) Anyway, here goes:

    In normal OOP, methods and classes vary only with their parameters, meaning that if two different classes/methods call the same method, they get the same output. My (perhaps crazy) idea is that the calling method and class could be an "invisible" part of a method's signature, and the response could vary depending on who calls the method. Say that we have a Window object with a Break() method. Now anyone (who has access) could call this method on Window with the same result. Now say that we have two different objects, Hammer and SledgeHammer. If Break() needs to produce different results based on these, we'd pass them as parameters:

        Break(IBluntObject bluntObject)

    With a mutating programming language (MPL), the objects operating on the method would be visible to the Break method without being explicitly defined, and it could adapt itself based on them. So if SledgeHammer calls Window.Break(), it would generate vastly different results than if Hammer did so. If OOP classes are black boxes, then MPL classes are black boxes that know who's (trying to) push their buttons and can adapt accordingly. You could also have different permission sets on methods depending on who's calling them, rather than absolute permissions like public and private.

    Does this have any advantage over OOP? Or perhaps I should say, would it add anything to it, since you should be able to simply add this aspect to methods (just give access to a CallingMethod and CallingClass variable in context)? I'm not sure; it might be too hard to wrap one's head around. It would be kind of interesting to have classes that adapted themselves to whoever uses them, though. Still, it's an interesting concept. What do you think: is it viable?
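
    Interestingly, mainstream runtimes already expose a limited form of this. In Java, for instance, StackWalker lets a method discover its immediate caller, so behavior can vary by caller today. A minimal sketch of the Window/SledgeHammer example on that basis (my code, purely illustrative; breakGlass stands in for Break, since break is a reserved word in Java):

        import java.lang.StackWalker.Option;

        class Window {
            private static final StackWalker WALKER =
                    StackWalker.getInstance(Option.RETAIN_CLASS_REFERENCE);

            void breakGlass() {
                // The class of whoever invoked breakGlass() becomes part of
                // the method's effective input, as the question proposes.
                Class<?> caller = WALKER.getCallerClass();
                if (caller.getSimpleName().equals("SledgeHammer")) {
                    System.out.println("The window shatters completely.");
                } else {
                    System.out.println("The window merely cracks.");
                }
            }
        }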

    Read the article

  • Node.JS testing with Jasmine, databases, and pre-existing code

    - by Jim Rubenstein
    I've recently built the start of a core system which is likely going to turn into a monster product. I'm building the system with node.js, and decided after I got a small base built that it'd be a great idea to start using some sort of automated test suite to test the application. I decided to use Jasmine, as it seems pretty solid and has a lot of features for stubbing, spying, and mocking methods and classes. The application has a lot of external data stores and API access (kestrel, MySQL, MongoDB, Facebook, and more). My issue is, I've got a good amount of code written that I want to start testing, as it represents the underpinnings of the application. What are the best practices for testing methods/classes that access external APIs that I may or may not have control over? As an example, I have a data structure that fetches a bunch of data from a MySQL database. I want to test the method that retrieves the data, and I'm not sure how to go about it. I could test the fetch method, which is supposed to return an array of objects, but to isolate the method from the database I need to define my own fixture data. So what I end up doing is stubbing the MySQL execution and returning a static dataset; in other words, I end up writing a function that returns the dataset that makes my test pass. That doesn't seem to actually test the code, other than verifying that a method is being called. I know this is kind of abstract and vague; the idea of testing seems very abstract itself, though, so hopefully someone has some experience and can guide me in the right direction. Any advice or reading I can do is more than welcome. Thanks in advance.

    Read the article

  • What are the boundaries of the product owner in scrum?

    - by Saeed Neamati
    In another question, I asked why I feel Scrum turns active developers into passive developers, and it seems that the overall problem is not inherent to Scrum, but rather related to a bad implementation of Scrum. So, here I have some questions about the scope of the responsibilities of the PO (product owner) and the limits he/she shouldn't pass.

    Should the PO interfere in UI design when there are designers at work in the Scrum team? (An example which has happened to us: replacing checkboxes with a drop-down list with two items, namely yes and no; or making some boxes larger; or left-aligning some content instead of centering it on the page; and so on.) If so, to what extent? Colors? Layout?

    Should the PO interfere in the design and architecture of the code? This hasn't happened to us yet, but I'm really curious about the boundaries. For example, does the PO have the right to change the platform (moving from ASP.NET MVC to PHP, or something like that), or to choose the number of servers (tier architecture), etc.?

    Should the PO interfere in validation mechanisms? For example: this field should be required, or we don't need to get this piece of information from the user. Sometimes analysts and designers confirm that something can be handled behind the scenes, like extracting the user profile info from another source instead of asking for it in the UI.

    How granular should the PO get in the analysis and design? For example, a user story might be: "As a customer, I'd like to be able to buy new domains online". However, the Scrum team can implement this user story as a five-step wizard or as one single page. To what level should the PO monitor, govern, or supervise the technical analysis, design, and implementation?

    I ask these questions to judge whether our implementation is right or wrong.

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem

    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information, and decided I'd post it here for others to enjoy.

    Background

    The Get-SPShellAdmin cmdlet returns a listing of SPShellAdmins for the database Id you pass in, or for the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases, specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey, Scripting Guy! blog regarding granting SPShellAdmin access.

    Solution

    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

        # declare a hashtable to store results
        $results = @{}

        # fetch databases (only configuration and content DBs are needed)
        $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

        # for each database get spshelladmins and add db name and username to result
        $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

        # sort results by db name and pipe to table with auto sizing of col width
        $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion

    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funny enough, it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify it as needed; just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!

    -Frog Out

    Links

    PowerShell Hashtables
    http://technet.microsoft.com/en-us/library/ee692803.aspx

    SPShellAdmin Access Explained
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article

  • Thread safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and have always accepted that GUI interactions had to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions had to be called from gtk_main. Similarly, I looked at SWT to see in what ways it differs from Swing and whether it was worth using, and again found the UI-thread idea; and I am sure that these three are not the only toolkits to use this model. I was wondering if there is a reason for this design, i.e. what the reason is for keeping UI modifications isolated to a single thread. I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns are passed on to the user of the API. Is there a limit imposed by an operating system? Is there a good reason these concerns are not 'hidden' (i.e. some form of synchronization that is invisible to the user)? Is there any (even purely conceptual) way of creating a thread-safe graphics library, or is such a thing actually impossible? I found this http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness which seems to describe GTK differently from how I understood it (although my understanding was the same as many people's). How does this differ from other toolkits? Is it possible to implement this in Swing (as the EDT model does not actually prevent access from other threads, it just often leads to Exceptions)?
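
    For reference on the Swing side of the question: the conventional answer is that the toolkit does not hide the synchronization, so callers marshal component access onto the EDT themselves. A minimal sketch of the usual idiom (mine, not the asker's):

        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        class StatusUpdater {
            private final JLabel label;

            StatusUpdater(JLabel label) { this.label = label; }

            // Safe to call from any thread: the actual mutation of the
            // component is queued onto the Event Dispatch Thread.
            void setStatus(String text) {
                if (SwingUtilities.isEventDispatchThread()) {
                    label.setText(text);
                } else {
                    SwingUtilities.invokeLater(() -> label.setText(text));
                }
            }
        }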

    Read the article

  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but sometimes also contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions:

    First, to me it seems more logical to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method; this class lacks some technical documentation; and so on. With one backlog, developers always have to go via the product owner to have their topics added to the backlog, which seems like additional, unnecessary work for the product owner.

    Second, if you have a product owner who focuses only on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always give problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the end of the list because "they don't serve the customer directly". With a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally but also keep them healthy inside.

    What is the best approach? One backlog or two?

    Read the article

  • Writing to a D3DFMT_R32F render target clamps to 1

    - by Mike
    I'm currently implementing a picking system. I render some objects into a frame buffer whose render target has the D3DFMT_R32F format. For each mesh, I set an integer constant evaluator, which is its material index. My shader is simple: I output the position of each vertex, and for each pixel I cast the material index to float and assign this value to the red channel:

        int ObjectIndex;
        float4x4 WvpXf : WorldViewProjection< string UIWidget = "None"; >;

        struct VS_INPUT {
            float3 Position : POSITION;
        };

        struct VS_OUTPUT {
            float4 Position : POSITION;
        };

        struct PS_OUTPUT {
            float4 Color : COLOR0;
        };

        VS_OUTPUT VSMain( const VS_INPUT input )
        {
            VS_OUTPUT output = (VS_OUTPUT)0;
            output.Position = mul( float4(input.Position, 1), WvpXf );
            return output;
        }

        PS_OUTPUT PSMain( const VS_OUTPUT input, in float2 vpos : VPOS )
        {
            PS_OUTPUT output = (PS_OUTPUT)0;
            output.Color.r = float( ObjectIndex );
            output.Color.gba = 0.0f;
            return output;
        }

        technique Default
        {
            pass P0
            {
                VertexShader = compile vs_3_0 VSMain();
                PixelShader = compile ps_3_0 PSMain();
            }
        }

    The problem I have is that somehow the values written to the render target are clamped between 0.0f and 1.0f. I've tried to change the render target format, but I always get clamped values... I don't know what the root of the problem is. For information: I have a depth render target attached to the frame buffer; blending is disabled in the render state; the stencil is disabled. Any ideas?

    Read the article

  • Pursuing violators of software license/copyright

    - by Dmitry Brant
    I've recently discovered a seller on eBay who is selling CDs with my (trialware) software on them. The seller is clearly trying to pass the software off as his own; he has copied all the verbiage from my software's website except its actual name. This seller also sells a whole bunch of other CDs with free software for which he misrepresents authorship. For example, this listing contains screenshots that are obviously of the free program InfraRecorder; however, the name InfraRecorder and its authors aren't mentioned anywhere. Before I splurge on official legal assistance, does the community have any recommendations or past experiences with these kinds of matters? What's the best way to proceed, and at the very least, to have the eBay listings taken down? Is it possible to reclaim the earnings from the sales of these CDs (not just for me, but for the other authors of the free software that this person is selling)? I realize that GPL'd software doesn't have any restrictions on "selling" the software, but this person has gone to great lengths to obfuscate the software's authorship, which is surely a violation of the license. (My software is not GPL; it's under a custom license, and it does not permit redistribution of any kind without permission.)

    Read the article

  • At which point is a continuous integration server interesting?

    - by Cedric Martin
    I've been reading a bit about CI servers like Jenkins and I'm wondering: at what point are they useful? Surely for a tiny project where you'd have only 5 classes and 10 unit tests, there's no real need. Here we've got about 1500 unit tests and they pass (on old Core 2 Duo workstations) in about 90 seconds (because they're really testing "units" and hence are very fast). The rule we have is that we cannot commit code when a test fails. So each developer launches all of his tests to prevent regression. Obviously, because all the developers always launch all the tests, we catch errors due to conflicting changes as soon as one developer pulls the changes of another (if any). It's still not very clear to me: should I set up a CI server like Jenkins? What would it bring? Is it just useful for the speed gain? (Not an issue in our case.) Is it useful because old builds can be recreated? (But we can do this too with Mercurial, by checking out old revisions.) Basically I understand it can be useful, but I fail to see exactly why. Any explanation taking into account the points I raised above would be most welcome.

    Read the article

  • SSIS Denali as part of “Enterprise Information Management”

    - by jorg
    When watching the SQL PASS session "What's Coming Next in SSIS?" by Steve Swartz, the Group Program Manager for the SSIS team, an interesting question came up: Why is SSIS thought of as BI, when we use it so frequently for other sorts of data problems? Steve's answer was that he breaks the world of data work into three parts: processing of inputs; BI; and Enterprise Information Management (EIM). EIM is all the work you have to do, when you have a lot of data, to make it useful and clean and get it to the right place; it covers master data management, data quality work, data integration, and lineage analysis to keep track of where the data came from. Next, Steve said that Microsoft is developing SSIS as part of a large push in all of these areas in the next release of SQL Server. So in the next release of SQL Server, SSIS will be not only a BI tool but also part of Enterprise Information Management. I'm interested in the different ways people use SSIS; I've basically used it for ETL, data migrations, and processing inputs. In which ways do you use SSIS?

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost full understanding of how 2D lighting works. I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them, either by just drawing one on top of another, or by passing the three textures to a shader and getting a better way to combine them. It is working almost as planned, but I have one question. I'm drawing each layer this way:

        GraphicsDevice.SetRenderTarget(lighting);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.None,
            RasterizerState.CullNone, lightingShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

        GraphicsDevice.SetRenderTarget(darkMask);
        GraphicsDevice.Clear(Color.Transparent);
        // ... set up shader
        SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.None,
            RasterizerState.CullNone, darkMaskShader);
        SpriteBatch.Draw(texture, fullscreen, Color.White);
        SpriteBatch.End();

    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices, light position, color and range, etc.), generate a texture meant to be that layer. It works fine, but I'm not sure drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and parameters. In conclusion: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?

    Read the article

  • Matrix Multiplication with C++ AMP

    - by Daniel Moth
    As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

    The code for serial and accelerated

    Consider a naïve (or brute force) serial implementation of matrix multiplication:

         0: void MatrixMultiplySerial(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
         1: {
         2:   for (int row = 0; row < M; row++)
         3:   {
         4:     for (int col = 0; col < N; col++)
         5:     {
         6:       float sum = 0.0f;
         7:       for(int i = 0; i < W; i++)
         8:         sum += vA[row * W + i] * vB[i * N + col];
         9:       vC[row * N + col] = sum;
        10:     }
        11:   }
        12: }

    We notice that each loop iteration is independent from the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you included at the top of your file: #include <amp.h>

        13: void MatrixMultiplySimple(std::vector<float>& vC, const std::vector<float>& vA, const std::vector<float>& vB, int M, int N, int W)
        14: {
        15:   concurrency::array_view<const float,2> a(M, W, vA);
        16:   concurrency::array_view<const float,2> b(W, N, vB);
        17:   concurrency::array_view<concurrency::writeonly<float>,2> c(M, N, vC);
        18:   concurrency::parallel_for_each(c.grid,
        19:     [=](concurrency::index<2> idx) restrict(direct3d) {
        20:       int row = idx[0]; int col = idx[1];
        21:       float sum = 0.0f;
        22:       for(int i = 0; i < W; i++)
        23:         sum += a(row, i) * b(i, col);
        24:       c[idx] = sum;
        25:     });
        26: }

    First a visual comparison, just for fun: the beginning and end are the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper.

    Using array_view and extent

    When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(direct3d) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18), and the data will get copied on demand for us to the accelerator. Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

        extent<2> e(M, W);
        array_view<const float, 2> a(e, vA);

    In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two-dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e. with the integers M and W) – aren't you loving C++ AMP already? Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, only copy it back.

    The kernel dispatch

    On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel. The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c, which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner, we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (a grid) since the array_view object helpfully exposes that as a property. Note that instead of c.grid we could have written grid<2>(c.extent) or grid<2>(extent<2>(M, N)) – the result is the same in that we have specified M*N threads to execute our lambda. The second argument is a restrict(direct3d) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional, and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

    The kernel itself

    The lambda body (lines 20-24), or as some may say the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c). The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first-class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner). I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k or x,y,z, or M,N, or whatever. Also note that we could have written line 24 as c(idx[0], idx[1]) = sum or c(row, col) = sum instead of the simpler c[idx] = sum.

    Targeting a specific accelerator

    Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. So there would be some code like this anywhere before line 18:

        vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
        accelerator acc = accs[0];

    ...and then we would modify line 18 so that we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

        concurrency::parallel_for_each(acc.default_view, c.grid,

    ...and the rest of your code remains the same... how simple is that?

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Canon MG6100 series USB Printer not mounting

    - by user35201
    The MP6150 printer driver installed itself upon plugging in the printer. The printer is recognized (lsusb shows it) but does not mount. If the printer is recognized, the driver must be working (or?), but something is blocking the system from mounting the printer. I tried the usual things: powering the printer off and on, restarting Ubuntu, etc. Listed below are the results of lsusb and fstab:

        hans@kontor-linux:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 004: ID 04a9:174a Canon, Inc.
        Bus 002 Device 002: ID 1058:1001 Western Digital Technologies, Inc. External Hard Disk [Elements]
        Bus 004 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser

        hans@kontor-linux:~$ sudo cat /etc/fstab
        [sudo] password for hans:
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda6 during installation
        UUID=eaf3b38d-1c81-4de9-98d4-3834d674ff6e / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=93a667d3-6132-45b5-ad51-1f8a46c5b437 none swap sw 0 0

    Read the article

  • How to manage focus for a small set of simple widgets

    - by Christoph
    I'm developing a set of simple widgets for a small (128x128) display. For example, I'd like to have a main screen with an overlay menu which I can use to toggle visibility of main-screen elements. Each option would be an icon with a box around it while it is selected. Button (left, up, right, down, enter) events should be given to the widget that has "focus". Focus is a simple thing to understand when using a GUI, but I'm having trouble implementing it. Can you suggest a simple concept for managing focus and input events? I have these simple ideas:

    - Only one widget can have focus, so I need a single pointer to that widget. When this widget gets some kind of "cycle" input (as in "highlight the next item in this list"), the focus is given to a different widget.
    - A widget must have a way of telling the application which widget the focus is given to next.
    - If a widget cannot give a "next focus" hint, the application must be able to figure out where the focus should go.

    Widgets are currently structured like this:

    - A widget can have a parent, which is passed to the constructor.
    - Widgets are created statically, as I want to avoid dynamic memory allocation (I only have 16kB of RAM and I'd like to have control over that).
    - Widgets have siblings, implemented as an intrusive linked list (they have a next member). A parent has a pointer to the head of its list of children.
    - Input events are arguments to a widget's buttonEvent method, which can accept or ignore the event. If it ignores the event, it can pass the event to its parent.

    My questions: How can I manage focus for these widgets? Am I making this too complicated?
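
    One way the pieces above can fit together (a sketch of the concept only; the names are invented, and it is written in Java for brevity, though the same shape maps onto C structs and function pointers):

        abstract class Widget {
            Widget parent;   // passed to the constructor, as in the question
            Widget next;     // intrusive sibling list

            // Return true if the event was consumed; unconsumed events bubble up.
            boolean buttonEvent(int button) {
                return parent != null && parent.buttonEvent(button);
            }

            // The widget's own hint for where focus should go; null = "don't know".
            Widget nextFocus() { return next; }
        }

        final class FocusManager {
            private Widget focused;   // the single focus pointer

            void setFocus(Widget w) { focused = w; }

            void dispatch(int button) {
                if (focused == null || focused.buttonEvent(button)) return;
                // The focused widget (and its ancestors) ignored the event, so
                // treat "cycle" input here: follow the widget's hint if it has one.
                Widget next = focused.nextFocus();
                if (next != null) focused = next;
            }
        }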

    Read the article

  • Sending changes to a terrain heightmap over UDP

    - by Floomi
    This is a more conceptual, thinking-out-loud question than a technical one. I have a 3D heightmapped terrain as part of a multiplayer RTS that I would like to terraform over a network. The terraforming will be done by units within the gameworld; the player will paint a "target heightmap" that they'd like the current terrain to resemble and units will deform towards that on their own (a la Perimeter). Given my terrain is 257x257 vertices, the naive approach of sending heights when they change will flood the bandwidth very quickly - updating a quarter of the terrain every second will hit ~66kB/s. This is clearly way too much. My next thought was to move to a brush-based system, where you send e.g. the centre of a circle, its radius, and some function defining the influence of the brush from the centre going outwards. But even with reliable UDP the "start" and "stop" messages could still be delayed. I guess I could compare timestamps and compensate for this, although it'd likely mean that clients would deform verts too much on their local simulations and then have to smooth them back to the correct heights. I could also send absolute vert heights in the "start" and "stop" messages to guarantee correct data on the clients. Alternatively I could treat brushes in a similar way to units, and do the standard position + velocity + client-side prediction jazz on them, with the added stipulation that they deform terrain within a certain radius around them. The server could then intermittently do a pass and send (a subset of) recently updated verts to clients as and when there's bandwidth to spare. Any other suggestions, or indications that I'm on the right (or wrong!) track with any of these ideas would be greatly appreciated.
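
    To make the brush idea concrete, here is a minimal sketch (my own; every field name is invented) of the fixed-size datagram such a scheme might send. At 20 bytes per stroke update this is a tiny fraction of the naive per-vertex traffic, and the timestamp gives clients something to rewind against when a start or stop arrives late:

        import java.nio.ByteBuffer;

        // Hypothetical wire format for one brush update.
        final class BrushUpdate {
            float centerX, centerZ;  // brush centre on the heightmap
            float radius;            // influence radius in world units
            float strength;          // height change per second at the centre
            int   serverTimeMs;      // lets clients rewind/replay late packets

            byte[] encode() {
                ByteBuffer buf = ByteBuffer.allocate(20);
                buf.putFloat(centerX).putFloat(centerZ)
                   .putFloat(radius).putFloat(strength)
                   .putInt(serverTimeMs);
                return buf.array();
            }

            static BrushUpdate decode(byte[] data) {
                ByteBuffer buf = ByteBuffer.wrap(data);
                BrushUpdate u = new BrushUpdate();
                u.centerX = buf.getFloat();
                u.centerZ = buf.getFloat();
                u.radius = buf.getFloat();
                u.strength = buf.getFloat();
                u.serverTimeMs = buf.getInt();
                return u;
            }
        }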

    Read the article

  • Heading to GTC 2010

    - by Daniel Moth
    Next week the GPU Technology Conference (GTC) 2010 takes place in San Jose, CA, and I am lucky enough to be attending the entire week. It has been an extremely long time (in fact, I can't remember the last time) since I was registered as an attendee at a conference (full pass/access) without being a speaker *and* without having any booth duty! Having said that, we (our team at Microsoft) will be running GPU debugging UX studies throughout the entire week (similar to what I had previously advertised). If you are attending GTC 2010 and you are interested, look for the related flyer in your conference bag. The conference is an excellent opportunity to connect in person with various individuals that I have only met virtually. From an educational perspective there is a very long and interesting session list, with multiple concurrent slots, making it very hard to choose between them, but I have managed to create my (packed) schedule. I am most looking forward to the sessions on programming languages and tools, both from Microsoft and from MS partners. For full conference details, visit the GTC 2010 official page. Comments about this post welcome at the original blog.

    Read the article

  • Pattern for a class that does only one thing

    - by Heinzi
    Let's say I have a procedure that does stuff:

        void doStuff(initialParams) { ... }

    Now I discover that "doing stuff" is quite a complex operation. The procedure becomes large, I split it up into multiple smaller procedures, and soon I realize that having some kind of state would be useful while doing stuff, so that I need to pass fewer parameters between the small procedures. So, I factor it out into its own class:

        class StuffDoer {
            private someInternalState;

            public Start(initialParams) { ... }

            // some private helper procedures here
        }

    And then I call it like this:

        new StuffDoer().Start(initialParams);

    or like this:

        new StuffDoer(initialParams).Start();

    And this is what feels wrong. When using the .NET or Java API, I almost never call new SomeApiClass().Start(...);, which makes me suspect that I'm doing it wrong. Sure, I could make StuffDoer's constructor private and add a static helper method:

        public static DoStuff(initialParams) {
            new StuffDoer().Start(initialParams);
        }

    But then I'd have a class whose external interface consists of only one static method, which also feels weird. Hence my question: Is there a well-established pattern for this type of class, which has only one entry point and no "externally recognizable" state, i.e., where instance state is only required during execution of that one entry point?
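
    For what it's worth, this shape is essentially Fowler's "Replace Method with Method Object" refactoring, and a common resolution is exactly the static helper from the question: make the constructor private so the instance (and its state) is invisible, and let the static method be the entire public surface. A minimal Java sketch (names hypothetical):

        public final class StuffDoer {
            private int someInternalState;   // lives only for one run

            private StuffDoer() { }          // no externally visible instances

            // The entire public surface: construct, run, discard.
            public static void doStuff(String initialParams) {
                new StuffDoer().run(initialParams);
            }

            private void run(String initialParams) {
                someInternalState = initialParams.length();  // placeholder work
                stepOne();
                // ... further private helpers share state via the fields ...
            }

            private void stepOne() { someInternalState++; }
        }

    Callers only ever see StuffDoer.doStuff(params), so the single-use instance stays an implementation detail rather than an awkward public API.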

    Read the article

  • Prevent oversteering catastrophe in racing games

    - by jdm
    When playing GTA III on Android I noticed something that has been annoying me in almost every racing game I've played (except maybe Mario Kart): driving straight ahead is easy, but curves are really hard. When I switch lanes or pass somebody, the car starts swiveling back and forth, and any attempt to correct it only makes it worse. The only thing I can do is hit the brakes. I think this is some kind of oversteering. What makes it so irritating is that it never happens to me in real life (thank god :-)), so 90% of the games with vehicles in them feel unreal to me (despite probably having really good physics engines). I've talked to a couple of people about this, and it seems either you 'get' racing games or you don't. With a lot of practice, I did manage to get semi-good at some games (e.g. from the Need for Speed series) by driving very cautiously, braking a lot (and usually getting a cramp in my fingers). What can you do as a game developer to prevent the oversteering resonance catastrophe and make driving feel right? (For a casual racing game that doesn't strive for 100% realistic physics.) I also wonder what exactly games like Super Mario Kart do differently so that they don't have so much oversteering. I guess one problem is that if you play with a keyboard or a touchscreen (but not wheels and pedals), you only have digital input: gas pressed or not, steering left/right or not, and it's much harder to steer appropriately for a given speed. The other thing is that you probably don't have a good sense of speed, and drive much faster than you would (safely) in reality. Off the top of my head, one solution might be to vary the steering response with speed.
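
    That last suggestion is cheap to prototype. A minimal sketch (my own numbers, purely illustrative) that scales digital steering input down as speed rises:

        // Illustrative only: full steering authority at rest, falling off
        // smoothly with speed (about 1/3 of full lock at 200 km/h with k = 0.01).
        static float steeringScale(float speedKmh) {
            final float k = 0.01f;   // tuning constant: higher = calmer at speed
            return 1.0f / (1.0f + k * speedKmh);
        }

        // Applied per frame: angle = maxSteerAngle * digitalInput * steeringScale(v);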

    Read the article

  • Oracle Kicks Summer off Right with Our FY14 Global Partner Kickoff

    - by Kristin Rose
    Are you ready to see what Oracle has in store for FY14? How about hearing from top Oracle executives like Mark Hurd and Thomas Kurian as they explain how you can make more money with Oracle? Are you reading this blog? If you answered yes to any of the above questions and want to find out how Oracle's strategy for the next fiscal year will affect your business, don't miss Oracle's FY14 Global Partner Kickoff event taking place Tuesday, June 25th at 9:00 AM PT. Kick-start your summer by participating in this live Oracle PartnerNetwork event! Here is how you can:

    1. Send questions during the live event via Twitter using @oraclepartners and #OPN in your tweets!

    2. Take part in our exciting FY14 Partner Kickoff Twitter contest and get entered to win a FREE OPN Exchange pass! Do this by tweeting @oraclepartners a creative picture of your team watching the LIVE event. The winner will be selected at the end of the show!

    3. Finally, help us spread the word to the Twitter community using this tweet: "Can't wait for #Oracle's Global FY14 Partner Kickoff tomorrow 6/25 at 9AM PT! Join the #OPN conversation! http://bit.ly/12goIPR"

    You can join the live event by visiting the OPN homepage or OPN's Facebook page on the day of the event. Red Stack. Red team. Engineered for Growth!

    The OPN Communications Team

    Read the article

  • INETA NorAm Component Code Challenge

    - by Chris Williams
    Want to win a trip to TechEd 2011? INETA NorAm is hosting a contest with our partners to see who can build a .NET application making effective use of reusable components to solve a problem.

    The Rules: Any .NET application (WinForms, ASP.NET, WPF, Silverlight, Windows Phone 7, etc.) built in the last year (since 1/1/2010) using at least 1 component from at least 1 approved vendor. Then make a 3-5 minute Camtasia video showing your entry and describing what component(s) you used and why your application is cool. Our judges will review the submissions, and the best two will win a scholarship to Tech·Ed 2011, May 16-19 in Atlanta, GA, including airfare, hotel, and conference pass.

    The Judging: Entries will be judged on four criteria:

    1. Effective use of a component to solve a problem/display data
    2. Innovative use of components
    3. Impact using components (i.e. reduction in lines of code written, increased productivity, etc.)
    4. Most creative use of a component

    Timeline: Hurry! The submission deadline is March 15, 2011 at midnight Eastern Standard Time. More information can be found on the INETA Component Code Challenge page: http://ineta.org/CodeChallenge/Default.aspx

    Read the article
