Search Results

Search found 35149 results on 1406 pages for 'yield return'.


  • Algorithm for optimal control on space ship using accelerometer input data

    - by mm24
    Does someone have a good algorithm for controlling a space ship in a vertical shooter game using acceleration data? I have written a simple algorithm, but it works very badly. I save an initial acceleration value (used to calibrate the movement according to the user's initial position) and subtract it from the current acceleration to get a "calibrated" value. The problem is that basing the movement solely on relative acceleration causes a loss of sensitivity: certain movements end up independent of the initial position. Would anyone be able to share a better solution? I am wondering if I should also use/integrate input from the gyroscope hardware. Here is my sample code for a Cocos2d iOS game:

        - (void) accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration
        {
            if (calibrationLayer.visible) {
                [self evaluateCalibration:acceleration];
                initialAccelleration = acceleration;
                return;
            }
            if ([self evaluatePause]) {
                return;
            }
            ShooterScene *shooterScene = (ShooterScene *) [self parent];
            ShipEntity *playerSprite = [shooterScene playerShip];
            float accellerationtSensitivity = 0.5f;
            UIAccelerationValue xAccelleration = acceleration.x - initialAccelleration.x;
            UIAccelerationValue yAccelleration = acceleration.y - initialAccelleration.y;
            if (xAccelleration > 0.05 || xAccelleration < -0.05) {
                [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                      playerSprite.position.y + yAccelleration * 80)];
            } else if (yAccelleration > 0.05 || yAccelleration < -0.05) {
                [playerSprite setPosition:CGPointMake(playerSprite.position.x + xAccelleration * 80,
                                                      playerSprite.position.y + yAccelleration * 80)];
            }
        }
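
    A common remedy for twitchy accelerometer control (an assumption on my part, not something the question states) is to low-pass filter the calibrated reading, so jitter damps out while sustained tilt still moves the ship. A minimal sketch of the idea, written in C# for brevity even though the question's code is Objective-C; all names are hypothetical:

        // Exponential low-pass filter over the calibrated tilt reading.
        public sealed class TiltFilter
        {
            public float X { get; private set; }
            public float Y { get; private set; }
            private readonly float alpha;   // 0..1, smaller = smoother but laggier

            public TiltFilter(float alpha) { this.alpha = alpha; }

            // calibratedX/Y: current reading minus the calibration reading
            public void Update(float calibratedX, float calibratedY)
            {
                X += alpha * (calibratedX - X);
                Y += alpha * (calibratedY - Y);
            }
        }

    Feeding the filtered value into a velocity (scaled by frame time) rather than adding it straight to the position also tends to feel less erratic.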

    Read the article

  • Validating data to nest if or not within try and catch

    - by Skippy
    I am validating data; in this case I want one of three ints. I am asking this question because it is the fundamental principle I'm interested in. This is a basic example, but I am developing best practices now, so that when things become more complicated later I am better equipped to manage them. Is it preferable to have the try and catch followed by the condition:

        public static int getProcType()
        {
            try {
                procType = getIntInput("Enter procedure type -\n"
                        + " 1 for Exploratory,\n"
                        + " 2 for Reconstructive, \n"
                        + "3 for Follow up: \n");
            } catch (NumberFormatException ex) {
                System.out.println("Error! Enter a valid option!");
                getProcType();
            }
            if (procType == 1 || procType == 2 || procType == 3) {
                hrlyRate = hrlyRate(procType);
                procedure = procedure(procType);
            } else {
                System.out.println("Error! Enter a valid option!");
                getProcType();
            }
            return procType;
        }

    Or is it better to put the if within the try and catch?

        public static int getProcType()
        {
            try {
                procType = getIntInput("Enter procedure type -\n"
                        + " 1 for Exploratory,\n"
                        + " 2 for Reconstructive, \n"
                        + "3 for Follow up: \n");
                if (procType == 1 || procType == 2 || procType == 3) {
                    hrlyRate = hrlyRate(procType);
                    procedure = procedure(procType);
                } else {
                    System.out.println("Error! Enter a valid option!");
                    getProcType();
                }
            } catch (NumberFormatException ex) {
                System.out.println("Error! Enter a valid option!");
                getProcType();
            }
            return procType;
        }

    I am thinking the if within the try may be quicker, but also may be clumsy. Which would be better as my programming becomes more advanced?
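
    One structural point worth noting: the recursive retry in getProcType discards the value returned by the inner call, so a loop is often the safer shape regardless of where the if sits. A minimal sketch of the loop form, in C# (the question's code is Java, but the structure carries over); GetIntInput is a stand-in for the question's helper:

        static int GetProcType()
        {
            while (true)
            {
                try
                {
                    int procType = GetIntInput("Enter procedure type - 1, 2 or 3: ");
                    if (procType >= 1 && procType <= 3)
                        return procType;   // valid: leave the loop
                }
                catch (FormatException)
                {
                    // bad input falls through to the shared retry message
                }
                Console.WriteLine("Error! Enter a valid option!");
            }
        }

        // stand-in for the question's helper
        static int GetIntInput(string prompt)
        {
            Console.Write(prompt);
            return int.Parse(Console.ReadLine());
        }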

    Read the article

  • Variables in static library are never initialized. Why?

    - by Coyote
    I have a bunch of variables that should be initialized when my game launches, but most of them are never initialized. Here is an example of the code:

    MyClass.h

        class MyClass : public BaseObject
        {
            DECLARE_CLASS_RTTI(MyClass, BaseObject);
            ...
        };

    MyClass.cpp

        REGISTER_CLASS(MyClass)

    where REGISTER_CLASS is a macro defined as follows:

        #define REGISTER_CLASS(className)\
            class __registryItem##className : public __registryItemBase {\
                virtual className* Alloc(){ return NEW className(); }\
                virtual BaseObject::RTTI& GetRTTI(){ return className::RTTI; }\
            };\
            \
            const __registryItem##className __registeredItem##className(#className);

    and __registryItemBase looks like this:

        class __registryItemBase {
            __registryItemBase(const _string name):mName(name){ ClassRegistry::Register(this); }
            const _string mName;
            virtual BaseObject* Alloc() = 0;
            virtual BaseObject::RTTI& GetRTTI() = 0;
        };

    Now, the code is similar to what I currently have, and what I have works flawlessly: all the registered classes are registered with a ClassManager before main(...) is called. I'm able to instantiate and configure components from scripts, auto-register them to the right system, etc... The problem arises when I create a static library (currently for the iPhone, but I fear it will happen with Android as well). In that case the code in the .cpp files is never registered. Why is the resulting code not executed when it is in the library, while the same code in the program's binary is always executed? Bonus questions: For this to work in the static library, what should I do? Is there something I am missing? Do I need to pass a flag when building the lib? Should I create another structure and init all the __registeredItem##className using that structure?

    Read the article

  • MiniMax not working properly (for checkers game)

    - by engineer
    I am creating a checkers game, but my MiniMax is not functioning properly: it always switches between two positions for its move (index 20 and 17). Here is my code:

        public double MiniMax(int[] board, int depth, int turn, int red_best, int black_best)
        {
            int source;
            int dest;
            double MAX_SCORE = -INFINITY, newScore;
            int MAX_DEPTH = 3;
            int[] newBoard = new int[32];

            generateMoves(board, turn);
            System.arraycopy(board, 0, newBoard, 0, 32);

            if (depth == MAX_DEPTH) {
                return Evaluation(turn, board);
            }

            for (int z = 0; z < possibleMoves.size(); z += 2) {
                source = Integer.parseInt(possibleMoves.elementAt(z).toString());
                System.out.println("SOURCE= " + source);
                dest = Integer.parseInt(possibleMoves.elementAt(z + 1).toString());
                System.out.println("DEST = " + dest);
                applyMove(newBoard, source, dest);
                newScore = MiniMax(newBoard, depth + 1, opponent(turn), red_best, black_best);
                if (newScore > MAX_SCORE) {
                    MAX_SCORE = newScore;
                    maxSource = source;   // maxSource and maxDest will be used to perform the move
                    maxDest = dest;
                }
                if (MAX_SCORE > black_best) {
                    if (MAX_SCORE >= red_best) break;   /* alpha-beta cutoff */
                    else black_best = (int) MAX_SCORE;
                }
                if (MAX_SCORE < red_best) {
                    if (MAX_SCORE <= black_best) break;   /* alpha-beta cutoff */
                    else red_best = (int) MAX_SCORE;
                }
            } // for ends
            return MAX_SCORE;
        } // end minimax

    I am unable to find the logical mistake. Any idea what's going wrong?

    Read the article

  • QapTcha error issue. Works locally, not on live server [migrated]

    - by BlassFemur
    I am adding QapTcha (http://demos.myjqueryplugins.com/qaptcha/) to a website that I am working on, and I'm getting the error "Uncaught TypeError: Cannot read property 'error' of null". What's weird to me is that everything works perfectly locally, with no errors or anything. Once I uploaded via FTP to the live server, I get the above error. Below is the block of code that seems to generate it:

        Slider.draggable({
            revert: function(){
                if(opts.autoRevert)
                {
                    if(parseInt(Slider.css("left")) > (bgSlider.width()-Slider.width()-10)) return false;
                    else return true;
                }
            },
            containment: bgSlider,
            axis:'x',
            stop: function(event,ui){
                if(ui.position.left > (bgSlider.width()-Slider.width()-10))
                {
                    // set the SESSION iQaptcha in PHP file
                    $.post(opts.PHPfile,{
                        action : 'qaptcha',
                        qaptcha_key : inputQapTcha.attr('name')
                    },
                    function(data) {
                        if(!data.error)   // <-- Uncaught TypeError: Cannot read property 'error' of null
                        {
                            Slider.draggable('disable').css('cursor','default');
                            inputQapTcha.val('');
                            TxtStatus.text(opts.txtUnlock).addClass('dropSuccess').removeClass('dropError');
                            form.find('input[type=\'submit\']').removeAttr('disabled');
                            if(opts.autoSubmit) form.find('input[type=\'submit\']').trigger('click');
                        }
                    },'json');
                }
            }
        });

    I'm not really sure why it works locally and not on the server. Any help/suggestions would be appreciated. Thanks

    Read the article

  • C# 4.0 Optional/Named Parameters Beginner's Tutorial

    - by mbcrump
    One of the interesting features of C# 4.0 is support for both named and optional arguments. They are often very useful together, but they are actually two different things. Optional arguments give us the ability to omit arguments in method invocations; named arguments allow us to specify arguments by name instead of by position. Code using named parameters is often more readable than code relying on argument position. These features were long overdue, especially in regard to COM interop. Below, I have included some examples to help you understand them more in depth. Please remember to target the .NET 4 Framework when trying these samples.

        using System;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //C# 4.0 Optional/Named Parameters Tutorial

                    Foo();                              //Prints to the console | Return Nothing 0
                    Foo("Print Something");             //Prints to the console | Print Something 0
                    Foo("Print Something", 1);          //Prints to the console | Print Something 1
                    Foo(x: "Print Something", i: 5);    //Prints to the console | Print Something 5
                    Foo(i: 5, x: "Print Something");    //Prints to the console | Print Something 5
                    Foo("Print Something", i: 5);       //Prints to the console | Print Something 5
                    Foo2(i3: 77);                       //Prints to the console | 77
                //  Foo(x: "Print Something", 5);       //Positional parameters must come before named arguments. This will error out.
                    Console.Read();
                }

                static void Foo(string x = "Return Nothing", int i = 0)
                {
                    Console.WriteLine(x + " " + i + Environment.NewLine);
                }

                static void Foo2(int i = 1, int i2 = 2, int i3 = 3, int i4 = 4)
                {
                    Console.WriteLine(i3);
                }
            }
        }
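
    One practical payoff worth spelling out (my example, not mbcrump's): a single method with optional parameters can replace a whole family of overloads. A sketch with a hypothetical Connect method:

        // Before C# 4.0: one overload per argument combination.
        static void Connect(string host) { Connect(host, 80); }
        static void Connect(string host, int port) { Connect(host, port, 30); }
        static void Connect(string host, int port, int timeoutSeconds) { /* ... */ }

        // With C# 4.0: one method covers all of the above.
        static void Connect2(string host, int port = 80, int timeoutSeconds = 30) { /* ... */ }

        // Named arguments let callers skip the middle parameter:
        // Connect2("example.com", timeoutSeconds: 5);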

    Read the article

  • Collision within a poly

    - by G1i1ch
    For an html5 engine I'm making, I'm using a path poly for speed. I'm having trouble finding a way to get collision with the walls of the poly. To keep it simple, I just have a vector for the object and an array of vectors for the poly. I'm using Cartesian vectors, and they're 2D. Say poly = [[550,0],[169,523],[-444,323],[-444,-323],[169,-523]]; it's just a pentagon I generated. The object that will collide is object; object.pos is its position and object.vel is its velocity. They're both 2D vectors too. I've had some success getting it to find a collision, but it's just black-box code I ripped from a C++ example. It's very obscure inside, and all it does is return true/false; it doesn't return which vertices are collided or the collision point. I'd really like to understand this and make my own, so I can have more meaningful collision. I'll tackle that later though. Again, the question is just: how does one find a collision with the walls of a poly, given the poly vertices and the object's position + velocity? If more info is needed please let me know. And if all anyone can do is point me in the right direction, that's great.
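
    For the "right direction" part: the standard even-odd ray-cast test tells you whether a point is inside a polygon, and checking the segment from object.pos to object.pos + object.vel against each edge identifies which wall was crossed. A minimal sketch of the inside test, written in C# here, with the poly laid out as in the question's vertex array:

        // Even-odd rule: cast a ray to the right of (px, py) and count edge crossings.
        static bool PointInPolygon(double px, double py, double[][] poly)
        {
            bool inside = false;
            for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
            {
                double xi = poly[i][0], yi = poly[i][1];
                double xj = poly[j][0], yj = poly[j][1];
                bool crosses = (yi > py) != (yj > py)
                    && px < (xj - xi) * (py - yi) / (yj - yi) + xi;
                if (crosses) inside = !inside;   // odd number of crossings = inside
            }
            return inside;
        }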

    Read the article

  • Confused about javascript module pattern implementation

    - by Damon
    I have a class written on a project I'm working on that I've been told uses the module pattern, but it's doing things a little differently than the examples I've seen. It basically takes this form:

        (function ($, document, window, undefined) {
            var module = {
                foo : bar,
                aMethod : function (arg) {
                    className.bMethod(arg);
                },
                bMethod : function (arg) {
                    console.log('spoons');
                }
            };
            window.ajaxTable = ajaxTable;
        })(jQuery, document, window);

    I get what's going on here. But I'm not sure how this relates to most of the definitions I've seen of the module (or revealing module?) pattern, like this one from briancray:

        var module = (function () {
            // private variables and functions
            var foo = 'bar';

            // constructor
            var module = function () {
            };

            // prototype
            module.prototype = {
                constructor: module,
                something: function () {
                }
            };

            // return module
            return module;
        })();

        var my_module = new module();

    Is the first example basically like the second, except everything is in the constructor? I'm just wrapping my head around patterns, and the little things at the beginnings and endings always make me unsure what I should be doing.

    Read the article

  • sqlplus: Running "set lines" and "set pagesize" automatically

    - by katsumii
    This is a followup to my previous entry, Using the full tty real estate with sqlplus (INOUE Katsumi @ Tokyo). 'rlwrap' is widely used to add a history function and command line editing to 'sqlplus'. Here's another, but again kludgy, implementation. First, this is the alias:

        alias sqlplus="rlwrap -z ~/sqlplus.filter sqlplus"

    And this is the file content:

        #!/usr/bin/env perl
        use lib ($ENV{RLWRAP_FILTERDIR} or ".");
        use RlwrapFilter;
        use POSIX qw(:signal_h);
        use strict;

        my $filter = new RlwrapFilter;
        $filter -> prompt_handler(\&prompt);
        sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
        $SIG{WINCH} = 'winchHandler';
        $filter -> run;

        sub winchHandler {
            $filter -> input_handler(\&input);
            sigprocmask(SIG_UNBLOCK, POSIX::SigSet->new(28));
            $SIG{WINCH} = 'winchHandler';
            $filter -> run;
        }

        sub input {
            $filter -> input_handler(undef);
            return `resize |sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p"` . $_;
        }

        sub prompt {
            if ($_ =~ "SQL> ") {
                $filter -> input_handler(\&input);
                $filter -> prompt_handler(undef);
            }
            return $_;
        }

    I hope I can compare these two implementations after testing more and getting some feedback.

    Read the article

  • Best way to blend colors in tile lighting? (XNA)

    - by Lemoncreme
    I have made a decent, fast, recursive, colored tile lighting system in my game. It does everything I need except one thing: different colors are not blended at all. Here is my color blend code:

        return (new Color(
            (byte)MathHelper.Clamp(color.R / factor, 0, 255),
            (byte)MathHelper.Clamp(color.G / factor, 0, 255),
            (byte)MathHelper.Clamp(color.B / factor, 0, 255)));

    As you can see, it does not take the color already in place into account. color is the color of the previous light, which is weakened by the above code by factor. If I wanted to blend using the color already in place, I would use the variable blend. Here is an example of a blend I tried, using blend, that failed:

        return (new Color(
            (byte)MathHelper.Clamp(((color.R + blend.R) / 2) / factor, 0, 255),
            (byte)MathHelper.Clamp(((color.G + blend.G) / 2) / factor, 0, 255),
            (byte)MathHelper.Clamp(((color.B + blend.B) / 2) / factor, 0, 255)));

    This color blend produces inaccurate and strange results. I need a blend that is accurate, like the first example, that blends the two colors together. What is the best way to do this?
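
    A blend that is common in tile lighting (an assumption about what "accurate" should mean here, not something the question specifies) is a per-channel max: attenuate the incoming light first, then keep whichever channel is brighter, so overlapping lights never darken each other the way averaging does. A sketch reusing the question's color, blend and factor variables, assuming factor is a float >= 1:

        return (new Color(
            (byte)MathHelper.Clamp(Math.Max(color.R / factor, (float)blend.R), 0, 255),
            (byte)MathHelper.Clamp(Math.Max(color.G / factor, (float)blend.G), 0, 255),
            (byte)MathHelper.Clamp(Math.Max(color.B / factor, (float)blend.B), 0, 255)));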

    Read the article

  • Ways to organize interface and implementation in C++

    - by Felix Dombek
    I've seen that there are several different paradigms in C++ concerning what goes into the header file and what into the cpp file. AFAIK, most people, especially those from a C background, do:

    foo.h

        class foo {
        private:
            int mem;
            int bar();
        public:
            foo();
            foo(const foo&);
            foo& operator=(foo);
            ~foo();
        };

    foo.cpp

        #include "foo.h"

        int foo::bar() { return mem; }
        foo::foo() { mem = 42; }
        foo::foo(const foo& f) { mem = f.mem; }
        foo& foo::operator=(foo f) { mem = f.mem; return *this; }
        foo::~foo() {}

        int main(int argc, char *argv[]) {
            foo f;
        }

    However, my lecturers usually teach C++ to beginners like this:

    foo.h

        class foo {
        private:
            int mem;
            int bar() { return mem; }
        public:
            foo() { mem = 42; }
            foo(const foo& f) { mem = f.mem; }
            foo& operator=(foo f) { mem = f.mem; return *this; }
            ~foo() {}
        };

    foo.cpp

        #include "foo.h"

        int main(int argc, char* argv[]) {
            foo f;
        }
        // other global helper functions, DLL exports, and whatnot

    Originally coming from Java, I have also always stuck to this second way, for several reasons: I only have to change something in one place if the interface or method names change; I like the different indentation of things in classes when I look at their implementation; and I find names like foo more readable than foo::foo. I want to collect pros and cons for either way. Maybe there are even still other ways? One disadvantage of my way is of course the need for occasional forward declarations.

    Read the article

  • Design for object with optional and modifiable attributes?

    - by Ikuzen
    I've been using the Builder pattern to create objects with a large number of attributes, where most of them are optional. Up until now I've defined them as final, as recommended by Joshua Bloch and other authors, and haven't needed to change their values. What should I do, though, if I need a class with a substantial number of optional but non-final (mutable) attributes? My Builder pattern code looks like this:

        public class Example {
            // All possible parameters (optional or not)
            private final int param1;
            private final int param2;

            // Builder class
            public static class Builder {
                private final int param1;   // Required parameters
                private int param2 = 0;     // Optional parameters - initialized to default

                // Builder constructor
                public Builder(int param1) {
                    this.param1 = param1;
                }

                // Setter-like methods for optional parameters
                public Builder param2(int value) {
                    param2 = value;
                    return this;
                }

                // build() method
                public Example build() {
                    return new Example(this);
                }
            }

            // Private constructor
            private Example(Builder builder) {
                param1 = builder.param1;
                param2 = builder.param2;
            }
        }

    Can I just remove the final keyword from the declaration to be able to access the attributes externally (through normal setters, for example)? Or is there a creational pattern that allows optional but non-final attributes that would be better suited to this case?

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .NET) tell me how much time I spent in each function, but I'd like more information about the parameters passed to the function, in order to know which parameters impact performance the most. Let's say that I have a function like this:

        int myFunction(int myParam1, int myParam2)
        {
            // Do and return something based on the value of myParam1 and myParam2.
            // The code is likely to use if, for, while, switch, etc....
        }

    I would like a tool that tells me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result looking like this:

        For "myFunction":

        value    | value    | Number of | Average
        myParam1 | myParam2 | calls     | time
        ---------|----------|-----------|--------
        1        | 5        | 500       | 301 ms
        2        | 5        | 250       | 1253 ms
        3        | 7        | 1268      | 538 ms
        ...

    That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took 301 ms on average to return a value. The idea behind this is to do some statistical optimization, by organizing my code so that the blocks of code most likely to be executed are tested before the ones less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors, and the like).
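
    Lacking an off-the-shelf parameter-aware profiler, a hand-rolled wrapper gets surprisingly far. A C# sketch (all names hypothetical) that buckets Stopwatch timings by argument values:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        static class ParamProfiler
        {
            // key "p1,p2" -> [0] call count, [1] total elapsed Stopwatch ticks
            static readonly Dictionary<string, long[]> Stats = new Dictionary<string, long[]>();

            public static int MyFunctionTimed(int myParam1, int myParam2)
            {
                var sw = Stopwatch.StartNew();
                int result = MyFunction(myParam1, myParam2);   // the function under test
                sw.Stop();

                string key = myParam1 + "," + myParam2;
                long[] s;
                if (!Stats.TryGetValue(key, out s)) Stats[key] = s = new long[2];
                s[0]++;
                s[1] += sw.ElapsedTicks;
                return result;
            }

            public static void Dump()
            {
                foreach (var kv in Stats)
                    Console.WriteLine("{0}: {1} calls, avg {2:F1} ms", kv.Key, kv.Value[0],
                        kv.Value[1] * 1000.0 / (Stopwatch.Frequency * kv.Value[0]));
            }

            static int MyFunction(int a, int b) { return a + b; }   // stand-in
        }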

    Read the article

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs, and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read an OpenDocument spreadsheet in C#.

    Importer as example

    My previous post about importers showed you how to build flexible importer support into your web application. This post introduces a practical example: one of my importers. Of course, sensitive code is omitted. We start with the ODS importer class and we add new methods as we go.

        public class OdsImporter : ImporterBase
        {
            public OdsImporter()
            {
            }

            public override string[] SupportedFileExtensions
            {
                get { return new[] { "ods" }; }
            }

            public override ImportResult Import(Stream fileStream, long companyId, short year)
            {
                string contentXml = GetContentXml(fileStream);

                var result = new ImportResult();
                var doc = XDocument.Parse(contentXml);

                var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);

                foreach (var row in rows)
                {
                    ImportRow(row, companyId, year, result);
                }

                return result;
            }
        }

    The class given here just extends the base class for importers (the previous post uses an interface, but as I already said there, you move to an abstract base class when writing code for real projects). The Import method reads data from the *.ods file, parses it (it is XML), finds all data rows and imports the data. As you may see, the first row is skipped. This is because the first row on my sheet is always the headers row.

    Reading ODS file

    Our Import method starts by getting the XML from the *.ods file. ODS files, like OpenXML files, are zipped containers that hold different files. We need content.xml, as all data is kept there. To get the contents of this file we use the SharpZipLib library to read the uploaded file as a *.zip file.

        private static string GetContentXml(Stream fileStream)
        {
            var contentXml = "";

            using (var zipInputStream = new ZipInputStream(fileStream))
            {
                ZipEntry contentEntry = null;
                while ((contentEntry = zipInputStream.GetNextEntry()) != null)
                {
                    if (!contentEntry.IsFile)
                        continue;
                    if (contentEntry.Name.ToLower() == "content.xml")
                        break;
                }

                if (contentEntry == null || contentEntry.Name.ToLower() != "content.xml")
                {
                    throw new Exception("Cannot find content.xml");
                }

                var bytesResult = new byte[] { };
                var bytes = new byte[2000];
                var i = 0;

                while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)
                {
                    var arrayLength = bytesResult.Length;
                    Array.Resize<byte>(ref bytesResult, arrayLength + i);
                    Array.Copy(bytes, 0, bytesResult, arrayLength, i);
                }
                contentXml = Encoding.UTF8.GetString(bytesResult);
            }
            return contentXml;
        }

    If content.xml is found, we stop browsing the archive, read the file into memory and return it as a UTF-8 string.

    Importing rows

    Our last task is to import the rows. We use a special method for this, as we have to handle some tricks here. To keep files smaller, the cell count per row is not always the same: if there is more than one empty cell in a row, ODS keeps only one cell for the whole run of sequential empty cells. This cell has an attribute called number-columns-repeated, whose value is set to the number of sequential empty cells. This is why we use two indexes for the cells collection.

        private void ImportRow(XElement row, ImportResult result)
        {
            var cells = (from c in row.Descendants()
                         where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"
                         select c).ToList();

            var dto = new DataDto();

            var count = cells.Count;
            var j = -1;

            for (var i = 0; i < count; i++)
            {
                j++;
                var cell = cells[i];
                var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");
                if (attr != null)
                {
                    var numToSkip = 0;
                    if (int.TryParse(attr.Value, out numToSkip))
                    {
                        j += numToSkip - 1;
                    }
                }

                if (i > 30) break;
                if (j == 0)
                {
                    dto.SomeProperty = cells[i].Value;
                }
                if (j == 1)
                {
                    dto.SomeOtherProperty = cells[i].Value;
                }
                // some more data reading
            }

            // save data
        }

    You can define your own class for import results and add to it all the problems found during the import. Your application gets the results and shows them to the user.

    Conclusion

    Reading ODS files may seem a complex task, but actually it is very easy if we need only the data from those documents. We can use some zip library to get the content file and then parse it as XML. It is not hard to go through the XML, but there are some optimization tricks we have to know. The code here is safe to use in web applications, as it does not use any APIs that place special demands on the server or infrastructure.

    Read the article

  • How to automatically render all opaque meshes with a specific shader?

    - by dsilva.vinicius
    I have a specular outline shader that I want applied to all opaque meshes of the scene whenever a specific camera renders. The shader works properly when it is manually applied to a material. The shader is as follows:

        Shader "Custom/Outline" {
            Properties {
                _Color ("Main Color", Color) = (.5,.5,.5,1)
                _OutlineColor ("Outline Color", Color) = (1,0.5,0,1)
                _Outline ("Outline width", Range (0.0, 0.1)) = .05
                _SpecColor ("Specular Color", Color) = (0.5, 0.5, 0.5, 1)
                _Shininess ("Shininess", Range (0.03, 1)) = 0.078125
                _MainTex ("Base (RGB) Gloss (A)", 2D) = "white" {}
            }
            SubShader {
                Tags { "Queue"="Overlay" "RenderType"="Opaque" }
                Pass {
                    Name "OUTLINE"
                    Tags { "LightMode" = "Always" }
                    Cull Off
                    ZWrite Off
                    // Uncomment to show outline always.
                    //ZTest Always
                    CGPROGRAM
                    #pragma target 3.0
                    #pragma vertex vert
                    #pragma fragment frag
                    #include "UnityCG.cginc"

                    struct appdata {
                        float4 vertex : POSITION;
                        float3 normal : NORMAL;
                    };

                    struct v2f {
                        float4 pos : POSITION;
                        float4 color : COLOR;
                    };

                    float _Outline;
                    float4 _OutlineColor;

                    v2f vert(appdata v) {
                        // just make a copy of incoming vertex data but scaled according to normal direction
                        v2f o;
                        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                        float3 norm = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
                        float2 offset = TransformViewToProjection(norm.xy);
                        o.pos.xy += offset * o.pos.z * _Outline;
                        o.color = _OutlineColor;
                        return o;
                    }

                    float4 frag(v2f fromVert) : COLOR {
                        return fromVert.color;
                    }
                    ENDCG
                }
                UsePass "Specular/FORWARD"
            }
            FallBack "Specular"
        }

    The camera used for the effect has just a script component which sets up the shader replacement:

        using UnityEngine;
        using System.Collections;

        public class DetectiveEffect : MonoBehaviour {

            public Shader EffectShader;

            // Use this for initialization
            void Start () {
                this.camera.SetReplacementShader(EffectShader, "RenderType=Opaque");
            }

            // Update is called once per frame
            void Update () {
            }
        }

    Unfortunately, whenever I use this camera I just see the background color. Any ideas?

    Read the article

  • Using runtime generic type reflection to build a smarter DAO

    - by kerry
    Have you ever wished you could get the runtime type of your generic class? I wonder why they didn't put this in the language. It is possible, however, with reflection. Consider a data access object (DAO):

        public interface Identifiable {
            public Long getId();
        }

        public interface Dao<T extends Identifiable> {
            public T findById(Long id);
            public void save(T obj);
            public void delete(T obj);
        }

    Using reflection, we can create a DAO implementation base class, HibernateDao, that will work for any object:

        import java.lang.reflect.ParameterizedType;

        public class HibernateDao<T extends Identifiable> implements Dao<T> {

            private final Class<T> clazz;
            private final Session session;

            @SuppressWarnings("unchecked")
            public HibernateDao(Session session) {
                this.session = session;
                // the magic
                ParameterizedType parameterizedType = (ParameterizedType) getClass().getGenericSuperclass();
                this.clazz = (Class<T>) parameterizedType.getActualTypeArguments()[0];
            }

            @SuppressWarnings("unchecked")
            public T findById(Long id) {
                return (T) session.get(clazz, id);
            }

            public void save(T obj) {
                session.saveOrUpdate(obj);
            }

            public void delete(T obj) {
                session.delete(obj);
            }
        }

    Then, all we have to do is extend from the class:

        public class BookDaoHibernateImpl extends HibernateDao<Book> {
        }

    Read the article

  • Is there a better way to consume an ASP.NET Web API call in an MVC controller?

    - by davidisawesome
    In a new project I am creating for my work, I am creating a fairly large ASP.NET Web API. The API will live in a separate Visual Studio solution that also contains all of my business logic, database interactions, and Model classes. In the test application I am creating (which is ASP.NET MVC 4), I want to be able to hit an API URL I defined from the controller and cast the returned JSON to a Model class. The reason behind this is that I want to take advantage of strongly typing my views to a Model. This is all still in a proof-of-concept stage, so I have not done any performance testing on it, but I am curious whether what I am doing is a good practice, or if I am crazy for even going down this route. Here is the code on the client controller:

        public class HomeController : Controller
        {
            protected string dashboardUrlBase = "http://localhost/webapi/api/StudentDashboard/";

            public ActionResult Index() //This view is strongly typed against User
            {
                //testing against Joe Bob
                string adSAMName = "jBob";
                WebClient client = new WebClient();
                string url = dashboardUrlBase + "GetUserRecord?userName=" + adSAMName;
                //'User' is a Model class that I have defined.
                User result = JsonConvert.DeserializeObject<User>(client.DownloadString(url));
                return View(result);
            }
            . . .
        }

    If I choose to go this route, another thing to note is that I am loading several partial views on this page (as I will also do on subsequent pages). The partial views are loaded via an $.ajax call that hits this controller and does basically the same thing as the code above: instantiate a new WebClient, define the URL to hit, deserialize the result and cast it to a Model class. So it is possible (and likely) that I could be performing the same actions 4-5 times for a single page. Is there a better method to do this that will: (1) let me keep strongly typed views, and (2) do my work on the server rather than on the client (this is just a preference, since I can write C# faster than I can write javascript)?
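
    One way to cut the repetition (a sketch, not the one true pattern) is a small typed helper around a single shared HttpClient, which also avoids creating a new client per call; it assumes Json.NET, as the question already does:

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;
        using Newtonsoft.Json;

        public static class DashboardApi
        {
            // one shared client; per-request WebClient/HttpClient instances waste sockets
            private static readonly HttpClient Http = new HttpClient
            {
                BaseAddress = new Uri("http://localhost/webapi/api/")
            };

            public static async Task<T> GetAsync<T>(string relativeUrl)
            {
                string json = await Http.GetStringAsync(relativeUrl);
                return JsonConvert.DeserializeObject<T>(json);
            }
        }

        // in an async Task<ActionResult> Index() action:
        // User result = await DashboardApi.GetAsync<User>("StudentDashboard/GetUserRecord?userName=" + adSAMName);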

    Read the article

  • Validate that a Checkbox is checked using javascript

    - by H(at)Ni
    I was facing a challenge yesterday: I was creating a Visual web part and I wanted to validate that the submit button only works if the user has checked an "I agree to terms" checkbox. Something weird was that I tested my code on a normal asp.net website and it worked perfectly, while it had a different behaviour inside the web part: whenever I check the checkbox, the button is enabled, but it does not fire the asp.net validators on the client side. It posts back the page, and the validators only appear after that. So I changed my way of thinking and reached a different solution: call a javascript function whenever the button is clicked, and check there whether the checkbox is checked. To illustrate what I'm saying, here is an example:

    1. Button in the aspx page:

        <asp:Button OnClientClick="CheckForCondition();" ValidationGroup="CompaniesSection" ID="btnCompaniesSubmit" runat="server" Text="Submit" />

    2. CheckForCondition() function:

        <script language="javascript" type="text/javascript">
            function CheckForCondition() {
                if ($jq('#<%= ChkCompanyCheck.ClientID %>:checked').val() == undefined) {
                    $jq('#lblCheckBox').show();
                    return false;
                }
                else {
                    $jq('#lblCheckBox').hide();
                    return true;
                }
            }
        </script>

    3. lblCheckBox is simply a label that shows a red asterisk beside the checkbox to indicate that it's a required field:

        <label id="lblCheckBox" style="color:Red;display:none">*</label>

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them, if ETags are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?

    EDIT: A link through Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:

    - If the file is compressed using GZIP, etc., use "cache-control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that the content is compressible.
    - Use Last-Modified in conjunction with ETag: belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    - Use "cache-control: public" wherever you can; this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
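
    As a concrete (and admittedly platform-specific, unlike the question's ask) illustration, here is roughly how those rules map onto ASP.NET's System.Web response cache API; lastWriteTimeUtc and contentHash are hypothetical values you would compute per resource:

        Response.Cache.SetCacheability(HttpCacheability.Public);   // "cache-control: public" where possible
        Response.Cache.SetMaxAge(TimeSpan.FromDays(30));
        Response.Cache.SetLastModified(lastWriteTimeUtc);          // Last-Modified validator
        Response.Cache.SetETag("\"" + contentHash + "\"");         // ETag from content, not inode
        Response.AppendHeader("Vary", "Accept-Encoding");          // the response is compressible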

    Read the article

  • Kinect Click counter function

    - by Sweta Dwivedi
    So I have the following Kinect click function, which checks whether the hand is within the button's bounds and then clicks via a counter. However, there is a slight problem: the first few button clicks work fine, but after it clicks one of the buttons it changes the game state and immediately clicks the other button, without the counter reaching 200. KinectClick is a method in the Button class, and each button inside a list can access it:

        public bool KinectClick(int x, int y)
        {
            if ((x >= position.X && x <= position.X + position.Width) &&
                (y >= position.Y && y <= position.Y + position.Height))
            {
                counter++;
                if (counter > 200)
                {
                    counter = 0;
                    return true;
                }
            }
            else
            {
                counter = 0;
            }
            return false;
        }

    I check whether this returns true in the game's Update method, to act as a button click:

        foreach (Button g_t in Game_theme)
        {
            if ((g_t.KinectClick(x_c, y_c) == true || g_t.ButtonClicked() == true) && g_t.name == "animoe")
            {
                Selected_anim = true;
                currentGameState = GameState.InGame;
            }
            if ((g_t.KinectClick(x_c, y_c) == true || g_t.ButtonClicked() == true) && g_t.name == "planet")
            {
                Selected_planet = true;
                currentGameState = GameState.InGame;
            }
        }
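
    The symptom is consistent with counters surviving the state change: the hand is still inside the second button's bounds, so its counter keeps climbing across states. A hedged sketch of one fix (ResetCounter and clickCooldown are hypothetical additions, not part of the question's code): reset every button's counter the moment any click fires, and ignore hover progress for a short cooldown afterwards.

        // in Button:
        public void ResetCounter() { counter = 0; }

        // in the game's Update, immediately after any KinectClick returns true:
        foreach (Button b in Game_theme) b.ResetCounter();
        clickCooldown = 60;   // frames to ignore hover progress after a click

        // at the top of Update, before testing any buttons:
        if (clickCooldown > 0) { clickCooldown--; return; }

    Note also that each if statement above calls KinectClick again for the same button, so the counter advances twice per frame; caching the result in a local variable once per button avoids that.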

    Read the article

  • Benefits of Server-side Coding

    There are numerous advantages to server-side scripting languages over client-side languages when it comes to creating web sites that are more compelling than a standard static site. Server-side scripts are executed on a web server while composing the data to return to a client. These scripts allow developers to modify the content being sent to the user before the data is returned, as well as store information about the user. In addition, server-side scripts run in an environment the developer can control. The same cannot be said for client-side languages, because the developer cannot control the user's environment the way they can control a web server: some users may turn off client scripts, some may only be allowed limited access on their system, and others may have full control of the environment. I have been developing web applications for over 9 years, and I have used server-side languages for most of the applications I have built. Here is a list of common things I have developed with server-side scripts.

    List of Common Generic Functionality:

    - Send Email
    - FTP Files
    - Security/Access Control
    - Encryption
    - URL rewriting
    - Data Access
    - Data Creation
    - I/O Access

    The one important feature server-side languages will help me with on my website is Data Access, because my component will be backed by a SQL Server database. I believe that form validation is one instance where server-side scripts and JavaScript might be used interchangeably, because it does not matter how or where the data is validated, as long as the data that gets inserted is valid. However, my personal experience would have to sway me in deciding which type of language to use for form validation, because both have advantages and disadvantages depending on the situation.

    Read the article

  • Java : 2D Collision Detection

    - by neko
    I've been working on 2D rectangle collision for weeks and still cannot get this problem fixed. The problem I'm having is how to adjust a player to obstacles when it collides. I'm referencing this link. The player sometimes does not get adjusted to obstacles, and it sometimes gets stuck in the obstacle guy after colliding. Here, the player and the obstacle both inherit from the superclass Sprite. I can detect a collision between the two rectangles, and the collision point, with:

        public Point getSpriteCollision(Sprite sprite, double newX, double newY) {
            // set each rectangle
            Rectangle spriteRectA = new Rectangle(
                    (int) getPosX(), (int) getPosY(), getWidth(), getHeight());
            Rectangle spriteRectB = new Rectangle(
                    (int) sprite.getPosX(), (int) sprite.getPosY(), sprite.getWidth(), sprite.getHeight());

            // if a sprite is colliding with the other sprite
            if (spriteRectA.intersects(spriteRectB)) {
                System.out.println("Colliding");
                return new Point((int) getPosX(), (int) getPosY());
            }
            return null;
        }

    and to adjust sprites after a collision:

        // Update the sprite's conditions
        public void update() {
            // only the player is moving for simplicity
            // collision detection on x-axis (just x-axis collision detection at this moment)
            double newX = x + vx; // calculate the x-coordinate of sprite move
            Point sprite = getSpriteCollision(map.getSprite().get(1), newX, y); // collision coordinates (x,y)
            if (sprite == null) { // if the player is not colliding with the obstacle guy
                x = newX; // move
            } else { // if collided
                if (vx > 0) { // if the player was moving from left to right
                    x = (sprite.x - vx); // this works but a bit strange
                } else if (vx < 0) {
                    x = (sprite.x + vx); // there's something wrong with this too
                }
            }
            vx = 0;
            y += vy;
            vy = 0;
        }

    I think there is something wrong in update() but cannot fix it. Right now I only have a collision between the player and an obstacle guy, but in the future I'm planning to have more of them and make them all collide with each other. What would be a good way to do it? Thanks in advance.

    Read the article

  • Do you leverage the benefits of the open-closed principle?

    - by Kaleb Pederson
    The open-closed principle (OCP) states that an object should be open for extension but closed for modification. I believe I understand it, and I use it in conjunction with SRP to create classes that do only one thing. And I try to create many small methods that make it possible to extract all the behavior controls into methods that may be extended or overridden in some subclass. Thus, I end up with classes that have many extension points, be it through dependency injection and composition, events, delegation, etc. Consider the following simple, extendable class:

        class PaycheckCalculator {
            // ...
            protected decimal GetOvertimeFactor() { return 2.0M; }
        }

    Now say, for example, that the overtime factor changes to 1.5. Since the above class was designed to be extended, I can easily subclass it and return a different overtime factor. But... despite the class being designed for extension and adhering to OCP, I'll modify the single method in question rather than subclassing, overriding the method, and re-wiring my objects in my IoC container. As a result I've violated part of what OCP attempts to accomplish. It feels like I'm just being lazy, because the above is a bit easier. Am I misunderstanding OCP? Should I really be doing something different? Do you leverage the benefits of OCP differently?

    Update: based on the answers, it looks like this contrived example is a poor one for a number of different reasons. Its main intent was to demonstrate that the class was designed to be extended by providing methods that, when overridden, would alter the behavior of public methods without the need to change internal or private code. Still, I definitely misunderstood OCP.
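
    For contrast, the extension route the question describes but skips would look like this (assuming GetOvertimeFactor is declared virtual):

        class PaycheckCalculator2013 : PaycheckCalculator
        {
            // behavior changes in a subclass; PaycheckCalculator itself stays closed
            protected override decimal GetOvertimeFactor() { return 1.5M; }
        }

    The IoC container then wires PaycheckCalculator2013 wherever the old class was injected, which is exactly the re-wiring step the question calls out as more work than editing the constant.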

    Read the article

  • Dynamically load images inside jar

    - by Rahat Ahmed
    I'm using Slick2D for a game, and while it runs fine in Eclipse, I'm trying to figure out how to make it work when exported to a runnable .jar. I have it set up to load every image located in the res/ directory. Here's the code:

        /**
         * Loads all .png images located in source folders.
         * @throws SlickException
         */
        public static void init() throws SlickException {
            loadedImages = new HashMap<>();
            try {
                URI uri = new URI(ResourceLoader.getResource("res").toString());
                File[] files = new File(uri).listFiles(new FilenameFilter() {
                    @Override
                    public boolean accept(File dir, String name) {
                        if (name.endsWith(".png"))
                            return true;
                        return false;
                    }
                });
                System.out.println("Naming filenames now.");
                for (File f : files) {
                    System.out.println(f.getName());
                    FileInputStream fis = new FileInputStream(f);
                    Image image = new Image(fis, f.getName(), false);
                    loadedImages.put(f.getName(), image);
                }
            } catch (URISyntaxException | FileNotFoundException e) {
                System.err.println("UNABLE TO LOAD IMAGES FROM RES FOLDER!");
                e.printStackTrace();
            }
            font = new AngelCodeFont("res/bitmapfont.fnt", Art.get("bitmapfont.png"));
        }

    Now the obvious problem is the line

        URI uri = new URI(ResourceLoader.getResource("res").toString());

    If I pack the res folder into the .jar, there will not be a res folder on the filesystem. How can I iterate through all the images in the compiled .jar itself, or what would be a better system for automatically loading all images?

    Read the article

  • Basic 3D Collision detection in XNA 4.0

    - by NDraskovic
    I have a problem with detecting collision between two models using BoundingSpheres in XNA 4.0. The code I'm using is very simple:

        private bool IsCollision(Model model1, Matrix world1, Model model2, Matrix world2)
        {
            for (int meshIndex1 = 0; meshIndex1 < model1.Meshes.Count; meshIndex1++)
            {
                BoundingSphere sphere1 = model1.Meshes[meshIndex1].BoundingSphere;
                sphere1 = sphere1.Transform(world1);

                for (int meshIndex2 = 0; meshIndex2 < model2.Meshes.Count; meshIndex2++)
                {
                    BoundingSphere sphere2 = model2.Meshes[meshIndex2].BoundingSphere;
                    sphere2 = sphere2.Transform(world2);

                    if (sphere1.Intersects(sphere2))
                        return true;
                }
            }
            return false;
        }

    The problem I'm getting is that when I call this method from the Update method, the program behaves as if it always returns true (which of course is not correct). The calling code is very simple (although this is only test code):

        if (IsCollision(model1, worldModel1, model2, worldModel2))
        {
            Window.Title = "Intersects";
        }

    What is causing this?
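
    A frequent cause of "always intersects" in XNA (a guess consistent with the symptoms, not a confirmed diagnosis) is that mesh bounding spheres live in mesh-bone space, so transforming them by the world matrix alone can leave oversized spheres clustered near the origin. A sketch that bakes the bone transforms in and merges the result per model:

        private static BoundingSphere GetWorldSphere(Model model, Matrix world)
        {
            // bake each mesh's bone offset into its sphere before applying the world matrix
            Matrix[] bones = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(bones);

            BoundingSphere merged = new BoundingSphere();
            foreach (ModelMesh mesh in model.Meshes)
            {
                BoundingSphere s = mesh.BoundingSphere.Transform(bones[mesh.ParentBone.Index] * world);
                merged = merged.Radius == 0f ? s : BoundingSphere.CreateMerged(merged, s);
            }
            return merged;
        }

        // usage: GetWorldSphere(model1, world1).Intersects(GetWorldSphere(model2, world2))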

    Read the article
