Search Results

Search found 15423 results on 617 pages for 'uses clause'.

  • Explicitly instantiating a generic member function of a generic structure

    - by Dennis Zickefoose
    I have a structure with a template parameter, Stream. Within that structure, there is a function with its own template parameter, Type. If I try to force a specific instance of the function to be generated and called, it works fine, if I am in a context where the exact type of the structure is known. If not, I get a compile error. This feels like a situation where I'm missing a typename, but there are no nested types. I suspect I'm missing something fundamental, but I've been staring at this code for so long all I see are redheads, and frankly writing code that uses templates has never been my forte. The following is the simplest example I could come up with that illustrates the issue. #include <iostream> template<typename Stream> struct Printer { Stream& str; Printer(Stream& str_) : str(str_) { } template<typename Type> Stream& Exec(const Type& t) { return str << t << std::endl; } }; template<typename Stream, typename Type> void Test1(Stream& str, const Type& t) { Printer<Stream> out = Printer<Stream>(str); /****** vvv This is the line the compiler doesn't like vvv ******/ out.Exec<bool>(t); /****** ^^^ That is the line the compiler doesn't like ^^^ ******/ } template<typename Type> void Test2(const Type& t) { Printer<std::ostream> out = Printer<std::ostream>(std::cout); out.Exec<bool>(t); } template<typename Stream, typename Type> void Test3(Stream& str, const Type& t) { Printer<Stream> out = Printer<Stream>(str); out.Exec(t); } int main() { Test2(5); Test3(std::cout, 5); return 0; } As it is written, gcc-4.4 gives the following: test.cpp: In function 'void Test1(Stream&, const Type&)': test.cpp:22: error: expected primary-expression before 'bool' test.cpp:22: error: expected ';' before 'bool' Test2 and Test3 both compile cleanly, and if I comment out Test1 the program executes, and I get "1 5" as I expect. So it looks like there's nothing wrong with the idea of what I want to do, but I've botched something in the implementation. If anybody could shed some light on what I'm overlooking, it would be greatly appreciated.
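    A note on the error itself: inside Test1, out has the dependent type Printer<Stream>, so at the call site the compiler cannot yet tell that Exec names a member template, and the template keyword is needed to disambiguate the explicit <bool> argument list. A minimal self-contained sketch of the corrected call, reduced to the failing case:

        #include <iostream>

        template<typename Stream>
        struct Printer {
            Stream& str;
            Printer(Stream& str_) : str(str_) { }
            template<typename Type>
            Stream& Exec(const Type& t) { return str << t << std::endl; }
        };

        template<typename Stream, typename Type>
        void Test1(Stream& str, const Type& t) {
            Printer<Stream> out(str);
            // 'out' has a type that depends on the template parameter Stream,
            // so 'Exec' must be marked as a member template before the
            // explicit <bool> argument list:
            out.template Exec<bool>(t);
        }

        int main() {
            Test1(std::cout, 5);   // prints "1" (5 converted to bool)
            return 0;
        }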

  • jquery-ui .draggable is not a function error

    - by niczoom
    I am getting the following error (using Firefox 3.5.9): $("#dragMe_" + myCount).draggable is not a function $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); Line 231 http://www.liamharding.com/pgi/pgi.php Link to page in question : http://www.liamharding.com/pgi/pgi.php For example, click the 2 checkbox's 'R25 + R50 Random Walk' then click Show/Refresh Graphs. Two graphs should be displayed, both with draggable thin horizontal red lines. Re-open the options panel and de-select R50 Random Walk, now click Show/Refresh Graphs again, 1 graph is removed and the other updated; now re-select R50 Random Walk and click Show/Refresh, you will find the still checked R25 graph gets updated ok the above error occurs and i cant figure out why. Initially, when displaying the first 2 graphs it uses the same code and it works just fine. The error occurs on this line: //********* ERROR OCCURS HERE ********** $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); Here is the code for the Show/Refresh Graphs.click() event: $("#btnShowGraphs").click(function(){ // Hide 'Options' panel (only if open AND an index is checked) if (IsOptionsPanelOpen && ($("#indexCheck:checked").length != 0)) {$('#optionImgDiv').click();}; var myCount = 0; var divIsNew = false; var gif_loader_small = '<div id="gif_loader_small"></div>'; var gif_loader_big = '<div id="gif_loader_big"></div>'; $("input:checkbox[id=indexCheck]").each(function() { if (this.checked) { // check for an existing wrapper div for the current forex item, using the current checkbox value (foxex name) if ( $("#"+this.value).length == 0 ) { console.log("New 'graphContainer' div : "+this.value); divIsNew = true; // Create new divs for graph image, drag bar and heading var $structure = " \ <li id=\""+this.value+"\" class=\"graphContainer\"> \ <div id=\"dragMe_"+myCount+"\" class=\"dragMe\"></div> \ <div id=\"image_"+myCount+"\" class=\"image\"></div> \ <div id=\"heading_"+myCount+"\" class=\"heading\"></div> \ </li> \ "; $('#graphResults').append($structure); // Hide dragMe DIV $('#dragMe_'+myCount).hide(); // Make 'dragMe' draggable div //********* ERROR OCCURS HERE ********** $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); } // Display small loading gif $(gif_loader_small).clone().appendTo( $(this).parent() ); // Display large circular loading gif var $loader = $(gif_loader_big); // add temporary css attributes onto existing graph divs as they need to be displayed diffrently if(!divIsNew){ console.log("Reposition existing 'gif_loader_big' div"); $loader = $(gif_loader_big).css({ "position" : "absolute", "top" : "35%", "opacity" : ".85"}); } // add newly styled big-loader-gif to index div $loader.clone().prependTo( $("#"+this.value) ); // Call function to fetch image using ajax get_graph(this, myCount, divIsNew); } else { // REMOVE 'graphContainer' DIVS NOT CHECKED // check for div existance if ( $("#"+this.value).length != 0 ) { console.log("DESTROY: #dragMe_"+myCount+", REMOVE: #"+this.value); // DESTROY draggable //$("#dragMe_"+myCount).draggable("destroy"); // remove div $("#"+this.value).remove(); } } // reset counters and other variables myCount++; divIsNew = false; console.log("Complete: "+this.value+", NEXT index"); }); });

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variables names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }
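    For exact-match lookups on arbitrary names, the usual alternative to std::map is a hash table (std::tr1::unordered_map or boost::unordered_map on a 2009-era g++, std::unordered_map today), which gives average O(1) lookups at the cost of losing sorted iteration. A minimal sketch, using a hypothetical variable table that maps script variable names to slot indices:

        #include <iostream>
        #include <string>
        #include <unordered_map>   // std::tr1/ or Boost unordered_map on older compilers

        int main() {
            // Hypothetical script-variable table: name -> storage slot.
            std::unordered_map<std::string, int> vars;
            vars["counter"] = 0;
            vars["total"]   = 1;
            vars["name"]    = 2;

            // Average O(1) lookup regardless of how many variables exist.
            std::unordered_map<std::string, int>::const_iterator it = vars.find("total");
            if (it != vars.end())
                std::cout << it->first << " -> slot " << it->second << "\n";
            return 0;
        }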

  • Policy-based template design: How to access certain policies of the class?

    - by dehmann
    I have a class that uses several policies that are templated. It is called Dish in the following example. I store many of these Dishes in a vector (using a pointer to simple base class), but then I'd like to extract and use them. But I don't know their exact types. Here is the code; it's a bit long, but really simple: #include <iostream> #include <vector> struct DishBase { int id; DishBase(int i) : id(i) {} }; std::ostream& operator<<(std::ostream& out, const DishBase& d) { out << d.id; return out; } // Policy-based class: template<class Appetizer, class Main, class Dessert> class Dish : public DishBase { Appetizer appetizer_; Main main_; Dessert dessert_; public: Dish(int id) : DishBase(id) {} const Appetizer& get_appetizer() { return appetizer_; } const Main& get_main() { return main_; } const Dessert& get_dessert() { return dessert_; } }; struct Storage { typedef DishBase* value_type; typedef std::vector<value_type> Container; typedef Container::const_iterator const_iterator; Container container; Storage() { container.push_back(new Dish<int,double,float>(0)); container.push_back(new Dish<double,int,double>(1)); container.push_back(new Dish<int,int,int>(2)); } ~Storage() { // delete objects } const_iterator begin() { return container.begin(); } const_iterator end() { return container.end(); } }; int main() { Storage s; for(Storage::const_iterator it = s.begin(); it != s.end(); ++it){ std::cout << **it << std::endl; std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? } return 0; } The tricky part is here, in the main() function: std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? How can I access the dessert? I don't even know the Dessert type (it is templated), let alone the complete type of the object that I'm getting from the storage. This is just a toy example, but I think my code reduces to this. I'd just like to pass those Dish classes around, and different parts of the code will access different parts of it (in the example: its appetizer, main dish, or dessert).
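    One way around not knowing the Dessert type through a DishBase* is to avoid pulling the policy object out at all and instead push the operation down into the templated class through a virtual function on the base. A trimmed-down sketch of that idea (appetizer and main omitted, printing chosen as the example operation):

        #include <iostream>
        #include <vector>

        struct DishBase {
            int id;
            explicit DishBase(int i) : id(i) {}
            virtual ~DishBase() {}
            // The base cannot return "the" Dessert (its type differs per Dish),
            // but it can expose operations that each instantiation implements.
            virtual void print_dessert(std::ostream& out) const = 0;
        };

        template<class Appetizer, class Main, class Dessert>
        class Dish : public DishBase {
            Dessert dessert_;
        public:
            explicit Dish(int id) : DishBase(id), dessert_() {}
            virtual void print_dessert(std::ostream& out) const { out << dessert_; }
        };

        int main() {
            std::vector<DishBase*> storage;
            storage.push_back(new Dish<int, double, float>(0));
            storage.push_back(new Dish<double, int, double>(1));
            for (std::vector<DishBase*>::iterator it = storage.begin(); it != storage.end(); ++it) {
                std::cout << "Dish " << (*it)->id << ", dessert: ";
                (*it)->print_dessert(std::cout);
                std::cout << std::endl;
                delete *it;
            }
            return 0;
        }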

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hopes someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes to form curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: For geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. To take a simple cube as an example, although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex, the surface normals (and possible other things, like color and texture co-ord) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. When rendering a single face of a cube: 0 1 o---o |\ | | \ | | \| o---o 3 2 (this can be considered in isolation, because the seams between this face and all adjacent faces mean than none of these vertices can be shared between faces) if rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus: verts = [v0, v1, v2, v3] colors = [c0, c0, c0, c0] normal = [n0, n0, n0, n0] Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, then I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments) The only exception to rule 2 is: When using GL_TRIANGLES (instead of strips or fans), then half of the vertices can still be re-used twice, with identical normals and colors, etc, because each cube face is rendered as two separate triangles. Again, for the same single cube face: 0 1 o---o |\ | | \ | | \| o---o 3 2 Without indices, using GL_TRIANGLES, the arrays would be something like: verts = [v0, v1, v2, v2, v3, v0] normals = [n0, n0, n0, n0, n0, n0] colors = [c0, c0, c0, c0, c0, c0] Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about: verts = 6 * 3 floats = 18 floats normals = 6 * 3 floats = 18 floats colors = 6 * 3 bytes = 18 bytes = 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used, the exact figures are just for illustration.) 
With indices, we can simplify this a little, giving: verts = [v0, v1, v2, v3] (4 * 3 = 12 floats) normals = [n0, n0, n0, n0] (4 * 3 = 12 floats) colors = [c0, c0, c0, c0] (4 * 3 = 12 bytes) indices = [0, 1, 2, 2, 3, 0] (6 shorts) = 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.
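    For reference, a rough sketch of what conclusion 3 looks like in code for the single face above, using an element buffer and glDrawElements (assumes an existing GL context and the usual vertex-attribute setup; the vertex coordinates are placeholders):

        // Four unique vertices, referenced six times through the index buffer.
        GLfloat verts[] = {
            -1.f,  1.f, 0.f,   // v0
             1.f,  1.f, 0.f,   // v1
             1.f, -1.f, 0.f,   // v2
            -1.f, -1.f, 0.f }; // v3
        GLushort indices[] = { 0, 1, 2,  2, 3, 0 };   // v0 and v2 shared by both triangles

        GLuint vbo, ibo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

        glGenBuffers(1, &ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

        // ...bind the position (and normal/color) attributes as usual, then:
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);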

  • My server app works strangely. What could be the reason(s)?

    - by Poni
    Hi! I've written a server app (two parts actually; proxy server and a game server) using C++ (board game). It uses IOCP as the sockets interface. For that app I've also written a "client simulator" (hereafter "client") app that spawns many client connections, where each of them plays, in very high speed, getting the CPU to be 100% utilized. So, that's how it goes in terms of topology: Game server - holds the game state. Real players do not connect it directly but through the proxy server. When a player joins a game, the proxy actually asks for it on behalf of that player, and the game server spawns a "player instance" for that player, and from now on, every notification between the game server and the player is being passed through the proxy. Proxy server - holds TCP connections with the real players. Players communicate with the game server through it only. Client simulator - connects to the proxy only. When running the server (again, it's actually two server apps) & client locally it all works just fine. I'm talking about 40k+ player instances in which all of them are active in a game. On the other hand, when running the server remotely with, say, 1000 clients who play things getting strange. For example, I run it as said above. Then with Task Manager I kill the client simulator app ("End Process Tree"). Then it seems like the buffer of the remote server got modified by another thread, or in other words, a memory corruption has been occurred. The server crashes because it got an unknown message id (it's a custom protocol where each message has it's own unique number). To make things clear, here is how I run the apps: PC1 - game server and clients simulator (because the clients will connect the proxy). PC2 - proxy server. The strangest thing is this: Only the remote side gets "corrupted". Remote in terms that it's not the PC I use to code the app (VC++ 2008). Let's call the PC I use to code the apps "PC1". Now for example, if this time I ran the game server on PC1 (it means that proxy server on PC2 and clients simulator on PC1), then the proxy server crashes with an "unknown message id" error. Another variation is when I run the proxy server on PC1 (again, the dev machine), the game server and the clients simulator on PC2, then the game server on PC2 gets crashed. As for the IOCP config: The servers' internal connections use the default receive/send buffer sizes. Tried even with setting them to 1MB, but no luck. I have three PCs in total; 2 x Vista 64bit <<-- one of those is the dev machine. The other is connected through WiFi. 1 x WinXP 32bit They're all connected in a "full duplex" manner. What could be the reason? Tried about everything; Stack tracing, recording some actions (like read/write logging).. I want to stress that only the PC I'm not using to code the apps crashes (actually the server app "role" which is running on it - sometimes the game server and sometimes the proxy server). At first I thought that maybe the wireless PC has problems (it's wireless..) but: TCP has it's own mechanisms to make sure the packet is delivered properly. Also, a crash also happens when trying it with the two PCs that are physically connected (Vista vs. XP). Another option is that the Windows DLLs versions might have problems, but then again, one of the tests is Vista vs. Vista, and the other is Vista vs. XP. Any idea?

  • Resizing Images In a Table Using JavaScript

    - by Abluescarab
    Hey, all. I apologize beforehand if none of this makes sense, or if I sound incompetent... I've been making webpages for a while, but don't know any JavaScript. I am trying to make a webpage for a friend, and he requested if I could write some code to resize the images in the page based on the user's screen resolution. I did some research on this question, and it's kind of similar to this, this, and this. Unfortunately, those articles didn't answer my question entirely because none of the methods described worked. Right now, I have a three-column webpage with 10 images in a table in the left sidebar, and when I use percentages for their sizes, they don't resize right between monitors. As such, I'm trying to write a JavaScript function that changes their sizes after detecting the screen resolution. The code is stripped from my post if I put it all, so I'll just say that each image links to a website and uses a separate image to change color when you hover over it. Would I have to address both images to change their sizes correctly? I use a JavaScript function to switch between them. Anyway, I tried both methods in this article and neither worked for me. If it helps, I'm using Google Chrome, but I'm trying to make this page as cross-browser as possible. Here's the code I have so far in my JavaScript function: function resizeImages() { var w = window.width; var h = window.height; var yuk = document.getElementById('yuk').style; var wb = document.getElementById('wb').style; var tf = document.getElementById('tf').style; var lh = document.getElementById('lh').style; var ko = document.getElementById('ko').style; var gz = document.getElementById('gz').style; var fb = document.getElementById('fb').style; var eg = document.getElementById('eg').style; var dl = document.getElementById('dl').style; var da = document.getElementById('da').style; if (w = "800" && h = "600") { } else if (w = "1024" && h = "768") { } else if (w = "1152" && h = "864") { } else if (w = "1280" && h = "720") { } else if (w = "1280" && h = "768") { } else if (w = "1280" && h = "800") { } else if (w = "1280" && h = "960") { } else if (w = "1280" && h = "1024") { } } Yeah, I don't have much in it because I don't know if I'm doing it right yet. Is this a way I can detect the "width" and "height" properties of a window? The "yuk", "wb", etcetera are the images I'm trying to change the size of. To sum it up: I want to resize images based on screen resolution using JavaScript, but my research attempts have been... futile. I'm sorry if that was long-winded, but thanks ahead of time!

  • jQuery line 67 saying "TypeError: 'undefined' is not a function."

    - by dfdf
    var dbShell; function doLog(s){ /* setTimeout(function(){ console.log(s); }, 3000); */ } function dbErrorHandler(err){ alert("DB Error: "+err.message + "\nCode="+err.code); } function phoneReady(){ doLog("phoneReady"); //First, open our db dbShell = window.openDatabase("SimpleNotes", 2, "SimpleNotes", 1000000); doLog("db was opened"); //run transaction to create initial tables dbShell.transaction(setupTable,dbErrorHandler,getEntries); doLog("ran setup"); } //I just create our initial table - all one of em function setupTable(tx){ doLog("before execute sql..."); tx.executeSql("CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY,title,body,updated)"); doLog("after execute sql..."); } //I handle getting entries from the db function getEntries() { //doLog("get entries"); dbShell.transaction(function(tx) { tx.executeSql("select id, title, body, updated from notes order by updated desc",[],renderEntries,dbErrorHandler); }, dbErrorHandler); } function renderEntries(tx,results){ doLog("render entries"); if (results.rows.length == 0) { $("#mainContent").html("<p>You currently do not have any notes.</p>"); } else { var s = ""; for(var i=0; i<results.rows.length; i++) { s += "<li><a href='edit.html?id="+results.rows.item(i).id + "'>" + results.rows.item(i).title + "</a></li>"; } $("#noteTitleList").html(s); $("#noteTitleList").listview("refresh"); } } function saveNote(note, cb) { //Sometimes you may want to jot down something quickly.... if(note.title == "") note.title = "[No Title]"; dbShell.transaction(function(tx) { if(note.id == "") tx.executeSql("insert into notes(title,body,updated) values(?,?,?)",[note.title,note.body, new Date()]); else tx.executeSql("update notes set title=?, body=?, updated=? where id=?",[note.title,note.body, new Date(), note.id]); }, dbErrorHandler,cb); } function init(){ document.addEventListener("deviceready", phoneReady, false); //handle form submission of a new/old note $("#editNoteForm").live("submit",function(e) { var data = {title:$("#noteTitle").val(), body:$("#noteBody").val(), id:$("#noteId").val() }; saveNote(data,function() { $.mobile.changePage("index.html",{reverse:true}); }); e.preventDefault(); }); //will run after initial show - handles regetting the list $("#homePage").live("pageshow", function() { getEntries(); }); //edit page logic needs to know to get old record (possible) $("#editPage").live("pageshow", function() { var loc = $(this).data("url"); if(loc.indexOf("?") >= 0) { var qs = loc.substr(loc.indexOf("?")+1,loc.length); var noteId = qs.split("=")[1]; //load the values $("#editFormSubmitButton").attr("disabled","disabled"); dbShell.transaction( function(tx) { tx.executeSql("select id,title,body from notes where id=?",[noteId],function(tx,results) { $("#noteId").val(results.rows.item(0).id); $("#noteTitle").val(results.rows.item(0).title); $("#noteBody").val(results.rows.item(0).body); $("#editFormSubmitButton").removeAttr("disabled"); }); }, dbErrorHandler); } else { $("#editFormSubmitButton").removeAttr("disabled"); } }); } Dats my code, awfully long, huh? Well anyways I got most of it from here, however I get an error on line 67 saying "TypeError: 'undefined' is not a function.". I'm using Steroids (phonegap-like) and testing dis on an iPhone simulator. I'm sure it uses some cordova for the database work. Thank you for your help :-)

  • Losing sessions on GlassFish

    - by synti
    I have a web application that logs users in a @SessionScoped managed bean. It's all the basic stuff, pretty much like this: users logs in using regular http form and gets redirect to user area (wich is protected using a filter). But if any resource on that area is accessed, the request somehow uses a new session, wich has no managed bean, no user, and the filter does his job, redirecting him to login page. Here's the login form: <h:form> <h:outputLabel for="email" value="Email "/> <p:inputText id="email" size="30" value="#{loginManager.email}"/> <h:outputLabel for="password" value="Password "/> <p:password id="password" size="12" value="#{loginManager.password}"/> <p:commandButton value="Login" action="#{loginManager.login()}"/> </h:form> The loginManager managed bean: @ManagedBean @SessionScoped public class LoginManager implements Serializable { @EJB private UserService userService; private User user; private String email; private String password; public String login() { user = userService.findBy(email, password); if (user == null) { // FacesMessage stuff } else { return "/user/welcome.xhtml?faces-redirect=true"; } } public String logout() { FacesContext.getCurrentInstance().getExternalContext().invalidateSession(); return "/index.xhtml?faces-redirect=true"; } // Getters, setters (no setter for user) and serialVersionUID And then comes the filter that protects the user area: @WebFilter(urlPatterns="/user/*", displayName="UserFilter") public class UserFilter implements Filter { @Override public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { HttpSession session = ((HttpServletRequest)request).getSession(false); LoginManager loginManager = (LoginManager) session.getAttribute("loginManager"); if (loginManager == null || !loginManager.hasUser()) { HttpServletResponse resp = (HttpServletResponse) response; resp.sendRedirect("index.xhtml"); } final User user = loginManager.getUser(); if (user.isValid()) { chain.doFilter(request, response); } else { HttpServletResponse resp = (HttpServletResponse) response; resp.sendRedirect("index.xhtml"); } } The UserService is just a stateless EJB that handles persistence. Part of the JSF for user area: <h:form> <p:panelMenu> <p:submenu label="Items"> <p:menuitem value="Add item" action="#{userItens.addItems}" ajax="false"/> <p:menuitem value="My items" /> </p:submenu> </p:panelMenu> </h:form> And finally the userItens managed bean. @ManagedBean @RequestScoped public class UserItens { private User user; @PostConstruct private void init() { HttpSession session = (HttpSession) FacesContext.getCurrentInstance() .getExternalContext().getSession(false); LoginManager loginManager = (LoginManager) session.getAttribute("loginManager"); if (loginManager != null) user = loginManager.getUser(); } public String addItems() { // Doesn't get here. Seems like UserFilter comes first, doesn't find // an user and redirects. } I'm using glassfish and session timeout is now on 0.

  • C++/msvc6 application crashes due to heap corruption, any hints?

    - by David Alfonso
    Hello all, let me say first that I'm writing this question after months of trying to find out the root of a crash happening in our application. I'll try to detail as much as possible what I've already found out about it. About the application It runs on Windows XP Professional SP2. It's built with Microsoft Visual C++ 6.0 with Service Pack 6. It's MFC based. It uses several external dlls (e.g. Xerces, ZLib or ACE). It has high performance requirements. It does a lot of network and hard disk I/O, but it's also cpu intensive. It has an exception handling mechanism which generates a minidump when an unhandled exception occurs. Facts about the crash It only happens on multiprocessor/multicore machines and under heavy loads of work. It happens at random (neither we nor our client have found a pattern yet). We cannot reproduce the crash on our testing lab. It only happens on some production systems (but always in multicore machines) It always ends up crashing at the same point, although the complete stack is not always the same. Let me add the stack of the crashing thread (obtained using WinDbg, sorry we don't have symbols) ChildEBP RetAddr Args to Child WARNING: Stack unwind information not available. Following frames may be wrong. 030af6c8 7c9206eb 77bfc3c9 01a80000 00224bc3 MyApplication+0x2a85b9 030af960 7c91e9c0 7c92901b 00000ab4 00000000 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030af98c 7c9205c8 00000001 00000000 00000000 ntdll!ZwWaitForSingleObject+0xc (FPO: [3,0,0]) 030af9c0 7c920551 01a80898 7c92056d 313adfb0 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030afa8c 4ba3ae96 000307da 00130005 00040012 ntdll!RtlFreeHeap+0x1e9 (FPO: [Non-Fpo]) 030afacc 77bfc2e3 0214e384 3087c8d8 02151030 0x4ba3ae96 030afb00 7c91e306 7c80bfc1 00000948 00000001 msvcrt!free+0xc8 (FPO: [Non-Fpo]) 030afb20 0042965b 030afcc0 0214d780 02151218 ntdll!ZwReleaseSemaphore+0xc (FPO: [3,0,0]) 030afb7c 7c9206eb 02e6c471 02ea0000 00000008 MyApplication+0x2965b 030afe60 7c9205c8 02151248 030aff38 7c920551 ntdll!RtlAllocateHeap+0xeac (FPO: [Non-Fpo]) 030afe74 7c92056d 0210bfb8 02151250 02151250 ntdll!RtlpFreeToHeapLookaside+0x22 (FPO: [2,0,4]) 030aff38 77bfc2de 01a80000 00000000 77bfc2e3 ntdll!RtlFreeHeap+0x647 (FPO: [Non-Fpo]) 7c92056d c5ffffff ce7c94be ff7c94be 00ffffff msvcrt!free+0xc3 (FPO: [Non-Fpo]) 7c920575 ff7c94be 00ffffff 12000000 907c94be 0xc5ffffff 7c920579 00ffffff 12000000 907c94be 90909090 0xff7c94be *** WARNING: Unable to verify checksum for xerces-c_2_7.dll *** ERROR: Symbol file could not be found. Defaulted to export symbols for xerces-c_2_7.dll - 7c92057d 12000000 907c94be 90909090 8b55ff8b MyApplication+0xbfffff 7c920581 907c94be 90909090 8b55ff8b 08458bec xerces_c_2_7 7c920585 90909090 8b55ff8b 08458bec 04408b66 0x907c94be 7c920589 8b55ff8b 08458bec 04408b66 0004c25d 0x90909090 7c92058d 08458bec 04408b66 0004c25d 90909090 0x8b55ff8b The address MyApplication+0x2a85b9 corresponds to a call to erase() of a std::list. What I have tried so far Reviewing all the code related to the point where the crash ends happening. Trying to enable pageheap on our testing lab though nothing useful has been found by now. We have substituted the std::list for a C array and then it crashes in other part of the code (although it is related code, it's not in the code where the old list resided). Coincidentally, now it crashes in another erase, though this time of a std::multiset. 
Let me copy the stack contained in the dump: ntdll.dll!_RtlpCoalesceFreeBlocks@16() + 0x124e bytes ntdll.dll!_RtlFreeHeap@12() + 0x91f bytes msvcrt.dll!_free() + 0xc3 bytes MyApplication.exe!006a4fda() [Frames below may be incorrect and/or missing, no symbols loaded for MyApplication.exe] MyApplication.exe!0069f305() ntdll.dll!_NtFreeVirtualMemory@16() + 0xc bytes ntdll.dll!_RtlpSecMemFreeVirtualMemory@16() + 0x1b bytes ntdll.dll!_ZwWaitForSingleObject@12() + 0xc bytes ntdll.dll!_RtlpFreeToHeapLookaside@8() + 0x26 bytes ntdll.dll!_RtlFreeHeap@12() + 0x114 bytes msvcrt.dll!_free() + 0xc3 bytes c5ffffff() Possible solutions (that I'm aware of) which cannot be applied "Migrate the application to a newer compiler": We are working on this but It's not a solution at the moment. "Enable pageheap (normal or full)": We can't enable pageheap on production machines as this affects performance heavily. I think that's all I remember now, if I have forgotten something I'll add it asap. If you can give me some hint or propose some possible solution, don't hesitate to answer! Thank you in advance for your time and advice.
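    Not a root-cause answer, but one way to localize this kind of corruption in a debug build of the native modules is the CRT debug heap (crtdbg.h), which can validate the heap on every allocation or at explicit checkpoints, so the break happens at the corrupting call rather than at a later free(). A rough sketch (debug CRT only, the helper names are just for illustration, and _CRTDBG_CHECK_ALWAYS_DF is very slow):

        #ifdef _DEBUG
        #include <windows.h>
        #include <crtdbg.h>

        void EnableHeapChecks()
        {
            // Validate the CRT heap on every alloc/free.
            int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
            flags |= _CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF;
            _CrtSetDbgFlag(flags);
        }

        void HeapCheckpoint(const char* where)
        {
            // Cheaper alternative: call this around suspect code paths.
            if (!_CrtCheckMemory())
            {
                ::OutputDebugStringA(where);
                ::DebugBreak();
            }
        }
        #endif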

  • Can this be improved? Scrubbing of dangerous html tags.

    - by chobo2
    I been finding that for something that I consider pretty import there is very little information or libraries on how to deal with this problem. I found this while searching. I really don't know all the million ways that a hacker could try to insert the dangerous tags. I have a rich html editor so I need to keep non dangerous tags but strip out bad ones. So is this script missing anything? It uses html agility pack. public string ScrubHTML(string html) { HtmlDocument doc = new HtmlDocument(); doc.LoadHtml(html); //Remove potentially harmful elements HtmlNodeCollection nc = doc.DocumentNode.SelectNodes("//script|//link|//iframe|//frameset|//frame|//applet|//object|//embed"); if (nc != null) { foreach (HtmlNode node in nc) { node.ParentNode.RemoveChild(node, false); } } //remove hrefs to java/j/vbscript URLs nc = doc.DocumentNode.SelectNodes("//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//a[starts-with(translate(@href, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]"); if (nc != null) { foreach (HtmlNode node in nc) { node.SetAttributeValue("href", "#"); } } //remove img with refs to java/j/vbscript URLs nc = doc.DocumentNode.SelectNodes("//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'javascript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'jscript')]|//img[starts-with(translate(@src, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'vbscript')]"); if (nc != null) { foreach (HtmlNode node in nc) { node.SetAttributeValue("src", "#"); } } //remove on<Event> handlers from all tags nc = doc.DocumentNode.SelectNodes("//*[@onclick or @onmouseover or @onfocus or @onblur or @onmouseout or @ondoubleclick or @onload or @onunload]"); if (nc != null) { foreach (HtmlNode node in nc) { node.Attributes.Remove("onFocus"); node.Attributes.Remove("onBlur"); node.Attributes.Remove("onClick"); node.Attributes.Remove("onMouseOver"); node.Attributes.Remove("onMouseOut"); node.Attributes.Remove("onDoubleClick"); node.Attributes.Remove("onLoad"); node.Attributes.Remove("onUnload"); } } // remove any style attributes that contain the word expression (IE evaluates this as script) nc = doc.DocumentNode.SelectNodes("//*[contains(translate(@style, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz'), 'expression')]"); if (nc != null) { foreach (HtmlNode node in nc) { node.Attributes.Remove("stYle"); } } return doc.DocumentNode.WriteTo(); } Edit 2 people have suggested whitelisting. I actually like the idea of whitelisting but never actually did it because no one can actually tell me how to do it in C# and I can't even really find tutorials for how to do it in c#(the last time I looked. I will check it out again). How do you make a white list? Is it just a list collection? How do you actual parse out all html tags, script tags and every other tag? Once you have the tags how do you determine which ones are allowed? Compare them to you list collection? But what happens if the content is coming in and has like 100 tags and you have 50 allowed. You got to compare each of those 100 tag by 50 allowed tags. Thats quite a bit to go through and could be slow. Once you found a invalid tag how do you remove it? I don't really want to reject a whole set of text if one tag was found to be invalid. 
I'd rather remove the offending tag and insert the rest. Should I be using Html Agility Pack?

  • Error when opening .tar.gz via Shell to install Apache Maven

    - by adamsquared
    Thank you in advance for the help. My Goal: To install apache maven per its websites instructions (http://maven.apache.org/download.html), in order to install the JUNG package according to its install instructions (http://sourceforge.net/apps/trac/jung/wiki/JUNGManual), so I can use the JUNG classes in various Java GUIs. The Problem: I get an error message when I try to extract the apache-maven .gz (install?) file in shell. Background: I'm trying to install the JUNG (http://jung.sourceforge.net/index.html) package to my system's Java, so I can write object-oriented code using various GUIs (Ecliplse, Dr. Java) using the classes in JUNG. I don't understand how the building/installing process works, and how I can get what I build/install to work on various GUIs and the command line. I'm new to shell and the command line, and mostly have experience using a simple IDE (DrJava, Python IDLE, R GUI) to write and compile object-oriented code. Machine: Mac OSX 10.5.8 32-bit. The Instructions: For the maven building Extract the distribution archive, i.e. apache-maven-3.0.4-bin.tar.gz to the directory you wish to install Maven 3.0.4. These instructions assume you chose /usr/local/apache-maven. The subdirectory apache-maven-3.0.4 will be created from the archive. ... for the JUNG installation Appendix: How to Build JUNG This is a brief intro to building JUNG jars with maven2 (the build system that JUNG currently uses). First, ensure that you have a JDK of at least version 1.5: JUNG 2.0+ requires Java 1.5+. Ensure that your JAVA_HOME variable is set to the location of the JDK. On a Windows platform, you may have a separate JRE (Java Runtime Environment) and JDK (Java Development Kit). The JRE has no capability to compile Java source files, so you must have a JDK installed. If your JAVA_HOME variable is set to the location of the JRE, and not the location of the JDK, you will be unable to compile. Get Maven Download and install maven2 from maven.apache.org: http://maven.apache.org/download.html At time of writing (early December 2009), the latest version was maven-2.2.1. Install the downloaded maven2 (there are installation instructions on the Maven website). Follow the installation instructions and confirm a successful installation by typing 'mvn --version' in a command terminal window. Get JUNG ... What I Did: I downloaded the file apache-maven-2.2.1-bin.tar.gz. The JUNG website specified to use apache maven 2. I wanted to stick to the recommended installation instructions, but I couldn't get to /usr on my GUI (i've noticed you click on the MacHD symbol on the desktop its missing several directories/folders that you can see using the shell using the ls command at root directory I couldn't find a way to access the file using my mac GUI. Therefore, I used the shell to navigate to the root directory and then to /usr/local, and used the mkdir command to make the directory apache-maven and entered it. I then moved the file using the mv command. Next I tried extracting the file using tar -zxvf apache-maven-2.2.1-bin.tar.gz. The Error Message: tar: apache-maven-2.2.1/direcoryandfile: Cannot open: No such file or directory ... apache-maven-2.2.1/lib/ext: Cannot mkdir: No such file or directory apache-maven-2.2.1/lib/ext/README.txt tar: apache-maven-2.2.1/lib/ext/README.txt: Cannot open: No such file or directory tar: Error exit delayed from previous errors From what I can tell the archive file is missing some directories or something. 
I tried deleting the file, re-downloading the .tar.gz from a different mirror, and repeating the process. Same result. Thanks again for the help.

  • MVC SiteMap - when different nodes point to same action SiteMap.CurrentNode does not map to the correct route

    - by awrigley
    Setup: I am using ASP.NET MVC 4, with mvcSiteMapProvider to manage my menus. I have a custom menu builder that evaluates whether a node is on the current branch (ie, if the SiteMap.CurrentNode is either the CurrentNode or the CurrentNode is nested under it). The code is included below, but essentially checks the url of each node and compares it with the url of the currentnode, up through the currentnodes "family tree". The CurrentBranch is used by my custom menu builder to add a class that highlights menu items on the CurrentBranch. The Problem: My custom menu works fine, but I have found that the mvcSiteMapProvider does not seem to evaluate the url of the CurrentNode in a consistent manner: When two nodes point to the same action and are distinguished only by a parameter of the action, SiteMap.CurrentNode does not seem to use the correct route (it ignores the distinguishing parameter and defaults to the first route that that maps to the action defined in the node). Example of the Problem: In an app I have Members. A Member has a MemberStatus field that can be "Unprocessed", "Active" or "Inactive". To change the MemberStatus, I have a ProcessMemberController in an Area called Admin. The processing is done using the Process action on the ProcessMemberController. My mvcSiteMap has two nodes that BOTH map to the Process action. The only difference between them is the alternate parameter (such are my client's domain semantics), that in one case has a value of "Processed" and in the other "Unprocessed": Nodes: <mvcSiteMapNode title="Process" area="Admin" controller="ProcessMembers" action="Process" alternate="Unprocessed" /> <mvcSiteMapNode title="Change Status" area="Admin" controller="ProcessMembers" action="Process" alternate="Processed" /> Routes: The corresponding routes to these two nodes are (again, the only thing that distinguishes them is the value of the alternate parameter): context.MapRoute( "Process_New_Members", "Admin/Unprocessed/Process/{MemberId}", new { controller = "ProcessMembers", action = "Process", alternate="Unprocessed", MemberId = UrlParameter.Optional } ); context.MapRoute( "Change_Status_Old_Members", "Admin/Members/Status/Change/{MemberId}", new { controller = "ProcessMembers", action = "Process", alternate="Processed", MemberId = UrlParameter.Optional } ); What works: The Html.ActionLink helper uses the routes and produces the urls I expect: @Html.ActionLink("Process", MVC.Admin.ProcessMembers.Process(item.MemberId, "Unprocessed") // Output (alternate="Unprocessed" and item.MemberId = 12): Admin/Unprocessed/Process/12 @Html.ActionLink("Status", MVC.Admin.ProcessMembers.Process(item.MemberId, "Processed") // Output (alternate="Processed" and item.MemberId = 23): Admin/Members/Status/Change/23 In both cases the output is correct and as I expect. What doesn't work: Let's say my request involves the second option, ie, /Admin/Members/Status/Change/47, corresponding to alternate = "Processed" and a MemberId of 47. Debugging my static CurrentBranch property (see below), I find that SiteMap.CurrentNode shows: PreviousSibling: null Provider: {MvcSiteMapProvider.DefaultSiteMapProvider} ReadOnly: false ResourceKey: "" Roles: Count = 0 RootNode: {Home} Title: "Process" Url: "/Admin/Unprocessed/Process/47" Ie, for a request url of /Admin/Members/Status/Change/47, SiteMap.CurrentNode.Url evaluates to /Admin/Unprocessed/Process/47. Ie, it is ignorning the alternate parameter and using the wrong route. CurrentBranch Static Property: /// <summary> /// ReadOnly. 
Gets the Branch of the Site Map that holds the SiteMap.CurrentNode /// </summary> public static List<SiteMapNode> CurrentBranch { get { List<SiteMapNode> currentBranch = null; if (currentBranch == null) { SiteMapNode cn = SiteMap.CurrentNode; SiteMapNode n = cn; List<SiteMapNode> ln = new List<SiteMapNode>(); if (cn != null) { while (n != null && n.Url != SiteMap.RootNode.Url) { // I don't need to check for n.ParentNode == null // because cn != null && n != SiteMap.RootNode ln.Add(n); n = n.ParentNode; } // the while loop excludes the root node, so add it here // I could add n, that should now be equal to SiteMap.RootNode, but this is clearer ln.Add(SiteMap.RootNode); // The nodes were added in reverse order, from the CurrentNode up, so reverse them. ln.Reverse(); } currentBranch = ln; } return currentBranch; } } The Question: What am I doing wrong? The routes are interpreted by Html.ActionLlink as I expect, but are not evaluated by SiteMap.CurrentNode as I expect. In other words, in evaluating my routes, SiteMap.CurrentNode ignores the distinguishing alternate parameter.

  • Better, simpler example of 'semantic conflict'?

    - by rhubbarb
    I like to distinguish three different types of conflict from a version control system (VCS): textual syntactic semantic A textual conflict is one that is detected by the merge or update process. This is flagged by the system. A commit of the result is not permitted by the VCS until the conflict is resolved. A syntactic conflict is not flagged by the VCS, but the result will not compile. Therefore this should also be picked up by even a slightly careful programmer. (A simple example might be a variable rename by Left and some added lines using that variable by Right. The merge will probably have an unresolved symbol. Alternatively, this might introduce a semantic conflict by variable hiding.) Finally, a semantic conflict is not flagged by the VCS, the result compiles, but the code may have problems running. In mild cases, incorrect results are produced. In severe cases, a crash could be introduced. Even these should be detected before commit by a very careful programmer, through either code review or unit testing. My example of a semantic conflict uses SVN (Subversion) and C++, but those choices are not really relevant to the essence of the question. The base code is: int i = 0; int odds = 0; while (i < 10) { if ((i & 1) != 0) { odds *= 10; odds += i; } // next ++ i; } assert (odds == 13579) The Left (L) and Right (R) changes are as follows. Left's 'optimisation' (changing the values the loop variable takes): int i = 1; // L int odds = 0; while (i < 10) { if ((i & 1) != 0) { odds *= 10; odds += i; } // next i += 2; // L } assert (odds == 13579) Right's 'optimisation' (changing how the loop variable is used): int i = 0; int odds = 0; while (i < 5) // R { odds *= 10; odds += 2 * i + 1; // R // next ++ i; } assert (odds == 13579) This is the result of a merge or update, and is not detected by SVN (which is correct behaviour for the VCS). int i = 1; // L int odds = 0; while (i < 5) // R { odds *= 10; odds += 2 * i + 1; // R // next i += 2; // L } assert (odds == 13579) The assert fails because odds is 37. So my question is as follows. Is there a simpler example than this? Is there a simple example where the compiled executable has a new crash? As a secondary question, are there cases of this that you have encountered in real code? Again, simple examples are especially welcome.
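    One candidate shape for a simpler, crashing example is a contract change meeting a newly added caller: Left makes a lookup function return NULL for unknown keys and fixes the callers it knows about, while Right adds a caller written against the old never-NULL contract. A sketch of the merged result (Left's and Right's edits marked in comments; the names are illustrative):

        #include <cstdio>

        struct Widget { const char* name; };

        static Widget defaultWidget = { "default" };
        static Widget gadget        = { "gadget" };

        // Left's change: return 0 for unknown ids (the base version returned
        // &defaultWidget), with the existing callers updated to check for 0.
        Widget* find(int id) {
            return id == 1 ? &gadget : 0;
        }

        // Right's change: a new caller, added elsewhere in the file, that still
        // assumes find() never returns 0.
        void print_name(int id) {
            std::printf("%s\n", find(id)->name);
        }

        int main() {
            print_name(1);   // fine
            print_name(2);   // after the merge: null dereference, likely crash
            return 0;
        }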

  • Boost::Spirit::Qi autorules -- avoiding repeated copying of AST data structures

    - by phooji
    I've been using Qi and Karma to do some processing on several small languages. Most of the grammars are pretty small (20-40 rules). I've been able to use autorules almost exclusively, so my parse trees consist entirely of variants, structs, and std::vectors. This setup works great for the common case: 1) parse something (Qi), 2) make minor manipulations to the parse tree (visitor), and 3) output something (Karma). However, I'm concerned about what will happen if I want to make complex structural changes to a syntax tree, like moving big subtrees around. Consider the following toy example: A grammar for s-expr-style logical expressions that uses autorules... // Inside grammar class; rule names match struct names... pexpr %= pand | por | var | bconst; pand %= lit("(and ") >> (pexpr % lit(" ")) >> ")"; por %= lit("(or ") >> (pexpr % lit(" ")) >> ")"; pnot %= lit("(not ") >> pexpr >> ")"; ... which leads to parse tree representation that looks like this... struct var { std::string name; }; struct bconst { bool val; }; struct pand; struct por; struct pnot; typedef boost::variant<bconst, var, boost::recursive_wrapper<pand>, boost::recursive_wrapper<por>, boost::recursive_wrapper<pnot> > pexpr; struct pand { std::vector<pexpr> operands; }; struct por { std::vector<pexpr> operands; }; struct pnot { pexpr victim; }; // Many Fusion Macros here Suppose I have a parse tree that looks something like this: pand / ... \ por por / \ / \ var var var var (The ellipsis means 'many more children of similar shape for pand.') Now, suppose that I want negate each of the por nodes, so that the end result is: pand / ... \ pnot pnot | | por por / \ / \ var var var var The direct approach would be, for each por subtree: - create pnot node (copies por in construction); - re-assign the appropriate vector slot in the pand node (copies pnot node and its por subtree). Alternatively, I could construct a separate vector, and then replace (swap) the pand vector wholesale, eliminating a second round of copying. All of this seems cumbersome compared to a pointer-based tree representation, which would allow for the pnot nodes to be inserted without any copying of existing nodes. My question: Is there a way to avoid copy-heavy tree manipulations with autorule-compliant data structures? Should I bite the bullet and just use non-autorules to build a pointer-based AST (e.g., http://boost-spirit.com/home/2010/03/11/s-expressions-and-variants/)?
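    One pre-C++11 trick that sometimes reduces the copying with autorule-style value trees is to splice subtrees with swap() rather than assignment, so existing children are stolen instead of rebuilt; whether every copy disappears depends on how boost::variant implements swap for the types involved. A rough sketch against the structs above (assumes they are in scope; the helper name is arbitrary):

        #include <boost/variant.hpp>
        #include <cstddef>

        // Wrap the i-th operand of 'parent' (expected to hold a por) in a pnot,
        // reusing the existing subtree instead of copying it node by node.
        void negate_operand(pand& parent, std::size_t i)
        {
            pexpr wrapped = pnot();                                      // variant holding an empty pnot
            boost::get<pnot>(wrapped).victim.swap(parent.operands[i]);   // steal the por subtree
            parent.operands[i].swap(wrapped);                            // drop the pnot back into place
        }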

  • Multi-threaded .NET application blocks during file I/O when protected by Themida

    - by Erik Jensen
    As the title says I have a .NET application that is the GUI which uses multiple threads to perform separate file I/O and notice that the threads occasionally block when the application is protected by Themida. One thread is devoted to reading from serial COM port and another thread is devoted to copying files. What I experience is occasionally when the file copy thread encounters a network delay, it will block the other thread that is reading from the serial port. In addition to slow network (which can be transient), I can cause the problem to happen more frequently by making a PathFileExists call to a bad path e.g. PathFileExists("\\\\BadPath\\file.txt"); The COM port reading function will block during the call to ReadFile. This only happens when the application is protected by Themida. I have tried under WinXP, Win7, and Server 2012. In a streamlined test project, if I replace the .NET application with a MFC unmanaged application and still utilize the same threads I see no issue even when protected with Themida. I have contacted Oreans support and here is their response: The way that a .NET application is protected is very different from a native application. To protect a .NET application, we need to hook most of the file access APIs in order to "cheat" the .NET Framework that the application is protected. I guess that those special hooks (on CreateFile, ReadFile...) are delaying a bit the execution in your application and the problem appears. We did a test making those hooks as light as possible (with minimum code on them) but the problem still appeared in your application. The rest of software protectors that we tried (like Enigma, Molebox...) also use a similar hooking approach as it's the only way to make the .NET packed file to work. If those hooks are not present, the .NET Framework will abort execution as it will see that the original file was tampered (due to all Microsoft checks on .NET files) Those hooks are not present in a native application, that's why it should be working fine on your native application. Oreans support tried other software protectors such as Enigma Protector, Engima VirtualBox, and Molebox and all exhibit the exact same problem. What I have found as a work around is to separate out the file copy logic (where the file exists call is being made) to be performed in a completely separate process. I have experimented with converting the thread functions from unmanaged C++ to VB.NET equivalents (PathFileExists - System.IO.File.Exists and CreateFile/ReadFile - System.IO.Ports.SerialPort.Open/Read) and still see the same serial port read blocked when the file check or copy call is delayed. I have also tried setting the ReadFile to work asynchronously but that had no effect. I believe I am dealing with some low-level windows layer that no matter the language it exhibits a block on a shared resource -- and only when the application is executing under a single .NET process protected by Themida which evidently installs some hooks to allow .NET execution. At this time converting the entire application away from .NET is not an option. Nor is separating out the file copy logic to a separate task. I am wondering if anyone else has more knowledge of how a file operation can block another thread reading from a system port. I have included here example applications that show the problem: https://db.tt/cNMYfEIg - VB.NET https://db.tt/Y2lnTqw7 - MFC They are Visual Studio 2010 solutions. 
When running the Themida-protected exe, you can see that when the FileThread counter pauses (executing the File.Exists call), the ReadThread counter also pauses. When running the non-protected Visual Studio output exe, the ReadThread counter does not pause, which is how we expect it to function. Thanks!

  • Game AI: Pattern for implementing Sense-Think-Act components?

    - by Rosarch
    I'm developing a game. Each entity in the game is a GameObject. Each GameObject is composed of a GameObjectController, GameObjectModel, and GameObjectView. (Or inheritants thereof.) For NPCs, the GameObjectController is split into: IThinkNPC: reads current state and makes a decision about what to do IActNPC: updates state based on what needs to be done ISenseNPC: reads current state to answer world queries (eg "am I being in the shadows?") My question: Is this ok for the ISenseNPC interface? public interface ISenseNPC { // ... /// <summary> /// True if `dest` is a safe point to which to retreat. /// </summary> /// <param name="dest"></param> /// <param name="angleToThreat"></param> /// <param name="range"></param> /// <returns></returns> bool IsSafeToRetreat(Vector2 dest, float angleToThreat, float range); /// <summary> /// Finds a new location to which to retreat. /// </summary> /// <param name="angleToThreat"></param> /// <returns></returns> Vector2 newRetreatDest(float angleToThreat); /// <summary> /// Returns the closest LightSource that illuminates the NPC. /// Null if the NPC is not illuminated. /// </summary> /// <returns></returns> ILightSource ClosestIlluminatingLight(); /// <summary> /// True if the NPC is sufficiently far away from target. /// Assumes that target is the only entity it could ever run from. /// </summary> /// <returns></returns> bool IsSafeFromTarget(); } None of the methods take any parameters. Instead, the implementation is expected to maintain a reference to the relevant GameObjectController and read that. However, I'm now trying to write unit tests for this. Obviously, it's necessary to use mocking, since I can't pass arguments directly. The way I'm doing it feels really brittle - what if another implementation comes along that uses the world query utilities in a different way? Really, I'm not testing the interface, I'm testing the implementation. Poor. The reason I used this pattern in the first place was to keep IThinkNPC implementation code clean: public BehaviorState RetreatTransition(BehaviorState currentBehavior) { if (sense.IsCollidingWithTarget()) { NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.ATTACK.ToString(), "is colliding with target"); return BehaviorState.ATTACK; } if (sense.IsSafeFromTarget() && sense.ClosestIlluminatingLight() == null) { return BehaviorState.WANDER; } if (sense.ClosestIlluminatingLight() != null && sense.SeesTarget()) { NPCUtils.TraceTransitionIfNeeded(ToString(), BehaviorState.ATTACK.ToString(), "collides with target"); return BehaviorState.CHASE; } return currentBehavior; } Perhaps the cleanliness isn't worth it, however. So, if ISenseNPC takes all the params it needs every time, I could make it static. Is there any problem with that?

  • ScriptManager emits ScriptReferences after ClientScriptIncludes. Why?

    - by Chris F
    I have an application that uses a lot of javascript in a lot of different .js files. Each page can have any number and combination of different controls and each control can use a different set of js files. Instead of including all possible js files as <script src="xxx.js"> in the head of my master page, I thought I would reduce the amount of included files by getting each control to call ScriptManager.Scripts.Add(new ScriptReference("xxx.js")) - thus only the scripts that are actually needed by the page will be included. Well that bit works fine. But...the controls also use ScriptManager.RegisterClientScriptBlock fairly extensively too. (There are scenarios where the controls are inside update panels). I was disappointed to see that the ScriptManager emits the client script includes before the ScriptReferences. For example - see the test file and output at the end (this results in a JS error because $ is not defined). Am I being dumb or is this to be expected? I would have thought that a sensible thing for the ScriptManger to do would be to emit the ScriptReferences first and then the other stuff. Short of rolling my own ScriptManager like object to manage static JS file references, does anyone have any suggestions as to get the behaviour that I want from ScriptManager?? Thanks in advance. Example File <%@ Page Language="C#" AutoEventWireup="true" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <script runat="server"> protected void Page_Load(object sender, EventArgs e) { ScriptManager.RegisterClientScriptBlock(Page, Page.GetType(), "Test", "$(function() {alert('hello');});", true); } </script> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager runat="server" > <Scripts> <asp:ScriptReference Path="~/jquery/jquery.js" /> </Scripts> </asp:ScriptManager> </form> </body> </html> Output <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > ... boring stuff removed .... <script src="/js/WebResource.axd?d=-nOHbla bla " type="text/javascript"></script> <script type="text/javascript"> //<![CDATA[ $(function() {alert('hello');});//]]> </script> ... boring stuff removed .... <script src="/js/ScriptResource.axd?d=qgCbla bla bla" type="text/javascript"></script> <script src="jquery/jquery.js" type="text/javascript"></script> ... boring stuff removed .... </form> </body> </html>

    Read the article

  • Entity Framework Generic Repository Error

    - by Jeff Ancel
    I am trying to create a very generic generics repository for my Entity Framework repository that has the basic CRUD statements and uses an Interface. I have hit a brick wall head first and been knocked over. Here is my code, written in a console application, using a Entity Framework Model, with a table named Hurl. Simply trying to pull back the object by its ID. Here is the full application code. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data.Objects; using System.Linq.Expressions; using System.Reflection; using System.Data.Objects.DataClasses; namespace GenericsPlay { class Program { static void Main(string[] args) { var hs = new HurlRepository(new hurladminEntity()); var hurl = hs.Load<Hurl>(h => h.Id == 1); Console.Write(hurl.ShortUrl); Console.ReadLine(); } } public interface IHurlRepository { T Load<T>(Expression<Func<T, bool>> expression); } public class HurlRepository : IHurlRepository, IDisposable { private ObjectContext _objectContext; public HurlRepository(ObjectContext objectContext) { _objectContext = objectContext; } public ObjectContext ObjectContext { get { return _objectContext; } } private Type GetBaseType(Type type) { Type baseType = type.BaseType; if (baseType != null && baseType != typeof(EntityObject)) { return GetBaseType(type.BaseType); } return type; } private bool HasBaseType(Type type, out Type baseType) { Type originalType = type.GetType(); baseType = GetBaseType(type); return baseType != originalType; } public IQueryable<T> GetQuery<T>() { Type baseType; if (HasBaseType(typeof(T), out baseType)) { return this.ObjectContext.CreateQuery<T>("[" + baseType.Name.ToString() + "]").OfType<T>(); } else { return this.ObjectContext.CreateQuery<T>("[" + typeof(T).Name.ToString() + "]"); } } public T Load<T>(Expression<Func<T, bool>> whereCondition) { return this.GetQuery<T>().Where(whereCondition).First(); } public void Dispose() { if (_objectContext != null) { _objectContext.Dispose(); } } } } Here is the error that I am getting: System.Data.EntitySqlException was unhandled Message="'Hurl' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near escaped identifier, line 3, column 1." Source="System.Data.Entity" Column=1 ErrorContext="escaped identifier" ErrorDescription="'Hurl' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly." This is where I am attempting to extract this information from. http://blog.keithpatton.com/2008/05/29/Polymorphic+Repository+For+ADONet+Entity+Framework.aspx
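    In case it helps, the error usually means the Entity SQL string needs the entity set name (for example "Hurls" or "HurlSet", depending on how the model was generated) rather than the type name "Hurl" - hedged, since the actual set name depends on the model. Below is a sketch of an alternative GetQuery<T> that side-steps the string entirely, assuming the EF 4 ObjectContext.CreateObjectSet<T>() API is available:

        // Sketch only: CreateObjectSet<T> resolves the entity set from the model's metadata
        // instead of from a hand-built Entity SQL string such as "[Hurl]". Derived types
        // would still need the GetBaseType/OfType<T>() handling from the question, and the
        // "where T : class" constraint would have to be added to Load<T> and IHurlRepository.
        public IQueryable<T> GetQuery<T>() where T : class
        {
            return this.ObjectContext.CreateObjectSet<T>();
        }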

    Read the article

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know if there's any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or if I need to simply re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within the function. The generator expression at line 202 is

        neighbors_in_selected_nodes = (neighbor for neighbor in
                node_neighbors if neighbor in selected_nodes)

    and the generator expression at line 204 is

        neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                for neighbor in neighbors_in_selected_nodes)

    The source code for this function is provided below for context. selected_nodes is a set of nodes in the interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges. Its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)); however, I am still paying a large penalty for the lookups. Is there some way in which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within this function.

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            """Calculates a z'-score for a given node.

            The z'-score is based on the z-scores (weights) of the neighbors of the
            given node, and proportional to the z-score (weight) of the given node.
            Specifically, we find the maximum z-score of all neighbors of the given
            node that are also members of the given set of selected nodes, multiply
            this z-score by the z-score of the given node, and return this value as
            the z'-score for the given node.

            If the given node has no neighbors in the interaction graph, the z'-score
            is defined as zero.

            Returns the z'-score as zero or a positive floating point value.

            :Parameters:
            - `node`: the node for which to compute the z-prime score
            - `interaction_graph`: graph containing the gene-gene or gene
              product-gene product interactions
            - `selected_nodes`: a `set` of nodes fitting some criterion of interest
              (e.g., annotated with a term of interest)

            """
            node_neighbors = interaction_graph.neighbors_iter(node)
            neighbors_in_selected_nodes = (neighbor for neighbor in
                    node_neighbors if neighbor in selected_nodes)
            neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                    for neighbor in neighbors_in_selected_nodes)
            try:
                max_z_score = max(neighbor_z_scores)
            # max() throws a ValueError if its argument has no elements; in this
            # case, we need to set the max_z_score to zero
            except ValueError, e:
                # Check to make certain max() raised this error
                if 'max()' in e.args[0]:
                    max_z_score = 0
                else:
                    raise e

            z_prime = interaction_graph.node[node]['weight'] * max_z_score
            return z_prime

    Here are the top few calls according to cProfile, sorted by time.
        ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
        156067701  352.313  0.000    642.072  0.000    bpln_contextual.py:204(<genexpr>)
        156067701  289.759  0.000    289.759  0.000    bpln_contextual.py:202(<genexpr>)
        13963893   174.047  0.000    816.119  0.000    {max}
        13963885   69.804   0.000    936.754  0.000    bpln_contextual.py:171(calculate_node_z_prime)
        7116883    61.982   0.000    61.982   0.000    {method 'update' of 'set' objects}
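    For what it's worth, here is a hedged sketch of the usual first micro-optimization for a hot loop like this: hoist the repeated attribute and method lookups into locals and fold the two chained generators plus max() into one plain loop. The names follow the question's code; how much it helps is an assumption to be checked against the profiler, and it assumes (as the docstring suggests) that a node with no qualifying neighbors should score zero, which differs slightly from max() if weights can be negative.

        def calculate_node_z_prime(node, interaction_graph, selected_nodes):
            # Hoist repeated lookups out of the loop: each dot and dict access in the
            # original generator expressions is otherwise paid once per neighbor.
            node_attrs = interaction_graph.node          # node -> attribute dict
            neighbors_iter = interaction_graph.neighbors_iter

            max_z_score = 0
            for neighbor in neighbors_iter(node):
                if neighbor in selected_nodes:
                    z = node_attrs[neighbor]['weight']
                    if z > max_z_score:
                        max_z_score = z

            return node_attrs[node]['weight'] * max_z_score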

    Read the article

  • What does the `forall` keyword in Haskell/GHC do?

    - by JUST MY correct OPINION
    I've been banging my head on this one for (quite literally) years now. I'm beginning to kinda/sorta understand how the forall keyword is used in so-called "existential types" like this:

        data ShowBox = forall s. Show s => SB s

    (This despite the confusingly-worded explanations of it in the fragments found all around the web.) This is only a subset, however, of how forall is used and I simply cannot wrap my mind around its use in things like this:

        runST :: forall a. (forall s. ST s a) -> a

    Or explaining why these are different:

        foo :: (forall a. a -> a) -> (Char,Bool)
        bar :: forall a. ((a -> a) -> (Char, Bool))

    Or the whole RankNTypes stuff that breaks my brain when "explained" in a way that makes me want to do that Samuel L. Jackson thing from Pulp Fiction. (Don't follow that link if you're easily offended by strong language.) The problem, really, is that I'm a dullard. I can't fathom the chicken scratchings (some call them "formulae") of the elite mathematicians that created this language, seeing as my university years are over two decades behind me and I never actually had to put what I learnt into use in practice. I also tend to prefer clear, jargon-free English rather than the kinds of language which are normal in academic environments. Most of the explanations I attempt to read on this (the ones I can find through search engines) have these problems:

    - They're incomplete. They explain one part of the use of this keyword (like "existential types") which makes me feel happy until I read code that uses it in a completely different way (like runST, foo and bar above).
    - They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.)
    - They're written in ways that frequently turn even simple concepts into tortuously twisted and fractured grammar and semantics.

    (I suspect that the last two items are the biggest problem. I wouldn't know, though, since I'm too much a dullard to comprehend them.) It's been asked why Haskell never really caught on in industry. I suspect, in my own humble, unintelligent way, that my experience in figuring out one stupid little keyword -- a keyword that is increasingly ubiquitous in the libraries being written these days -- is also part of the answer to that question. It's hard for a language to catch on when even its individual keywords cause years-long quests to comprehend. Years-long quests which end in failure. So... On to the actual question. Can anybody completely explain the forall keyword in clear, plain English (or, if it exists somewhere, point to such a clear explanation which I've missed) that doesn't assume I'm a mathematician steeped in the jargon?
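    Since the snippets above are what the question keeps circling, here is a small, self-contained sketch (illustrative only; the comments give one informal reading, not a formal account) that puts the existential use and the foo/bar pair side by side:

        {-# LANGUAGE ExistentialQuantification, RankNTypes #-}

        -- Existential use: 'forall' before the data constructor hides the concrete type;
        -- all a consumer of a ShowBox knows about the payload is its Show instance.
        data ShowBox = forall s. Show s => SB s

        showIt :: ShowBox -> String
        showIt (SB x) = show x

        -- Rank-2 use (as in runST): the *argument* must itself be polymorphic, so the
        -- body of foo may use it at several different types.
        foo :: (forall a. a -> a) -> (Char, Bool)
        foo f = (f 'c', f True)

        -- Rank-1 use: the caller fixes 'a' once up front, so bar's body can only ever
        -- use its argument at that single type (here it simply ignores it).
        bar :: forall a. ((a -> a) -> (Char, Bool))
        bar _ = ('c', True)

        main :: IO ()
        main = do
            putStrLn (showIt (SB (3 :: Int)))
            print (foo id)
            print (bar (id :: Char -> Char))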

    Read the article

  • Opinions on Dual-Salt authentication for low sensitivity user accounts?

    - by Heleon
    EDIT - Might be useful for someone in the future... Looking around the bcrypt class in PHP a little more, I think I understand what's going on, and why bcrypt is secure. In essence, I create a random Blowfish salt, which contains the number of crypt rounds to perform during the encryption step, and which is then hashed using the crypt() function in PHP. There is no need for me to store the salt I used in the database, because it isn't needed separately in order to verify a password, and the only way to gain a password match to an email address (without knowing the salt values or number of rounds) would be to brute-force plain text passwords against the hash stored in the database, using the crypt() function to verify, which, if you've got a strong password, would just be more effort than it's worth for the user information I'm storing...

    I am currently working on a web project requiring user accounts. The application is CodeIgniter on the server side, so I am using Ion Auth as the authentication library. I have written an authentication system before, where I used 2 salts to secure the passwords. One was a server-wide salt which sat as an environment variable in the .htaccess file, and the other was a randomly generated salt which was created at user signup. This was the method I used in that authentication system for hashing the password:

        $chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

        //create a random string to be used as the random salt for the password hash
        $size = strlen($chars);
        for($i = 0; $i < 22; $i++) {
            $str .= $chars[rand(0, $size - 1)];
        }

        //create the random salt to be used for the crypt
        $r_blowfish_salt = "$2a$12$" . $str . "$";

        //grab the website salt
        $salt = getenv('WEBSITE_SALT');

        //combine the website salt, and the password
        $password_to_hash = $pwd . $salt;

        //crypt the password string using blowfish
        $password = crypt($password_to_hash, $r_blowfish_salt);

    I have no idea whether this has holes in it or not, but regardless, I moved over to Ion Auth for a more complete set of functions to use with CI. I noticed that Ion only uses a single salt as part of its hashing mechanism (although it does recommend that encryption_key is set in order to secure the database session). The information that will be stored in my database includes things like name, email address, location by country, some notes (with a recommendation that they not contain sensitive information), and a link to a Facebook, Twitter or Flickr account. Based on this, I'm not convinced it's necessary for me to have an SSL connection on the secure pages of my site. My question is, is there a particular reason why only 1 salt is being used as part of the Ion Auth library? Is it implied that I write my own additional salting in front of the functionality it provides, or am I missing something? Furthermore, is it even worth using 2 salts, or once an attacker has the random salt and the hashed password, are all bets off anyway? (I assume not, but it's worth checking if I'm worrying about nothing...)
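    To make the EDIT concrete, here is a hedged sketch of the verify-by-re-crypt idea described above (function names are made up, openssl_random_pseudo_bytes() assumes PHP 5.3+, and nothing here is Ion Auth's actual implementation):

        function bcrypt_hash($password, $cost = 12)
        {
            // Build a 22-character salt from the bcrypt alphabet [./0-9A-Za-z];
            // base64 can emit '+', which is not in that alphabet, so map it to '.'.
            $salt = substr(strtr(base64_encode(openssl_random_pseudo_bytes(16)), '+', '.'), 0, 22);
            return crypt($password, sprintf('$2a$%02d$', $cost) . $salt);
        }

        function bcrypt_check($password, $stored_hash)
        {
            // crypt() re-reads the cost and the salt from the stored hash itself,
            // which is why no separate salt column is required.
            return crypt($password, $stored_hash) === $stored_hash;
        }

    On PHP 5.5 and later, the built-in password_hash()/password_verify() pair covers the same ground with less room for error.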

    Read the article

  • Add objects to Arraylist inside loop and get a list of them outside loops

    - by AgusDG
    I'm already done with a method to do a shot on a board (a bidimensional array). The shot goes from the bottom to the top and, depending on the direction, it bounces off the walls to get to the top. The thing is that I did the method to represent the trajectory with an 'x'. Now, I want to add the coordinates x and y of each position of the shot (b[x][y]) to an ArrayList of Position objects.

        public Position(int row,int col) {
            this.row = row;
            this.col = col;
        }

    The thing is that the method uses a for loop with if blocks inside, and I'll need to create the objects inside and get them outside. I did this:

        public static ArrayList<Position> showTrayectory (char [][] b , int shotDirection, char bubble){
            int row = 0, col = 0;
            ArrayList<Position> aListPos = new ArrayList<Position>();
            Position positionsOfShot = new Position(row,col);
            START = ((RIGHT_WALL)/2) + shotDirection;
            boolean shotRight = false;
            if(shotDirection < 0)
                shotRight = false;
            else if(shotDirection > 0)
                shotRight = true;
            for(int y = BOTTOM,x = START ;y >= 0;y--) {
                if(!isOut(y,x) && !emptyCell(y,x))
                    break;
                if(x <= LEFT_WALL)
                    shotRight = true;
                if(x >= RIGHT_WALL)
                    shotRight = false;
                if(!isOut(y,x) && shotRight == true) {
                    positionsOfShot = new Position(y,x);
                    aListPos.add(positionsOfShot);
                    b[y][x] = SHOT;
                    ++x;
                }
                if(!isOut(y,x) && shotRight == false){
                    positionsOfShot = new Position(y,x);
                    aListPos.add(positionsOfShot);
                    b[y][x] = SHOT;
                    --x;
                }
            }

            // The nested for loops below are for showing the positions
            // But I dont need it that way
            // I must get the trayectory from an ArrayList and print it from there
            for(int y=0;y < b.length;y++){
                System.out.println();
                for(int x=0;x < b[y].length;x++){
                    System.out.print(" "+b [y][x]+" ");
                }
            }

            System.out.println("\nTrayectory of the shot ["+shotDirection+"]");
            System.out.println("Next bubble ["+bubble+"]");
            for( Position ii : aListPos){
                System.out.println("(" + positionsOfShot.getFila() + "," + positionsOfShot.getColumna()+")");
            }
            return aListPos;
        }

    The statement "b[y][x] = SHOT;" is still there to show the proper trajectory of the shot (it's not needed that way), but what I need is to get the trajectory into an ArrayList and print the trajectory from there. All I get is a wrong position, repeated as many times as the number of positions the shot goes through. I need some help. I suppose the problem is that I'm creating and adding Position objects to an ArrayList inside loops, but in a wrong way. I'd need you to explain to me how to do it properly ;) Thanks in advance. I'll add the output so you can see better what's going on above, haha.

        *************************** y b y b g r b g o y g a a r y o y y r b y g r r o b o y y g b a r y r o a y y o o r r g r - - - x - - - - - - - - - x - - - - - - - - - x - - - - - - - - - x - - - - - - - - - x - - - - - - - - - x - - - - - - - x - - - - - - - x - - - - - - - x - - -
        Trayectory of the shot [1]
        Next bubble [y]
        (5,3)
        (5,3)
        (5,3)
        (5,3)
        (5,3)
        (5,3)
        (5,3)
        (5,3)
        (5,3)
        Action?
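    A minimal, self-contained sketch (not the game code; the bounce logic is replaced by a stand-in) of the pattern being attempted - create a new Position inside the loop, add it to the ArrayList, and then print by reading the for-each loop variable rather than whichever object some other variable last pointed to:

        import java.util.ArrayList;
        import java.util.List;

        public class PositionListDemo {

            static class Position {
                private final int row;
                private final int col;
                Position(int row, int col) { this.row = row; this.col = col; }
                int getRow() { return row; }
                int getCol() { return col; }
            }

            public static void main(String[] args) {
                List<Position> trajectory = new ArrayList<Position>();

                // Create a fresh Position on every iteration and collect it.
                for (int y = 5; y >= 0; y--) {
                    int x = y % 3; // stand-in for the real bounce calculation
                    trajectory.add(new Position(y, x));
                }

                // Read each element through the loop variable 'p'; printing some other
                // variable here would show one position repeated over and over.
                for (Position p : trajectory) {
                    System.out.println("(" + p.getRow() + "," + p.getCol() + ")");
                }
            }
        }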

    Read the article

  • Why do some Flask session values disappear from the session after closing the browser window, but then reappear later without me adding them?

    - by Ben
    So my understanding of Flask sessions is that I can use them like dictionaries and add values to a session by doing:

        session['key name'] = 'some value here'

    And that works fine. On a route I have the client call using an AJAX post, I assign a value to the session. And it works fine. I can click on various pages of my site and the value stays in the session. If I close the browser window, however, and then go back to my site, the session value I had in there is gone. So that's weird, and you would think the problem is that the session isn't permanent. I also implemented Flask-OpenID, and that uses the session to store information which does persist if I close the browser window and open it back up again. I also checked the cookie after closing the browser window, but before going back to my site, and the cookie is indeed still there. Another odd piece of behaviour (which may be related) is that some values I have written to the session for testing purposes will go away when I access the AJAX post route and assign the correct value. So that is odd, but what is truly weird is that when I then close the browser window and open it up again, having thus lost the value I was trying to retain, the ones that I lost previously actually return! They aren't being reassigned, because there's no code in my Python files to reassign those values. Here are some outputs to help make it clearer. They are all output from a route for a real page, and not the AJAX post route I mentioned above. This is the output after I have assigned the value I want to store in the session. The key is 'userid' - all the other values are dummy ones I have added in trying to solve this problem. 'userid': 8 will stay in the session as long as I don't close the browser window. I can access other routes and the value will stay there just like it should.

        ['session.=', <SecureCookieSession {'userid': 8, 'test_variable_num': 102, 'adding using before request': 'hi', '_permanent': True, 'test_variable_text': 'hi!'}>]

    If I do close the browser window, and go back into the site, but without redoing the AJAX post request, I get this output:

        ['session.=', <SecureCookieSession {'adding using before request': 'hi', '_permanent': True, 'yo': 'yo'}>]

    The 'yo' value was not in the first output. I don't know where it came from. I searched my code for 'yo' and there are no instances of me assigning that value anywhere. I think I may have added it to the session days ago. So it seems like it is persisting, but being hidden when the other values are written. And this last one is me accessing the AJAX post route again, and then going to the page that prints out the keys using debug. Same output as the first output I pasted above, which you would expect, and the 'yo' value is gone again (but it will come back if I close the browser window).

        ['session.=', <SecureCookieSession {'userid': 8, 'test_variable_num': 102, 'adding using before request': 'hi', '_permanent': True, 'test_variable_text': 'hi!'}>]

    I tested this in both Chrome and Firefox. So I find this all weird, and I am guessing it stems from a misunderstanding of how sessions work. I think they're dictionaries and I can write dictionary values into them and retrieve them days later, as long as I set the session to permanent and the cookie doesn't get deleted. Any ideas why this weird behaviour is happening?
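    For reference, a hedged sketch of the two settings most often involved when Flask session values do or don't survive a browser restart - a stable SECRET_KEY and an explicitly permanent session. The configuration values and route below are illustrative assumptions, not taken from the question's application:

        from datetime import timedelta

        from flask import Flask, session

        app = Flask(__name__)
        # A key that changes between process restarts silently invalidates old cookies.
        app.secret_key = 'replace-with-a-fixed-random-value'
        app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(days=31)

        @app.before_request
        def make_session_permanent():
            # Without this, Flask issues a browser-session cookie, which many browsers
            # discard when the window is closed.
            session.permanent = True

        @app.route('/ajax-store', methods=['POST'])
        def ajax_store():
            session['userid'] = 8  # key name mirrors the question's output
            return 'stored'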

    Read the article

  • RackSpace Cloud Strips $_SESSION if URL Has Certain File Extensions

    - by macinjosh
    The Situation

    I am creating a video training site for a client on the RackSpace Cloud using the traditional LAMP stack (RackSpace's cloud has both Windows and LAMP stacks). The videos and other media files I'm serving on this site need to be protected, as my client charges money for access to them. There is no DRM or funny business like that; essentially we store the files outside of the web root and use PHP to authenticate users before they are able to access the files, by using mod_rewrite to run the request through PHP. So let's say the user requests a file at this URL:

        http://www.example.com/uploads/preview_image/29.jpg

    I am using mod_rewrite to rewrite that URL to:

        http://www.example.com/files.php?path=%2Fuploads%2Fpreview_image%2F29.jpg

    Here is a simplified version of the files.php script:

        <?php
        // Sets up the environment and sets $logged_in
        // This part requires $_SESSION
        require_once('../../includes/user_config.php');

        if (!$logged_in) {
            // Redirect non-authenticated users
            header('Location: login.php');
        }

        // This user is authenticated, continue
        $content_type = "image/jpeg";

        // getAbsolutePathForRequestedResource() takes
        // a Query Parameter called path and uses DB
        // lookups and some string manipulation to get
        // an absolute path. This part doesn't have
        // any bearing on the problem at hand
        $file_path = getAbsolutePathForRequestedResource($_GET['path']);

        // At this point $file_path looks something like
        // this: "/path/to/a/place/outside/the/webroot"
        if (file_exists($file_path) && !is_dir($file_path)) {
            header("Content-Type: $content_type");
            header('Content-Length: ' . filesize($file_path));
            echo file_get_contents($file_path);
        } else {
            header('HTTP/1.0 404 Not Found');
            header('Status: 404 Not Found');
            echo '404 Not Found';
        }

        exit();
        ?>

    The Problem

    Let me start by saying this works perfectly for me. On local test machines it works like a charm. However, once deployed to the cloud it stops working. After some debugging, it turns out that if a request to the cloud has certain file extensions, like .JPG, .PNG, or .SWF (i.e. the extensions of typical static media files), the request is routed to a caching system called Varnish. The end result of this routing is that by the time this whole process makes it to my PHP script, the session is not present. If I change the extension in the URL to .PHP, or if I even add a query parameter, Varnish is bypassed and the PHP script can get the session. No problem, right? I'll just add a meaningless query parameter to my requests! Here is the rub: the media files I am serving through this system are being requested through compiled SWF files that I have zero control over. They are generated by third-party software and I have no hope of adding or changing the URLs that they request. Are there any other options I have here? Update: I should note that I have verified this behavior with RackSpace support and they have said there is nothing they can do about it.
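    For reference, a sketch of the kind of mod_rewrite rule described above (the pattern and flags here are assumptions for illustration, not the site's actual .htaccess):

        # Send anything under /uploads/ through the PHP gatekeeper instead of
        # serving it directly.
        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/uploads/
        RewriteRule ^uploads/(.*)$ files.php?path=/uploads/$1 [QSA,L]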

    Read the article
