Search Results

Search found 21885 results on 876 pages for 'radix point'.

Page 173/876

  • PHP script stops running arbitrarily with no errors.

    - by RobHardgood
    I was working on a fairly long script that seemed to stop running after about 20 minutes. After days of trying to figure out why, I decided to make a very simple script to see how long it would run without any complex code to confuse me. I found that the same thing was happening with this simple infinite loop: at some point between 15 and 25 minutes of running, it simply stopped. There is no error output at all. The bottom of the browser simply says "Done" and nothing else is processed.

    I've been over every single possible thing I could think of... set_time_limit, session.gc_maxlifetime in the php.ini, as well as memory_limit and max_execution_time. The point at which the script stops is never consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes, sometimes 17... But it's always long enough to be a PITA to test.

    Please, any help would be greatly appreciated. It is hosted on a 1and1 server. I contacted them and they couldn't help me, but I have a feeling they just didn't know enough.

    Read the article

  • Flushing writes in buffer of Memory Controller to DDR device

    - by Rohit
    At some point in my code, I need to push the writes in my code all the way to the DIMM or DDR device. My requirement is to ensure the write reaches the row, bank, and column of the DDR device on the DIMM. I need to read back what I've written to main memory, and I do not want caching to give me the value: after writing, I want to fetch this value from main memory (the DIMMs).

    So far I've been using Intel's x86 instruction wbinvd (write back and invalidate cache). However, this means the caches and TLB are flushed and write-back requests go to main memory. There is still a reasonable amount of time this data might reside in the write buffer of the memory controller (Intel calls it the integrated memory controller, or IMC), and the memory controller might take some more time depending on the algorithm it runs to handle writes.

    Is there a way I can force all existing or pending writes in the write buffer of the memory controller out to the DRAM devices? What I am looking for is something more direct and lower-level than wbinvd. If you could point me to the right documents or specs that describe this I would be grateful. Generally, the IMC has several registers which can be written or read; from looking at the specs for the chipset I could not find anything useful. Thanks for taking the time to read this.
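    A finer-grained route than wbinvd is to flush only the cache lines you care about with clflush and then fence. A minimal sketch follows; the 64-byte line size is an assumption, and note that this still only guarantees the data has left the CPU caches, not that the IMC's internal write buffer has drained:

        #include <immintrin.h>
        #include <cstddef>
        #include <cstdint>

        const std::size_t CACHE_LINE = 64;   // assumption: 64-byte cache lines

        void flush_range(const void* addr, std::size_t len) {
            std::uintptr_t p = reinterpret_cast<std::uintptr_t>(addr)
                               & ~static_cast<std::uintptr_t>(CACHE_LINE - 1);
            std::uintptr_t end = reinterpret_cast<std::uintptr_t>(addr) + len;
            for (; p < end; p += CACHE_LINE)
                _mm_clflush(reinterpret_cast<const void*>(p));   // evict one line to memory
            _mm_sfence();                                        // order the flushes before later accesses
        }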

    Read the article

  • Potential problem with C standard malloc'ing chars.

    - by paxdiablo
    When answering a comment to another answer of mine here, I found what I think may be a hole in the C standard (c1x; I haven't checked the earlier ones and yes, I know it's incredibly unlikely that I alone among all the planet's inhabitants have found a bug in the standard). Information follows:

    1. Section 6.5.3.4 ("The sizeof operator") para 2 states "The sizeof operator yields the size (in bytes) of its operand".
    2. Para 3 of that section states: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1".
    3. Section 7.20.3.3 describes void *malloc(size_t sz) but all it says is "The malloc function allocates space for an object whose size is specified by size and whose value is indeterminate". It makes no mention at all of what units are used for the argument.
    4. Annex E states that 8 is the minimum value for CHAR_BIT, so chars can be more than eight bits in length.

    My question is simply this: in an environment where a char is 16 bits wide, will malloc(10 * sizeof(char)) allocate 10 chars (20 bytes) or 10 bytes? Point 1 above seems to indicate the former, point 2 indicates the latter. Anyone with more C-standard-fu than me have an answer for this?
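    For what it's worth, a minimal sketch of the usual reading: the standard defines a "byte" so that sizeof(char) is 1 and CHAR_BIT simply says how many bits such a byte holds, and malloc counts in those same bytes.

        #include <climits>
        #include <cstdio>
        #include <cstdlib>

        int main() {
            // sizeof(char) is 1 by definition, and malloc's size argument is
            // measured in those same C "bytes" (i.e. chars), so this requests
            // room for exactly 10 chars however many bits CHAR_BIT reports.
            char* p = static_cast<char*>(std::malloc(10 * sizeof(char)));  // same request as malloc(10)
            std::printf("CHAR_BIT = %d, sizeof(char) = %zu\n", CHAR_BIT, sizeof(char));
            std::free(p);
            return 0;
        }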

    Read the article

  • Adobe flex dateField

    - by JonoB
    I have some code as follows:

        private function onComboChange(evt:Event):void {
            var temp:Date = df_date.selectedDate;
            temp.date += 5;
            df_dateDue.selectedDate = new Date(temp);
        }

    In essence, I am trying to add 5 days onto the selected date in df_date, and put that date into df_dateDue. This fires off via an EventListener on a combobox. Both df_date and df_dateDue are dateFields.

    OK, so the first time that I run this, it works fine; df_date stays the same and df_dateDue is set to 5 days past df_date. However, the next time that I run it, df_dateDue increments by 10 days from df_date, the next time by 15, and so on. So, stepping through the code shows that somehow df_date has become linked to the temp var, and that the temp var is not resetting itself each time the function is called.

    Example: df_date = 01 Jan, df_dateDue = 01 Jan.
    Fire off the event: df_date = 01 Jan, df_dateDue = 06 Jan.
    Fire off the event again. At this point, var temp = 06 Jan (even though df_date still shows 01 Jan), and df_dateDue is then set to 11 Jan.
    Fire off the event again. At this point var temp = 11 Jan (even though df_date = 01 Jan), and df_dateDue is then set to 16 Jan.

    What am I missing here?

    Read the article

  • Glitch when moving camera in OpenGL

    - by CG
    I am writing a tile-based game engine for the iPhone and it works in general apart from the following glitch. Basically, the camera will always keep the player in the centre of the screen, and it moves to follow the player correctly and draws everything correctly when stationary. However, whilst the player is moving, the tiles of the surface the player is walking on glitch, compared to the stationary (correct) rendering. Does anyone have any idea why this could be?

    Thanks for the responses so far. Floating point error was my first thought also, and I tried slightly increasing the size of the tiles, but this did not help. Changing glClearColor to red still leaves black gaps, so maybe it isn't floating point error. Since the tiles in general will use different textures, I don't know if vertex arrays can be used (I always thought that the same texture had to be applied to everything in the array, correct me if I'm wrong), and I don't think VBO is available in OpenGL ES. Setting the filtering to nearest neighbour improved things, but the glitch still happens every ten frames or so, and the pixelly result means that this solution is not viable anyway.

    The main difference between what I'm doing now and what I've done in the past is that this time I am moving the camera rather than the stationary objects in the world (i.e. the tiles; the player is still being moved). The code I'm using to move the camera is:

        void Camera::CentreAtPoint( GLfloat x, GLfloat y )
        {
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            glOrthof(x - size.x / 2.0f, x + size.x / 2.0f,
                     y + size.y / 2.0f, y - size.y / 2.0f, 0.01f, 5.0f);
            glMatrixMode(GL_MODELVIEW);
        }

    Is there a problem with doing things this way and if so is there a solution?
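    One thing that sometimes cures scrolling seams like this is snapping the camera centre to the pixel grid each frame, so tile edges never land on fractional pixels. A hedged sketch; pixelsPerUnit (how many screen pixels one world unit covers) is an assumption about the engine, not something from the code above:

        #include <cmath>

        // Round a world-space coordinate to the nearest whole pixel.
        float snapToPixel(float v, float pixelsPerUnit)
        {
            return std::floor(v * pixelsPerUnit + 0.5f) / pixelsPerUnit;
        }

        // Before building the projection each frame, something like:
        //   camera.CentreAtPoint(snapToPixel(playerX, ppu), snapToPixel(playerY, ppu));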

    Read the article

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs over several thousand iterations between runs and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine) but every following run adds about 80kb. My main loop looks like this (very simplified):

        $users = UsersModel::getUsers();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }

    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said it is a huge stack of code.

    Ideally, I would love to find some sort of tool that displays all my variables, no matter the scope, at some point during execution: some sort of symbol-table viewer for PHP. Does anything like that, or any other tools that could help nail down memory leaks in PHP, exist?

    Read the article

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue is giving me pause. One area where I struggle not to use the Singleton is the Identity Map / Unit of Work pattern (which keeps tabs on Domain Object state).

        // Not actual code, but it should demonstrate the point
        class Monitor { // singleton construction omitted for brevity
            static $members = array(); // keeps record of all objects
            static $dirty   = array(); // keeps record of all modified objects
            static $clean   = array(); // keeps record of all clean objects
        }

        class Mapper { // queries database, maps values to object fields
            public function find($id) {
                if (isset(Monitor::$members[$id])) {
                    return Monitor::$members[$id];
                }
                $values = $this->selectStmt($id);
                // field mapping process omitted for brevity
                $Object = new Object($values);
                Monitor::$new[$id] = $Object;
                return $Object;
            }
        }

        $User = $UserMapper->find(1);   // domain object is registered in Id Map
        $User->changePropertyX();       // object is marked "dirty" in UoW

        // at this point, I can save by passing the Domain Object back to the Mapper
        $UserMapper->save($User);       // object is marked clean in UoW

        // but a nicer API would be something like this
        $User->save();

        // but if I want to do this - it has to make a call to the mapper/db somehow
        $User->getBlogPosts();

        // or else I have to generate specific collection/object graphing methods in the mapper
        $UserPosts = $UserMapper->getBlogPosts();
        $User->setPosts($UserPosts);

    Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI; at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour, then a facility to do so is required in its construction. Perhaps it's a problem with responsibility and the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern; it would be nice to implement it in some way.

    Read the article

  • Life Scope of Temporary Variable

    - by Yan Cheng CHEOK
    #include <cstdio>
        #include <cstdio>
        #include <string>

        void fun(const char* c) {
            printf("--> %s\n", c);
        }

        std::string get() {
            std::string str = "Hello World";
            return str;
        }

        int main() {
            const char *cc = get().c_str();
            // cc is not valid at this point, as it is pointing to the
            // temporary string's internal buffer, and the temporary string
            // has already been destroyed at this point.
            fun(cc);

            // But I am surprised this call yields a valid result.
            // It seems that the returned temporary string is valid within
            // scope (...)
            // My understanding is that scope means {...}
            // Is this valid behaviour guaranteed by the C++ standard? Or does it
            // depend on your compiler vendor's implementation?
            fun(get().c_str());

            getchar();
        }

    The output is:

        -->
        --> Hello World

    Hello, may I know whether the correct behaviour is guaranteed by the C++ standard, or whether it depends on your compiler vendor's implementation? I have tested this under VC2008 and VC6. It works fine for both.
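    The relevant rule: a temporary object lives until the end of the full-expression that creates it, which is why fun(get().c_str()) works inside that single call while the pointer saved in cc dangles as soon as the first statement finishes. A small sketch of the safe pattern, keeping the buffer alive by naming the string:

        #include <cstdio>
        #include <string>

        std::string get() { return "Hello World"; }

        int main() {
            std::string s = get();        // the temporary is copied/moved into a named object
            const char* cc = s.c_str();   // valid for as long as s lives
            std::printf("--> %s\n", cc);
            return 0;
        }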

    Read the article

  • Spring Security ACL: NotFoundException from JDBCMutableAclService.createAcl

    - by user340202
    Hello, I've been working on this task for too long to abandon the idea of using Spring Security to achieve it, but I hope the community will provide some support that will help reduce the regret I have for choosing Spring Security. Enough ranting, now let's get to the point. I'm trying to create an ACL by using JDBCMutableAclService.createAcl as follows:

        public void addPermission(IWFArtifact securedObject, Sid recipient,
                                  Permission permission, Class clazz) {
            ObjectIdentity oid = new ObjectIdentityImpl(clazz.getCanonicalName(), securedObject.getId());
            this.addPermission(oid, recipient, permission);
        }

        @Override
        @Transactional(propagation = Propagation.REQUIRED, isolation = Isolation.READ_UNCOMMITTED, readOnly = false)
        public void addPermission(ObjectIdentity oid, Sid recipient, Permission permission) {
            SpringSecurityUtils.assureThreadLocalAuthSet();
            MutableAcl acl;
            try {
                acl = this.mutableAclService.createAcl(oid);
            } catch (AlreadyExistsException e) {
                acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            }
            // try {
            //     acl = (MutableAcl) this.mutableAclService.readAclById(oid);
            // } catch (NotFoundException nfe) {
            //     acl = this.mutableAclService.createAcl(oid);
            // }
            acl.insertAce(acl.getEntries().length, permission, recipient, true);
            this.mutableAclService.updateAcl(acl);
        }

    The call throws a NotFoundException from the line:

        // Retrieve the ACL via superclass (ensures cache registration, proper retrieval etc)
        Acl acl = readAclById(objectIdentity);

    I believe this is caused by something related to @Transactional, and that's why I have tested with many TransactionDefinition attributes. I have also doubted the annotation and tried declarative transaction definition, but still with no luck. One important point: I took the statement used to insert the oid into the database, ran it directly against the database earlier, and it worked; it then threw a unique constraint exception at me when the method tried to insert it.

    I'm using Spring Security 2.0.8 and IceFaces 1.8 (which doesn't support Spring 3.0 but definitely supports 2.0.x, especially when I keep calling SpringSecurityUtils.assureThreadLocalAuthSet()). My app server is Tomcat 6.0, and my DB server is MySQL 6.0. I hope to get a reply soon because I need to get this task out of the way.

    Read the article

  • Creation of Objects: Constructors or Static Factory Methods

    - by Rachel
    I am going through Effective Java and some of the things I consider standard are not suggested by the book. For instance, for the creation of objects, I was under the impression that constructors are the best way of doing it, and the book says we should make use of static factory methods. I am not able to follow a few of the advantages and disadvantages, so I am asking this question. Here are the benefits of using them.

    Advantages:
    1. One advantage of static factory methods is that, unlike constructors, they have names.
    2. A second advantage of static factory methods is that, unlike constructors, they are not required to create a new object each time they're invoked.
    3. A third advantage of static factory methods is that, unlike constructors, they can return an object of any subtype of their return type.
    4. A fourth advantage of static factory methods is that they reduce the verbosity of creating parameterized type instances. I am not able to understand this advantage and would appreciate it if someone could explain this point.

    Disadvantages:
    1. The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed.
    2. A second disadvantage of static factory methods is that they are not readily distinguishable from other static methods. I am not getting this point either and so would really appreciate some explanation.

    Reference: Effective Java, Joshua Bloch, Edition 2, pg: 5-10

    Also, how do I decide whether to go for a constructor or a static factory method for object creation?

    Read the article

  • JSON parse data in javascript from php

    - by Stefania
    I'm trying to retrieve data in a JavaScript file from a PHP file using JSON.

        $items = array();
        while ($r = mysql_fetch_array($result)) {
            $rows = array(
                "id_locale"   => $r['id_locale'],
                "latitudine"  => $r['lat'],
                "longitudine" => $r['lng']
            );
            array_push($items, array("item" => $rows));
        }
        echo json_encode($items);

    And in the JavaScript file I try to recover the data using an Ajax call:

        $.ajax({
            type: "POST",
            url: "Locali.php",
            success: function(data) {
                alert("1");
                //var obj = jQuery.parseJSON(idata);
                var json = JSON.parse(data);
                alert("2");
                for (var i = 0; i < json.length; i++) {
                    point = new google.maps.LatLng(json[i].item.latitudine, json[i].item.longitudine);
                    alert(point);
                }
            }
        })

    The first alert is printed, the latter is not; it gives me the error "Unexpected token <"... but I do not understand what it is. Anyone have any idea where I am going wrong? I also tried to recover the data with jQuery but with no positive results.

    Read the article

  • caching previous return values of procedures in scheme

    - by Brock
    In Chapter 16 of "The Seasoned Schemer", the authors define a recursive procedure "depth", which returns 'pizza nested in n lists, e.g. (depth 3) is (((pizza))). They then improve it as "depthM", which caches its return values using set! in the lists Ns and Rs, which together form a lookup table, so you don't have to recurse all the way down if you reach a return value you've seen before. E.g. if I've already computed (depthM 8), when I later compute (depthM 9), I just look up the return value of (depthM 8) and cons it onto null, instead of recursing all the way down to (depthM 0).

    But then they move the Ns and Rs inside the procedure, and initialize them to null with "let". Why doesn't this completely defeat the point of caching the return values? From a bit of experimentation, it appears that the Ns and Rs are being reinitialized on every call to "depthM". Am I misunderstanding their point?

    I guess my question is really this: is there a way in Scheme to have lexically-scoped variables preserve their values in between calls to a procedure, like you can do in Perl 5.10 with "state" variables?

    Read the article

  • Providing custom database functionality to custom asp.net membership provider

    - by IrfanRaza
    Hello friends, I am creating a custom membership provider for my ASP.NET application. I have also created a separate class, "DBConnect", that provides database functionality such as executing SQL statements, executing SPs, executing SPs or queries and returning a SqlDataReader, and so on. I have created an instance of the DBConnect class within Session_Start of Global.asax and stored it in a session. Later, using a static class, I provide the database functionality throughout the application using that same single session. In short, I am providing a single point for all database operations from any ASP.NET page.

    I know that I can write my own code to connect/disconnect the database and execute SPs within the methods I need to override. Please look at the code below:

        public class SGI_MembershipProvider : MembershipProvider {
            ......
            public override bool ChangePassword(string username, string oldPassword, string newPassword)
            {
                if (!ValidateUser(username, oldPassword))
                    return false;

                ValidatePasswordEventArgs args = new ValidatePasswordEventArgs(username, newPassword, true);
                OnValidatingPassword(args);
                if (args.Cancel)
                {
                    if (args.FailureInformation != null)
                    {
                        throw args.FailureInformation;
                    }
                    else
                    {
                        throw new Exception("Change password canceled due to new password validation failure.");
                    }
                }
                .....
                // Database connectivity and code execution to change password.
            }
            ....
        }

    MY PROBLEM: Now what I need is to execute the database part within all these overridden methods from the same database point as described at the top. That is, I have to pass the instance of DBConnect existing in the session to this class, so that I can access its methods. Could anyone provide a solution for this? There might be some better techniques I am not aware of; the approach I am using might be wrong. Your suggestions are always welcome. Thanks for sharing your valuable time.

    Read the article

  • How to globalize ASP.NET MVC views (decimal separators in particular)?

    - by Pawel Krakowiak
    I'm working with the NerdDinner sample application and arrived at the section which deals with the Virtual Earth map. The application stores some values for the longitude and latitude. Unfortunately, on my system floating point numbers are stored with a comma as the decimal separator, not a dot like in the US. So if I have a latitude of 47.64, it's retrieved and displayed as 47,64. Because that value is passed in a function call to the Virtual Earth API, it fails at that point (e.g. the JavaScript API expects 47.64, -122.13, but gets 47,64, -122,13). I need to make sure that the application always uses dots.

    In a WebForms app I would have a common class which overrides the System.Web.UI.Page.InitializeCulture() method, and I would inherit my pages from that class. I am not sure how to do the same with MVC. Do I need a customized ViewPage or something? Is there an easy way to solve this? Examples?

    Read the article

  • Sizing issues while adding a .Net UserControl to a TabPage

    - by TJ_Fischer
    I have a complex Windows Forms GUI program that has a lot of automated control generation and manipulation. One thing that I need to be able to do is add a custom UserControl to a newly instantiated TabPage. However, when my code does this I get automatic resizing events that cause the formatting to get ugly. Without detailing all of the different containers that could possibly be involved, the basic issue is this.

    At a certain point in the code I create a new tab page:

        TabPage tempTabPage = new TabPage("A New Tab Page");

    Then I set it to a certain size that I want it to maintain:

        tempTabPage.Width = 1008;
        tempTabPage.Height = 621;

    Then I add it to a TabControl:

        tabControl.TabPages.Add(tempTabPage);

    Then I create a user control that I want to appear in the newly added TabPage:

        CustomView customView = new CustomView("A new custom control");

    Here is where the problem comes in. At this point both tempTabPage and customView are the same size, with no padding or margin, and they are the size I want them to be. I now try to add this new custom UserControl to the tab page like this:

        tempTabPage.Controls.Add(customView);

    When making this call, customView and its child controls get resized to be larger, so parts of customView are hidden. Can anyone give me any direction on what to look for or what could be causing this kind of issue? Thanks ahead of time.

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so it is possible to submit jobs from remote machines. Currently I have 5 machines: 4 are basically the Hadoop cluster with the NameNodes, YARN etc., and one machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking process on the setup, and if anyone can chime in on the points I am not clear with, that would be great.

    I was thinking about what the best setup for a small cluster would be. So I decided to expose only one manager machine and probably use that to submit all the jobs through it. The other machines will see each other etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about doing it, so if anyone could point me in the right direction that would be great. Also, another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on the machine in order to use the normal hadoop commands, and to write/submit jobs, say, from Eclipse or something similar?

    So to sum it up, my questions are:
    1. Is this an OK setup for a small test cluster?
    2. How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks, I would greatly appreciate any advice or help on this.

    Read the article

  • Python: Does one of these examples waste more memory?

    - by orokusaki
    In a Django view function which uses manual transaction committing, I have:

        context = RequestContext(request, data)
        transaction.commit()
        return render_to_response('basic.html', data, context)  # Returns a Django ``HttpResponse`` object which is similar to a dictionary.

    I think it is a better idea to do this:

        context = RequestContext(request, data)
        response = render_to_response('basic.html', data, context)
        transaction.commit()
        return response

    If the page isn't rendered correctly in the second version, the transaction is rolled back. This seems like the logical way of doing it, albeit there won't likely be many exceptions at that point in the function when the application is in production. But... I fear that this might cost more, and this pattern will be replete through a number of functions since the application is heavy with custom transaction handling, so now is the time to figure it out.

    If the HttpResponse instance is in memory already (at the point of render_to_response()), then what does another reference cost? When the function ends, doesn't the reference (the response variable) go away, so that when Django is done converting the HttpResponse into a string for output, Python can immediately garbage collect it? Is there any reason I would want to use the first version (other than "It's 1 less line of code.")?

    Read the article

  • Move camera to fit 3D scene

    - by Burre
    Hi there. I'm looking for an algorithm to fit a bounding box inside a viewport (in my case a DirectX scene). I know about algorithms for centering a bounding sphere in an orthographic camera, but I need the same for a bounding box and a perspective camera. I have most of the data:

    - I have the up-vector for the camera
    - I have the center point of the bounding box
    - I have the look-at vector (direction and distance) from the camera point to the box center
    - I have projected the points on a plane perpendicular to the camera and retrieved the coefficients describing how much the max/min X and Y coords are within or outside the viewing plane

    Problems I have:

    - The center of the bounding box isn't necessarily in the center of the viewport (that is, its bounding rectangle after projection).
    - Since the field of view "skews" the projection (see http://en.wikipedia.org/wiki/File:Perspective-foreshortening.svg), I cannot simply use the coefficients as a scale factor to move the camera, because it will overshoot/undershoot the desired camera position.

    How do I find the camera position so that it fills the viewport as close to pixel-perfect as possible (the exception being if the aspect ratio is far from 1.0; then it only needs to fill one of the screen axes)?

    I've tried some other things:

    - Using a bounding sphere and tangent to find a scale factor to move the camera. This doesn't work well because it doesn't take the perspective projection into account, and secondly spheres are bad bounding volumes for my use because I have a lot of flat and long geometries.
    - Iterating calls to the function to get a smaller and smaller error in the camera position. This has worked somewhat, but I can sometimes run into weird edge cases where the camera position overshoots too much and the error factor increases. Also, when doing this I didn't recenter the model based on the position of the bounding rectangle; I couldn't find a solid, robust way to do that reliably.

    Help please!
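    One way to avoid the iterate-until-it-fits loop is to work per frustum plane: keep the camera on the line through the box centre along the view direction, and for each of the 8 box corners compute how far back the eye must sit so that corner stays inside the horizontal and vertical half-angles; the largest such distance fits everything. A hedged sketch, where Vec3, dot() and the camera basis vectors are assumptions standing in for whatever math types the engine already has:

        #include <algorithm>
        #include <cmath>

        struct Vec3 { float x, y, z; };
        static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // right/up/forward: unit camera axes, forward pointing from the camera toward the box.
        // halfFovX/halfFovY: horizontal and vertical half-angles of the frustum, in radians.
        float DistanceToFitBox(const Vec3 corners[8], const Vec3& center,
                               const Vec3& right, const Vec3& up, const Vec3& forward,
                               float halfFovX, float halfFovY)
        {
            float d = 0.0f;
            for (int i = 0; i < 8; ++i) {
                Vec3 rel = { corners[i].x - center.x, corners[i].y - center.y, corners[i].z - center.z };
                float depth = dot(rel, forward);   // how far this corner sits in front of the box centre
                // Eye distance needed so this corner stays inside each pair of frustum planes:
                d = std::max(d, std::fabs(dot(rel, right)) / std::tan(halfFovX) - depth);
                d = std::max(d, std::fabs(dot(rel, up))    / std::tan(halfFovY) - depth);
            }
            return d;   // camera position = center - forward * d
        }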

    Read the article

  • Finding the width of a directed acyclic graph... with only the ability to find parents

    - by Platinum Azure
    Hi guys, I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list. The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table.

    That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node.

    I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed. Thanks!

    EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.
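    If "width" here means the largest number of nodes that could run at the same depth (the usual reading for a parallel scheduler; the graph-theoretic width, i.e. the largest antichain, is a different and harder problem), it can be computed with nothing but the parent lookup plus one memoized depth value per node. A hedged sketch, written in C++ for brevity even though the project itself is in C, with Node and its parents list standing in for the real structures reached through the file hash table:

        #include <algorithm>
        #include <unordered_map>
        #include <vector>

        struct Node {
            std::vector<Node*> parents;   // in the real code: found via the source-file -> producer table
        };

        // depth = 1 + deepest parent; memoized so each node is visited once.
        int depth(Node* n, std::unordered_map<Node*, int>& memo) {
            auto it = memo.find(n);
            if (it != memo.end()) return it->second;
            int d = 0;
            for (Node* p : n->parents) d = std::max(d, depth(p, memo) + 1);
            return memo[n] = d;
        }

        int width(const std::vector<Node*>& allNodes) {
            std::unordered_map<Node*, int> memo;
            std::unordered_map<int, int> perLevel;   // depth -> node count
            int widest = 0;
            for (Node* n : allNodes) widest = std::max(widest, ++perLevel[depth(n, memo)]);
            return widest;
        }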

    Read the article

  • Android SDK not recognizing debug-able device.

    - by kal.zekdor
    I'm new to Android development and am attempting to run a test application on my actual device. I followed the instructions at http://developer.android.com/guide/developing/device.html (and related links), but the Android Debug Bridge (adb) doesn't recognize my connected device. Some quick background info: I'm running WinXP, developing with Eclipse, with a Motorola Droid running Android 2.1 as my physical device.

    An overview of the steps I've taken:
    1. Installed the Android SDK, downloading all necessary packages.
    2. Enabled USB Debugging on my device.
    3. Connected the device via USB, installing the driver from the SDK folder.

    I'll stop here (though I continued to set up my application to be debuggable in Eclipse), because at this point I noticed a problem. Running "sdk\tools\adb devices" at this point (at least, by my understanding) should list my device as connected. However, running this yields only:

        List of devices attached

    My device recognizes that it's connected to a computer in debug mode, and my computer recognizes the device. However, I can't seem to get the SDK to recognize it. I'll leave out the steps I used to set up Eclipse for debugging on a device, as it doesn't seem relevant to the problem; I'll include them if requested. If anyone has any ideas, I'd greatly appreciate some assistance. Thanks in advance for your time.

    Read the article

  • What algorithm can I use to determine points within a semi-circle?

    - by khayman218
    I have a list of two-dimensional points and I want to determine which of them fall within a semi-circle. Originally, the target shape was a rectangle aligned with the x and y axes, so the current algorithm sorts the pairs by their X coord and binary searches to the first one that could fall within the rectangle. Then it iterates over each point sequentially, and stops when it hits one that is beyond both the X and Y upper bounds of the target rectangle.

    This does not work for a semi-circle, as you cannot determine effective upper/lower x and y bounds for it; the semi-circle can have any orientation. Worst case, I will find the least value of a dimension (say x) in the semi-circle, binary search to the first point which is beyond it, and then sequentially test the points until I get beyond the upper bound of that dimension, basically testing an entire band's worth of points on the grid. The problem is that this will end up checking many points which are not within the bounds.
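    The band scan is fine as a coarse filter, but the per-point membership test itself is cheap: a point is inside the semi-circle iff it is within the radius of the centre and on the curved side of the diameter. A minimal sketch, assuming the semi-circle is described by its centre, radius, and a unit vector pointing from the centre toward the curved half:

        #include <cmath>

        struct Pt { double x, y; };

        bool inSemiCircle(const Pt& p, const Pt& centre, double radius, const Pt& dir /* unit vector */) {
            double vx = p.x - centre.x, vy = p.y - centre.y;
            if (vx * vx + vy * vy > radius * radius) return false;  // outside the full circle
            return vx * dir.x + vy * dir.y >= 0.0;                  // on the curved half of the diameter
        }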

    Read the article

  • Big-O for calculating all routes from GPS data

    - by HH
    A non-critical GPS module uses lists because it needs to be modifiable: new routes added, new distances calculated, continuous comparisons. Well, so I thought, but my team member wrote something I find very hard to get into. His pseudocode:

        int k = 0;
        a[][] <- create mapModuleNearbyDotList -array   // CPU O(n)
        for (j = 1 to n)                                // O(n log(m))
            for (i = 1 to n)
                for (k = 1 to n)
                    if (dot is nearby)
                        adj[i][j] = min(adj[i][j], adj[i][k] + adj[k][j]);

    His idea is transforming the lists into tables. His worst-case time complexity is O(n^3), where n is the number of elements in his so-called table. Exception to the last point with a finite structure: O(m log(n)), where n is the number of vertices and m is the number of neighbour vertices.

    Questions about his ideas:
    - Why waste resources transforming constantly-modified lists into a table? Speed is the only point where I to some extent agree, but I cannot understand the same upper limit n for each for-loop (perhaps he supposed it to be circular).
    - Why does the code take O(m log(n)) to proceed in time as a finite structure? The term "finite" may be wrong; "explicit", perhaps?
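    For reference, the inner relaxation in that pseudocode is the Floyd-Warshall all-pairs shortest-path update; in its canonical form the intermediate vertex k is the outermost loop, which is where the O(n^3) bound comes from. A minimal sketch:

        #include <algorithm>
        #include <vector>

        const double INF = 1e18;

        // adj[i][j] holds the direct distance between dots i and j (INF if there is no edge).
        void floydWarshall(std::vector<std::vector<double> >& adj) {
            const std::size_t n = adj.size();
            for (std::size_t k = 0; k < n; ++k)          // intermediate vertex: outermost loop
                for (std::size_t i = 0; i < n; ++i)
                    for (std::size_t j = 0; j < n; ++j)
                        adj[i][j] = std::min(adj[i][j], adj[i][k] + adj[k][j]);
        }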

    Read the article

  • C# to unmanaged dll data structures interop

    - by Shane Powell
    I have an unmanaged DLL that exposes a function that takes a pointer to a data structure. I have C# code that creates the data structure and calls the DLL function without any problem; at the point of the call into the DLL the pointer is correct. My problem is that the DLL keeps the pointer to the structure and uses it at a later point in time. When the DLL comes to use the pointer, it has become invalid (I assume the .NET runtime has moved the memory somewhere else).

    What are the possible solutions to this problem? The ones I can think of are:
    1. Fix the memory location of the data structure somehow. I don't know how you would do this in C#, or even if you can.
    2. Allocate memory manually so that I have control over it, e.g. using Marshal.AllocHGlobal.
    3. Change the DLL function contract to copy the structure data (this is what I'm currently doing as a short-term change, but I don't want to change the DLL at all if I can help it, as it's not my code to begin with).

    Are there any other better solutions?

    Read the article

  • How do you use stl's functions like for_each?

    - by thomas-gies
    I started using STL containers because they came in very handy when I needed the functionality of a list, set and map and had nothing else available in my programming environment. I did not care much about the ideas behind them. The STL documentation was only interesting up to the point where it came to functions, etc.; then I skipped reading and just used the containers.

    But yesterday, still being relaxed from my holidays, I just gave it a try and wanted to go a bit more the STL way. So I used the transform function (can I have a little bit of applause for me, thank you). From an academic point of view it really looked interesting, and it worked. But the thing that bothers me is that if you intensify the use of those functions, you need tens of thousands of helper classes for mostly everything you want to do in your code. The whole logic of the program is sliced into tiny pieces. This slicing is not the result of good coding habits; it's just a technical need, something that probably makes my life harder, not easier. And I learned the hard way that you should always choose the simplest approach that solves the problem at hand. I can't see what, for example, the for_each function is doing for me that justifies the use of a helper class over several simple lines of code that sit inside a normal loop, so that everybody can see what is going on.

    I would like to know what you think about my concerns. Did you see it like I do when you started working this way, and did you change your mind when you got used to it? Are there benefits that I overlooked? Or do you just ignore this stuff as I did (and will probably go on doing)?

    Thanks.

    PS: I know that there is a real for_each loop in Boost. But I ignore it here since it is just a convenient way to write my usual loops with iterators, I guess.
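    For context, a small sketch of the trade-off being described: the same loop written as a plain iterator loop, with the kind of helper functor for_each requires, and (on C++11 or later, which postdates this question) with a lambda, which removes most of the helper-class ceremony:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        struct Print {                       // the "helper class" the question objects to
            void operator()(int v) const { std::cout << v << '\n'; }
        };

        int main() {
            std::vector<int> xs;
            xs.push_back(1); xs.push_back(2); xs.push_back(3);

            for (std::vector<int>::const_iterator it = xs.begin(); it != xs.end(); ++it)
                std::cout << *it << '\n';                          // plain loop

            std::for_each(xs.begin(), xs.end(), Print());          // functor helper

            std::for_each(xs.begin(), xs.end(),
                          [](int v) { std::cout << v << '\n'; });  // C++11 lambda
            return 0;
        }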

    Read the article

  • Are there any downsides in using C++ for network daemons?

    - by badcat
    Hey guys! I've been writing a number of network daemons in different languages over the past years, and now I'm about to start a new project which requires a new custom implementation of a proprietary network protocol. The said protocol is pretty simple: some basic JSON-formatted messages which are transmitted with some basic frame wrapping so that clients know a message has arrived completely and is ready to be parsed. The daemon will need to handle a number of connections (about 200 at the same time), do some management of them, and pass messages along, like in a chat room.

    In the past I've been using mostly C++ to write my daemons, often with the Qt4 framework (the network parts, not the GUI parts!), because that's what I also used for the rest of the projects, it was simple to do, and it was very portable. This usually worked just fine, and I didn't have much trouble.

    Being a Linux administrator for a good while now, I have noticed that most of the network daemons in the wild are written in plain C (of course some are written in other languages too, but I get the feeling that 80% of the daemons are written in plain C). Now I wonder why that is. Is this due to a pure historic UNIX background (like KISS), or for plain portability, or reduction of bloat? What are the reasons not to use C++ or any "higher level" languages for things like daemons? Thanks in advance!

    Update 1: For me, using C++ is usually more convenient because of the fact that I have objects with getter and setter methods and such. Plain C's "context" objects can be a real pain at some point, especially when you are used to object-oriented programming. Yes, I'm aware that C++ is a superset of C, and that C code is basically C++. But that's not the point. ;)

    Read the article
