Search Results

Search found 7513 results on 301 pages for 'actual'.


  • maxItemsInObjectGraph limit required to be changed for server and client

    - by Michael Freidgeim
    We have a WCF service that returns a huge amount of XML data. It worked fine in testing, but in production it failed with the error "Maximum number of items that can be serialized or deserialized in an object graph is '65536'. Change the object graph or increase the MaxItemsInObjectGraph quota."

    The MSDN article about the dataContractSerializer XML configuration element correctly describes the default of the maxItemsInObjectGraph attribute as 65536, but the documentation for the DataContractSerializer.MaxItemsInObjectGraph and DataContractJsonSerializer.MaxItemsInObjectGraph properties talks about Int32.MaxValue, which causes confusion, in particular because Google shows the property articles before the configuration articles.

    When we changed the value in the WCF service configuration, it didn't help, because the same change must ALSO be made on the client. There are similar posts:

    http://stackoverflow.com/questions/6298209/how-to-fix-maxitemsinobjectgraph-error/6298356#6298356 - "You need to set the MaxItemsInObjectGraph on the dataContractSerializer using a behavior on both the client and service."
    http://devlicio.us/blogs/derik_whittaker/archive/2010/05/04/setting-maxitemsinobjectgraph-for-wcf-there-has-to-be-a-better-way.aspx
    http://stackoverflow.com/questions/2325321/maxitemsinobjectgraph-ignored/4455209#4455209 - "I had forgot to place this setting in my client app.config file."
    http://stackoverflow.com/questions/9191167/maximum-number-of-items-that-can-be-serialized-or-deserialized-in-an-object-gra
    http://stackoverflow.com/questions/5867304/datacontractjsonserializer-and-maxitemsinobjectgraph?rq=1 - It seems that DataContractJsonSerializer.MaxItemsInObjectGraph actually defaults to 65536: there is no configuration element for the JSON serializer, yet it still complains about the limit.

    I believe that MS should clarify the property documentation regarding the default limit, and make the error messages specific enough to distinguish server-side from client-side failures.

    Note that as a workaround it's possible to use the commonBehaviors section, which can be defined only in machine.config:

      <commonBehaviors>
        <behaviors>
          <endpointBehaviors>
            <dataContractSerializer maxItemsInObjectGraph="..." />
          </endpointBehaviors>
        </behaviors>
      </commonBehaviors>
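
    For reference, a minimal sketch of the per-application fix (the behavior name "largeGraph" is a placeholder): the same endpoint behavior has to appear in both the service's web.config and the client's app.config, and the endpoint on each side must reference it via behaviorConfiguration.

      <system.serviceModel>
        <behaviors>
          <endpointBehaviors>
            <behavior name="largeGraph">
              <dataContractSerializer maxItemsInObjectGraph="2147483646" />
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <!-- on both sides: <endpoint ... behaviorConfiguration="largeGraph" /> -->
      </system.serviceModel>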

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application that will be a combination of document and inventory management in VB.net, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to be searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside). My concern is finding a database solution that will not become unstable due to size restrictions, record limitations or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy large-volume usage as well. Another consideration is the interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.
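
    For illustration only, the table sketched above might look like this in SQL Server syntax (the column types and the DocId key are assumptions, not part of the original question):

      -- VARBINARY(MAX) holds the raw document bytes described above.
      CREATE TABLE Documents (
          DocId      INT IDENTITY(1,1) PRIMARY KEY,
          DOC_NAME   NVARCHAR(255)  NOT NULL,
          DOC_DATE   DATETIME       NOT NULL,
          NOTES      NVARCHAR(MAX)  NULL,
          DOC_BINARY VARBINARY(MAX) NOT NULL
      );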

    Read the article

  • Why do VMs need to be "stack machines" or "register machines" etc.?

    - by Prog
    (This is an extremely newbie-ish question.) I've been studying a little about virtual machines. It turns out a lot of them are designed very similarly to physical or theoretical computers. I read that the JVM, for example, is a 'stack machine'. What that means (and correct me if I'm wrong) is that it stores all of its 'temporary memory' on a stack, and performs operations on this stack for all of its opcodes. For example, the source code 2 + 3 will be translated to bytecode similar to:

      push 2
      push 3
      add

    My question is this: JVMs are probably written in C/C++ and such. If so, why doesn't the JVM execute the following C code: 2 + 3..? I mean, why does it need a stack, or in other VMs 'registers' - like in a physical computer? The underlying physical CPU takes care of all of this. Why don't VM writers simply execute the interpreted bytecode with 'usual' instructions in the language the VM is programmed with? Why do VMs need to emulate hardware, when the actual hardware already does this for us? Again, very newbie-ish questions. Thanks for your help.
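
    To make the question concrete, here is a minimal sketch in C of what "executing bytecode" looks like (the opcodes are made up for illustration). The point is that 2 and 3 only exist as data at runtime, so the interpreter needs somewhere of its own to keep intermediate values - that somewhere is the operand stack:

      #include <stdio.h>

      enum { OP_PUSH, OP_ADD, OP_HALT };   /* hypothetical opcodes */

      static int run(const int *code) {
          int stack[64], sp = 0, pc = 0;   /* the VM's own operand stack */
          for (;;) {
              switch (code[pc++]) {
              case OP_PUSH: stack[sp++] = code[pc++]; break;
              case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
              case OP_HALT: return stack[sp - 1];
              }
          }
      }

      int main(void) {
          int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
          printf("%d\n", run(program));    /* prints 5 */
          return 0;
      }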

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste?

    EDIT: I've never used the question to thank for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following:

    - a honeypot
    - a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header (sketched below)
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs

    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
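
    A rough sketch of the logging script mentioned above, assuming PHP and a custom 404 handler (the log path, the marker list, and the address are placeholders):

      <?php
      // Return a genuine 404, but log probes for known attack paths.
      header("HTTP/1.1 404 Not Found");
      $uri    = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '';
      $ip     = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '-';
      $agent  = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
      $probes = array('wp-admin', 'wp-login', 'viewtopic.php', 'cpanel');
      foreach ($probes as $needle) {
          if (stripos($uri, $needle) !== false) {
              $line = date('c') . ' ' . $ip . ' ' . $agent . ' ' . $uri . "\n";
              file_put_contents('/var/log/honeypot.log', $line, FILE_APPEND);
              // mail('you@example.com', '404 probe', $line);
              break;
          }
      }
      echo 'Not Found';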

    Read the article

  • ADF - Now with Robots!

    - by Duncan Mills
    I mentioned this briefly in a tweet the other day, just before the full rush of OOW really kicked off, so I thought it was worth re-visiting. Check out this video, and then read on: So why so interesting? Well - you probably guessed from the title, ADF is involved. Indeed this is about as far from the traditional ADF data entry application as you can get. Instead of a database at the back-end there's basically a robot. That's right, this remarkable tape drive is controlled through ADF, using all your usual friends of ADF Faces, Controller and Binding (but no ADF BC, for obvious reasons). ADF is used both on the touch screen you see on the front of the device in the video, and also for the remote management console, which provides a visual representation of the slots and drives. The latter uses ADF's Active Data Framework to provide a real-time view of what's going on in the rack. What's even more interesting (for the techno-geeks) is the fact that all of this is running out of flash storage on a ridiculously small form factor with a tiny processor - I probably shouldn't reveal the actual specs, but take my word for it, don't complain about the capabilities of your laptop ever again! This is a project that I've been personally involved in, and I'm pumped to see such a good result. I have to say, those hardware guys are great to work with (and have way better toys on their desks than we do). More info on the SL150 (should you feel the urge to own one) is here.

    Read the article

  • Dell PowerEdge 840 2.4GHz 64-bit quad core

    - by newb64bit
    I am having an issue where I have changed the boot order to CD-ROM and turned off HD boot altogether, and still my system is unable to detect Ubuntu and claims "no boot device found". Some additional information: when this same CD is inserted and the Dell is booted into Win 2003 Server (which is what is installed on this machine), it detects the CD drive but not the CD at all (it keeps asking me to insert a disc). I have also created a bootable flash drive using LinuxLive USB Creator, and when this is selected in the boot order I am again told no boot device was detected. I was speaking to Dell and they suggested that perhaps there are no drivers on the actual Ubuntu installation media for the hardware on this Dell, hence the failure to detect the Ubuntu CD. Now, I don't know too much about computers, but this last bit confused me a bit. If the system detects the hardware (when it is booting it sees the CD-ROM, and in the BIOS it sees when the flash drive is connected), then shouldn't it be able to read what is on those drives? However, if there is some firmware or software install that needs to happen, could someone please tell me where to find the correct drivers for Ubuntu and the Dell PowerEdge to work together? Should I be installing the desktop version or the server edition? Also, 32-bit or 64-bit? Thank you in advance.

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files, and the .htaccess file has had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password on my FTP client (which is CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which have found no malware on my computer that might have been storing my FTP passwords. I used Sucuri SiteCheck to check my website, which says my website is clean of malware. That is bizarre, because I just attempted to click one of the links on the site a minute ago and it redirected me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (which will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:

    Warning: Something's Not Right Here! oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site. Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else? We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.

    I'm wondering whether it is possible that the Google Chrome browser I am using has itself been hacked? Does anyone else get redirected when clicking links on the website?
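
    One small aid while hunting re-infections (a generic shell sketch; the marker string is the one quoted above, the path is a placeholder):

      # List every file under the web root that still contains the injected marker.
      grep -rl 'c3284d' /path/to/webroot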

    Read the article

  • A quick tip for those working with the Windows Phone 7 AD SDK.

    - by mbcrump
    One thing that I've noticed in several apps in the Windows Phone 7 marketplace is the ad being chopped off on the right-hand side. I decided that my next Windows Phone 7 app will be ad-supported, so why not sign up for the Advertising SDK and investigate this issue. *Note: if you want to see this in an actual app, then download the free app called "Road Rage". So here is an example of what I am talking about: you will notice that the right-hand side of the ad is chopped off when using the default ad banner. You can see the border on the left-hand side clearly. So, what exactly is going on? Let's take a look at this in the designer. From this image we can see it more clearly: the margin of the grid that the ad is contained in needs to be removed. By default, the ContentPanel in a Windows Phone page has a margin already set on it. See below for an example of this:

      <!--ContentPanel - place additional content here-->
      <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
      </Grid>

    If you simply remove that margin then your ad will display properly, as shown below. It's strange that I've seen this in multiple WP7 applications in the marketplace. If you are trying to make money off ads, you would probably want to make sure the full ad is displayed. I am hoping this short post helped someone.
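
    For completeness (the screenshots did not survive this excerpt), the corrected panel is just the same Grid without the margin - a one-line sketch:

      <!--ContentPanel - margin removed so the ad banner is not clipped-->
      <Grid x:Name="ContentPanel" Grid.Row="1" Margin="0">
      </Grid>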

    Read the article

  • Classes as a compilation unit

    - by Yannbane
    If "compilation unit" is unclear, please refer to this. However, what I mean by it will be clear from the context. Edit: my language allows for multiple inheritance, unlike Java. I've started designing+developing my own programming language for educational, recreational, and potentially useful purposes. At first, I've decided to base it off Java. This implied that I would have all the code be written inside classes, and that code compiles to classes, which are loaded by the VM. However, I've excluded features such as interfaces and abstract classes, because I found no need for them. They seemed to be enforcing a paradigm, and I'd like my language not to do that. I wanted to keep the classes as the compilation unit though, because it seemed convenient to implement, familiar, and I just liked the idea. Then I noticed that I'm basically left with a glorified module system, where classes could be used either as "namespaces", providing constants and functions using the static directive, or as templates for objects that need to be instantiated ("actual" purpose of classes in other languages). Now I'm left wondering: what are the benefits of having classes as compilation units? (Also, any general commentary on my design would be much appreciated.)

    Read the article

  • In a specification, should I describe what a product does (ideally) or what it should/must do?

    - by Arlaud Pierre
    I'm writing a German specification (I'm not German). Differences may appear in this process across cultures, especially in the terminology, but usually here's the idea:

    - The client writes his needs and wishes in a document, called a scope statement or requirements document.
    - The supplier tries to understand the actual need of the client (which might be different from what was written, from what the client meant to say, and from what the client thinks he needs, etc.).
    - The supplier writes a specification for the product, which should fill the client's need. The specification needs to be precise enough for the product to be made (ambiguity problems occur).
    - The client and the supplier can check whether they have understood each other, and discuss details of the product.
    - The client agrees with the specification (or at least its current iteration) and the supplier is ready to start the work.

    (You may of course disagree with this process, but that is irrelevant to my problem.) I'm now somewhere around the last two steps, and I've been criticized because I wrote what the product must do, and not what it will ideally do. Usually along the lines of:

    The product must be able to perform task A

    And I was expected to write:

    The product performs task A

    This is a simple word play, but I feel that saying what the product does, while the product isn't even on the way to being made yet, is wrong. I would tend to consider a specification as a contract of what the product is expected to do (what it must do and how it should do it), and not what it does. Said differently, I feel this is the specification and not the manual of the end product. Should I say what the product must do or what it does?

    Read the article

  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function determines whether to chart a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery:

      setDefaultChartType: function (graphObject) {
          var prop1 = graphObject.properties.key;
          var numCols = 0;
          $.each(graphObject.columns, function (colIndex, column) {
              numCols++;
          });
          if (numCols > 6 || (prop1 > 1 && graphObject.data.length == 1)) {
              graphObject.setChartType("line");
          } else if (numCols <= 6 && prop1 == 1) {
              graphObject.setChartType("bar");
          } else if (numCols <= 6 && prop1 > 1) {
              graphObject.setChartType("pie");
          }
      }

    My question is: should I use mock data that is procured from the actual database? Or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database, but on the other hand, keeping the test data up to date would require a lot more effort, and I'm not sure that is necessary.
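
    To show what the fabricated route might look like, here is a sketch (run where jQuery is loaded; it assumes setDefaultChartType is callable as a plain function for testing, and makeGraph plus the assertions are made up): one small hand-built object per branch of the function.

      // Build a minimal graphObject with the given shape.
      function makeGraph(numCols, key, dataLength) {
          var columns = {};
          for (var i = 0; i < numCols; i++) { columns['col' + i] = {}; }
          var chartType = null;
          return {
              properties: { key: key },
              columns: columns,
              data: new Array(dataLength),
              setChartType: function (t) { chartType = t; },
              getChartType: function () { return chartType; }
          };
      }

      var g = makeGraph(7, 1, 1);   // more than 6 columns
      setDefaultChartType(g);
      console.assert(g.getChartType() === "line");

      g = makeGraph(3, 1, 1);       // <= 6 columns, key == 1
      setDefaultChartType(g);
      console.assert(g.getChartType() === "bar");

      g = makeGraph(3, 2, 2);       // <= 6 columns, key > 1
      setDefaultChartType(g);
      console.assert(g.getChartType() === "pie");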

    Read the article

  • Customized Computer Science Degree - What other field would mesh well with computer science?

    - by sailtheworld
    So here's my situation: I have seven years of experience with web development. I can do PHP, MySQL, OOP, all of that stuff. I would like to make the argument that I have enough technical experience to go out into the real world and get a well-paying, full-time job if I were to drop out right now (I've had a number of job offers recently, and I have already gotten a lot of actual job experience), but I would like to stay in school and get a degree, for reasons ranging from the social aspects to the fact that I just want to have a BS in one thing or another, as it seems to be important for a lot of jobs, even when the degree doesn't have anything to do with the job. With that said, it makes little sense for me to major in Computer Science, because that would be like studying everything I already know. I don't want to major in something COMPLETELY different, because that would be contrary to my career goals. I am considering trying to find some interdisciplinary, customized degree of sorts that allows me to combine my current skills with a new education. I'm thinking maybe business, or even psychology (interface design?). Could I get some ideas for what to major in, and tips on who I might talk to? Thanks!

    Read the article

  • How do 2D physics engines solve the problem of resolving collisions along tiled walls/floors in non-grid-based worlds?

    - by ssb
    I've been working on implementing my SAT algorithm which has been coming along well, but I've found that I'm at a wall when it comes to its actual use. There are plenty of questions regarding this issue on this site, but most of them either have no clear, good answer or have a solution based on checking grid positions. To restate the problem that I and many others are having, if you have a tiled surface, like a wall or a floor, consisting of several smaller component rectangles, and you traverse along them with another rectangle with force being applied into that structure, there are cases where the object gets caught on a false collision on an edge that faces the inside of the shape. I have spent a lot of time thinking about how I could possibly solve this without having to resort to a grid-based system, and I realized that physics engines do this properly. What I want to know is how they do this. What do physics engines do beyond basic SAT that allows this kind of proper collision resolution in complex environments? I've been looking through the source code to Box2D trying to find out how they do it but it's not quite as easy as looking at a Collision() method. I think I'm not good enough at physics to know what they're doing mathematically and not good enough at programming to know what they're doing programmatically. This is what I aim to fix.

    Read the article

  • Spritegroups and colorkeys

    - by Fristi
    I have a problem using sprite groups in pygame. In my situation I have two sprite groups, one for humans, one for "infected". A human is represented by a blue circle:

      image = pygame.Surface((32, 32))
      image.fill((255, 255, 255))
      pygame.draw.circle(image, (0, 0, 255), (16, 16), 16)
      image = image.convert()
      image.set_colorkey((255, 255, 255))

    An infected is represented by a red one (same code, different color). I update my sprite groups as follows:

      self.humans.clear(self.screen, self.bg)
      self.humans.update(time_passed)
      self.humans.draw(self.screen)
      self.infected.clear(self.screen, self.bg)
      self.infected.update(time_passed)
      self.infected.draw(self.screen)

    self.bg is defined as:

      self.bg = pygame.Surface((SCREEN_WIDTH, SCREEN_HEIGHT))
      self.bg.fill((255, 255, 255))
      self.bg.convert()

    This all works, except that when a red circle overlaps a blue one, you can see the white corners of the bounding box around the actual circle. Within a single sprite group it works, thanks to the set_colorkey call, and it does not happen with overlapping blue circles or overlapping red circles. I tried adding a colorkey to self.bg, but that did not work. Same for adding a colorkey to self.screen.
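
    One possible fix, sketched below (assuming the rest of the setup stays the same): give the sprite image a per-pixel alpha channel instead of a colorkey, so the pixels outside the circle are truly transparent no matter which group draws last. It may also help to clear both groups before drawing either one, so one group's clear() cannot stamp background rectangles over the other group's freshly drawn sprites.

      # Per-pixel alpha surface; run after pygame.display.set_mode().
      image = pygame.Surface((32, 32), pygame.SRCALPHA)
      pygame.draw.circle(image, (0, 0, 255), (16, 16), 16)
      image = image.convert_alpha()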

    Read the article

  • Is there a proven concept to website reverse certificate authentication?

    - by Tom
    We're looking at exposing some of our internal application data externally via a website. The actual details of the website aren't that interesting; it'll be built using ASP.NET/IIS etc., which might be relevant. Essentially, I'm looking for a mechanism to authenticate users viewing my website. This sounds trivial - a username/password is typically fine - but I want more. Now, I've read plenty about SSL/X.509 to realise that the CA determines that we're alright, and that the user can trust us. But I want to trust the user; I want the user to be rejected if they don't have the correct credentials. I've seen a system for online banking whereby the bank issues a certificate which gets installed on the user's computer (it was actually smartcard based). If the website can't discover/utilise the key pair then you are immediately rejected! This is brutal, but necessary. Is there a mechanism where I can do the following:

    - Generate a certificate for a user
    - Issue the certificate for them to install; it can be installed on 1 machine
    - If their certificate is not accessible, they are denied all access
    - A standard username/password scheme is then used after that
    - SSL employed using their certificate once they're "in"

    This really must already exist; please point me in the right direction! Thanks for your help :)
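
    What is described is essentially TLS client-certificate (mutual) authentication. As a sketch of the server side under ASP.NET (assuming IIS is configured to require client certificates for the application; the page class name is made up):

      using System;
      using System.Web;
      using System.Web.UI;

      public partial class SecurePage : Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // IIS has already done the TLS handshake; this just inspects the result.
              HttpClientCertificate cert = Request.ClientCertificate;
              if (!cert.IsPresent || !cert.IsValid)
              {
                  Response.StatusCode = 403;  // no usable certificate: reject outright
                  Response.End();
                  return;
              }
              // Only now fall through to the usual username/password step.
          }
      }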

    Read the article

  • Can I eventually consider myself a professional developer if I don't have a CS degree? [on hold]

    - by heltonbiker
    Question first, context later: if I am a dedicated, self-taught programmer, always seeking top-quality books AND READING THEM, while successfully applying all that new knowledge in my current work, could I call myself (and offer my work as) a PROFESSIONAL developer? How limiting (or how common) is that nowadays? I am afraid that, no matter how hard I study and practice, it could be too difficult to compete with "actual", college-graduated developers, and potential employers might have doubts about investing in an "undergraduated" person. Now, the context: my former profession is in the healthcare sector; then I studied mechanical engineering (quit in the middle); then I studied product design (master's degree); and I ended up working (very happily) at an engineering company that manufactures medical devices. For more than two years now my main activity at this company has been software development. The devices contain software, and we gave up hiring software development out (domain knowledge needed, too much communication cost). My current company sees a lot of value in what I do, but I cannot afford the risk of depending on this single company for the rest of my life, you get it. But a lot of job offers require some minimal formal education, usually a CS degree. The fact is that I am sure this is my target profession; I don't plan to go to another area, and it is a pleasure to dive into books that normal people would consider unreadable, but I'm 36 years old and can't see going back to college as a viable alternative.

    Read the article

  • A* PathFinding Not Consistent

    - by RedShft
    I just started trying to implement a basic A* algorithm in my 2D tile-based game. All of the nodes are tiles on the map, represented by a struct. I believe I understand A* on paper, as I've gone through some pseudocode, but I'm running into problems with the actual implementation. I've double- and triple-checked my node graph, and it is correct, so I believe the issue is with my algorithm. The issue is that, with the enemy standing still and the player moving around, the pathfinding function will write "No Path" an astounding number of times and only every so often write "Path Found", which seems inconsistent. This is the node struct, for reference:

      struct Node {
          bool walkable;        // Whether this node is blocked or open
          vect2 position;       // The tile's position on the map in pixels
          int xIndex, yIndex;   // The index values of the tile in the array
          Node*[4] connections; // Pointers to the nodes this node connects to
          Node* parent;
          int gScore;
          int hScore;
          int fScore;
      }

    Here is the rest: http://pastebin.com/cCHfqKTY

    This is my first attempt at A*, so any help would be greatly appreciated.
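
    For comparison, here is a small known-good A* loop on a 4-connected grid (a sketch in Python, not the poster's code). One classic cause of intermittent "No Path" results is stale per-node state (parent and scores) surviving from the previous search, which is why this version keeps all search state in local containers instead of on the nodes:

      import heapq

      def astar(walkable, start, goal):
          def h(a, b):  # Manhattan distance heuristic
              return abs(a[0] - b[0]) + abs(a[1] - b[1])

          open_heap = [(h(start, goal), start)]
          g = {start: 0}
          parent = {}
          closed = set()
          while open_heap:
              _, current = heapq.heappop(open_heap)
              if current == goal:
                  path = [current]
                  while current in parent:
                      current = parent[current]
                      path.append(current)
                  return path[::-1]
              if current in closed:
                  continue
              closed.add(current)
              x, y = current
              for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                  if nxt not in walkable or nxt in closed:
                      continue
                  tentative = g[current] + 1
                  if tentative < g.get(nxt, float("inf")):
                      g[nxt] = tentative
                      parent[nxt] = current
                      heapq.heappush(open_heap, (tentative + h(nxt, goal), nxt))
          return None  # genuinely no path

      # Smoke test: a fully open 3x3 grid must always yield a path.
      tiles = {(x, y) for x in range(3) for y in range(3)}
      assert astar(tiles, (0, 0), (2, 2)) is not None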

    Read the article

  • 1 ASPX Page, Multiple Master Pages

    - by csmith18119
    So recently I had an ASPX page that could be visited by two different user types. User type A would use Master Page 1 and user type B would use Master Page 2. So I put together a proof of concept to see if it was possible to change the master page in code. I found a great article on the Microsoft ASP.net website:

    Specifying the Master Page Programmatically (C#) by Scott Mitchell

    So I created a MasterPage called Alternate.Master to act as a generic placeholder. I also created a Master1.Master and a Master2.Master. The ASPX page, Default.aspx, will use this MasterPage. It will also use the Page_PreInit event to programmatically set the MasterPage:

      protected void Page_PreInit(object sender, EventArgs e) {
          var useMasterPage = Request.QueryString["use"];
          if (useMasterPage == "1")
              MasterPageFile = "~/Master1.Master";
          else if (useMasterPage == "2")
              MasterPageFile = "~/Master2.Master";
      }

    In my Default.aspx page I have the following links in the markup:

      <p>
          <asp:HyperLink runat="server" ID="cmdMaster1" NavigateUrl="~/Default.aspx?use=1" Text="Use Master Page 1" />
      </p>
      <p>
          <asp:HyperLink runat="server" ID="cmdMaster2" NavigateUrl="~/Default.aspx?use=2" Text="Use Master Page 2" />
      </p>

    So the basic idea is that when a user clicks the HyperLink to use Master Page 1, the Default.aspx.cs code-behind will set the MasterPageFile property to use Master1.Master. The same goes for the link to use Master Page 2. It worked like a charm! To see the actual code, feel free to download a copy here: Project Name: Skyhook.MultipleMasterPagesWeb http://skyhookprojectviewer.codeplex.com

    Read the article

  • How to do integrated testing?

    - by Enthusiastic Programmer
    So I have been reading up on a lot of books about testing. But all the books I've read have the same flaw: they will all tell you the definitions of testing, but I have not found a single book that will guide you into integration testing (or pretty much anything higher than unit testing). Is integration testing that elusive, or am I reading the wrong books? I'm a hands-on person, so I would appreciate it if someone could help me with a simple program. Let's say you need to make some sort of calculation program that calculates something (doesn't matter what) and exports it to a *.txt file. Let's assume we use the Model View Controller pattern, with one class for the actual calculating, which you'll use in the model, and one for writing the text file. So:

    View -> Controller -> Model -> CalculationClass, FileClass

    So for unit testing: you'd test the CalculationClass; I'd personally focus most of my unit tests there, and spend less time unit testing the View/Controller/FileClass. I personally wouldn't see the use of unit testing those unless you want a really robust program. Integration testing: now this is where I run into a wall. What would I have to test to call it an integration test? I could stub the view and feed the controller data, which it would pass on to the model and so forth, and then check what the view gets back in the end. But... couldn't I just run the (in this case small) program and test it manually? Would this be considered an integration test too, or does it have to be automated? Also, can I check multiple items to see if they are correct? I cannot seem to find any book that offers a hands-on approach to methods of integration testing.
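
    As a sketch of what an automated integration test for exactly this example could look like (Python here for brevity; the Controller, CalculationClass and FileClass bodies are stand-ins so the sketch runs, since the real ones would live in the application): the test exercises the controller, model and file writer together, and asserts only on the observable end result, the contents of the .txt file.

      import os
      import tempfile
      import unittest

      class CalculationClass:
          def add(self, a, b):
              return a + b

      class FileClass:
          def write(self, path, text):
              with open(path, "w") as f:
                  f.write(text)

      class Controller:
          def __init__(self):
              self.calc = CalculationClass()
              self.out = FileClass()

          def export_sum(self, a, b, path):
              self.out.write(path, str(self.calc.add(a, b)))

      class ExportIntegrationTest(unittest.TestCase):
          def test_sum_is_exported_to_txt(self):
              path = os.path.join(tempfile.gettempdir(), "result.txt")
              Controller().export_sum(2, 3, path)
              with open(path) as f:
                  self.assertEqual(f.read(), "5")

      if __name__ == "__main__":
          unittest.main()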

    Read the article

  • How do I make cars on a one-dimensional track avoid collisions?

    - by user990827
    Using three.js, I use a simple spline to represent a road. Cars can only move forward on the spline, and a car should be able to slow down behind a slow-moving car. I know how to calculate the distance between two cars, but how do I calculate the proper speed in each game update? At the moment I simply do something like this:

      this.speed += (this.maxSpeed - this.speed) * 0.02; // linear interpolation to maxSpeed
      // the position on the spline (0.0 - 1.0)
      this.position += this.speed / this.road.spline.getLength();

    This works. But how do I implement the slow-down part?

      // transform from floats (0.0 - 1.0) into actual units
      var carInFrontPosition = carInFront.position * this.road.spline.getLength();
      var myPosition = this.position * this.road.spline.getLength();
      var distance = carInFrontPosition - myPosition;
      // WHAT TO DO HERE WITH THE DISTANCE?
      // HOW TO CALCULATE MY NEW SPEED?

    Obviously I have to somehow take the current speed of the cars into account in the calculation. Besides different maxSpeeds, I want each car to also have a different mass (causing it to accelerate slower/faster), but then the mass also has to be taken into account for braking (slowing down) so they don't crash into each other.
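
    One possible approach, continuing from the variables in the snippet above (a sketch, not the only model; safeGap, brakeGap, this.mass and carInFront.speed are assumed fields and tuning constants): pick a target speed from the gap to the car in front, then ease toward it the same way the existing code eases toward maxSpeed, with mass scaling the easing rate.

      var safeGap = 40;    // distance at which we must be fully stopped
      var brakeGap = 120;  // distance at which we start matching the leader

      var targetSpeed;
      if (distance < safeGap) {
          targetSpeed = 0; // too close: stop
      } else if (distance < brakeGap) {
          // blend from 0 up to the leader's speed as the gap opens
          targetSpeed = carInFront.speed * (distance - safeGap) / (brakeGap - safeGap);
      } else {
          targetSpeed = this.maxSpeed; // open road
      }

      // heavier cars respond more slowly, for acceleration and braking alike
      var response = 0.02 / this.mass;
      this.speed += (targetSpeed - this.speed) * response;
      this.position += this.speed / this.road.spline.getLength();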

    Read the article

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found. These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across platforms, so when this matters, one would probably use binary mode instead. As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence. Applications would have to filter that noise themselves. Older Mac versions used only CR in text mode as the line ending, so neither UNIX nor Windows would understand their files. If this matters, a portable application would probably implement the parsing by itself instead of using text mode. Implementing newline interpretation in the parser might also remove some of the overhead of using text mode, as buffers would need to be rewritten (and possibly resized) before returning to the application, which may be less efficient than doing it in the application instead. So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
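
    For concreteness, in C the distinction comes down to the mode string passed to fopen (a minimal sketch; the file name is a placeholder):

      #include <stdio.h>

      int main(void) {
          /* Text mode: on Windows, CR LF in the file arrives here as '\n';
             on POSIX systems "r" and "rb" behave identically. */
          FILE *t = fopen("example.txt", "r");
          /* Binary mode: bytes pass through untranslated on every platform. */
          FILE *b = fopen("example.txt", "rb");
          if (t) fclose(t);
          if (b) fclose(b);
          return 0;
      }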

    Read the article

  • What is the best way to manage large 3d worlds (i.e minecraft style)?

    - by SomeXnaChump
    After playing Minecraft I was marvelling a bit at its large worlds, while at the same time finding it extremely slow to navigate, even with a quad core and a meaty graphics card. Now I assume it's fairly slow because: A) it's written in Java, and as most of the actual spatial partitioning and other memory management activities happen in there, it would be slower than a native C++ version; B) the world is not partitioned very well. I could be wrong on both assumptions. However, it got me thinking about the best way to manage large worlds. As it is a true 3D world, where a block can exist in any part of the world, it is basically a big 3D array [x][y][z], where each block in the world has a type (i.e. BlockType.Empty = 0, BlockType.Dirt = 1, etc). Now I am assuming that to make this sort of world performant you would need to: a) use a tree of some variety (oct/kd/bsp) to split all the cubes out; it seems like an oct/kd tree would be the better option, as you can partition on a per-cube level rather than a per-triangle level; b) use some algorithm to work out whether the blocks within the scene can currently be seen, as blocks closer to the user could occlude the blocks behind them, making it pointless to render them; c) keep the block objects themselves lightweight, so it is quick to add and remove them from the trees. I guess there is no right answer to this, but I would be interested to see people's opinions on the subject.
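
    To illustrate point (c), a sketch of the flat-array storage such worlds commonly use (the chunk size and type codes here are made up): one byte per block keeps blocks as lightweight as possible, and the world becomes a grid of chunks that a tree or visibility pass can cull as whole units.

      CHUNK = 16
      EMPTY, DIRT = 0, 1

      class Chunk:
          def __init__(self):
              # One byte per block; everything starts as EMPTY.
              self.blocks = bytearray(CHUNK * CHUNK * CHUNK)

          def _index(self, x, y, z):
              return (y * CHUNK + z) * CHUNK + x

          def get(self, x, y, z):
              return self.blocks[self._index(x, y, z)]

          def set(self, x, y, z, block_type):
              self.blocks[self._index(x, y, z)] = block_type

      c = Chunk()
      c.set(1, 2, 3, DIRT)
      assert c.get(1, 2, 3) == DIRT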

    Read the article

  • PHP hosting some info required [closed]

    - by mtk
    I have recently been given control of a newly bought hosting space and its domain account. There is a technical team from the hosting site to help out with problems, but that is a long process: log a ticket, wait for a long time, and I don't get the correct answer on the first shot. I was wondering if anyone has a helpful guide on how one should go about hosting a site. Any info that one must know w.r.t. cPanel? Any other useful stuff, if anyone has it, or could point me to? Just to give a few difficulties:

    - The same PHP code that works well on my local machine gives the error "File not found" on the remote one. The file is indeed present, as I have FTP'ed all the files correctly.
    - session_start() errors are output to the HTML page with the warning "Headers already sent".
    - ... and many more technical things that work well locally but not on the actual hosting server.

    So, if anyone has any helpful material on this, as to what changes are required or what a programmer must be aware of from a hosting perspective, please let me know. Note: I am hosting a PHP site with a MySQL DB in a shared environment.
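
    On the session_start() warning specifically: PHP must send the session cookie before any output, so the usual fix is to call it at the very top of the script, before any HTML or even a stray blank line (a sketch; the include name is made up):

      <?php
      session_start();          // must come before ANY output
      require 'config.php';     // hypothetical include
      ?>
      <html>
      <!-- page output starts only after session_start() has run -->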

    Read the article

  • The best programmer is N times more effective than the worst? Who Cares?

    - by StevenWilkins
    There is a latent belief in programming that the best programmer is N times more effective than the worst, where N is usually between 10 and 100. Here are some examples:

    http://www.devtopics.com/programmer-productivity-the-tenfinity-factor/
    http://www.joelonsoftware.com/articles/HighNotes.html
    http://haacked.com/archive/2007/06/25/understanding-productivity-differences-between-developers.aspx

    There is some debate as to whether or not it has been proven: http://morendil.github.com/folklore.html

    I'm confident in the accuracy of these statements:

    - The best salesmen in the world are probably 10-100 times better than the worst
    - The best drivers in the world are probably 10-100 times better than the worst
    - The best soccer players in the world are probably 10-100 times better than the worst
    - The best CEOs in the world are probably 10-100 times better than the worst

    In some cases, I'm sure the difference is greater. In fact, you could probably say that the best [insert any skilled profession here] in the world are probably 10-100 times better than the worst. We don't know what N is for the rest of these professions, so why concern ourselves with what the actual number is for programming? Can we not just say that the number is large enough that it's very important to hire the best people, and move on already?

    Read the article

  • Is this JS code a good way for defining class with private methods?

    - by tigrou
    I was recently browsing an open-source JavaScript project. The project is a straight port of another project written in C. It mostly uses static methods, packed together in classes. Most classes are implemented using this pattern:

      Foo = (function () {
          var privateField = "bar";
          var publicField = "bar";

          function publicMethod() {
              console.log('this is public');
          }

          function privateMethod() {
              console.log('this is private');
          }

          return {
              publicMethod : publicMethod,
              publicField : publicField
          };
      })();

    This was the first time I saw private methods implemented that way. I perfectly understand how it works, using an immediately-invoked anonymous function. Here is my question: is this pattern a good practice? What are the actual limitations or caveats? Usually I declare my JavaScript classes like this:

      Foo = new function () {
          var privateField = "test";
          this.publicField = "test";

          this.publicMethod = function () {
              console.log('this method is public');
              privateMethod();
          };

          function privateMethod() {
              console.log('this method is private');
          }
      };

    Other than syntax, is there any difference with the pattern shown above?

    Read the article
