Search Results

Search found 16706 results on 669 pages for 'non blocking io'.


  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric 2D game with moderate-scale multiplayer — approximately 20-30 players connected at once to a persistent server. I've had some difficulty getting a good movement prediction implementation in place.

    Physics/Movement: The game doesn't have a true physics implementation, but it uses the basic principles to implement movement. Rather than continually polling input, state changes (i.e., mouse down/up/move events) are used to change the state of the character entity the player is controlling. The player's direction (e.g., north-east) is combined with a constant speed and turned into a true 3D vector — the entity's velocity. In the main game loop, "Update" is called before "Draw". The update logic triggers a "physics update task" that tracks all entities with a non-zero velocity and uses very basic integration to change each entity's position. For example: entity.Position += entity.Velocity.Scale(ElapsedTime.Seconds) (where "Seconds" is a floating-point value, but the same approach would work for millisecond integer values). The key point is that no interpolation is used for movement — the rudimentary physics engine has no concept of a "previous state" or "current state", only a position and a velocity.

    State Change and Update Packets: When the velocity of the character entity the player is controlling changes, a "move avatar" packet is sent to the server containing the entity's action type (stand, walk, run), direction (north-east), and current position. This is different from how 3D first-person games work. In a 3D game the velocity (direction) can change frame to frame as the player moves around, so sending every state change would effectively transmit a packet per frame, which would be too expensive. Instead, 3D games seem to ignore state changes and send "state update" packets on a fixed interval — say, every 80-150 ms. Since speed and direction updates occur much less frequently in my game, I can get away with sending every state change. Although all of the physics simulations run at the same speed and are deterministic, latency is still an issue. For that reason, I send out routine position update packets (similar to a 3D game) but much less frequently — right now every 250 ms, though I suspect with good prediction I can easily push that towards 500 ms. The biggest problem is that I've now deviated from the norm — all other documentation, guides, and samples online send routine updates and interpolate between the two states. That approach seems incompatible with my architecture, so I need to come up with a better movement prediction algorithm that is closer to a (very basic) "networked physics" architecture. The server receives the packet and determines the player's speed from its movement type based on a script (Is the player able to run? Get the player's running speed). Once it has the speed, it combines it with the direction to get a vector — the entity's velocity. Some cheat detection and basic validation occur, and the entity on the server side is updated with the current velocity, direction, and position. Basic throttling is also performed to prevent players from flooding the server with movement requests. After updating its own entity, the server broadcasts an "avatar position update" packet to all other players within range. That position update packet is used to update the client-side physics simulations (world state) of the remote clients and to perform prediction and lag compensation.

    Prediction and Lag Compensation: As mentioned above, clients are authoritative for their own position. Except in cases of cheating or anomalies, the client's avatar will never be repositioned by the server. No extrapolation ("move now and correct later") is required for the client's own avatar — what the player sees is correct. However, some sort of extrapolation or interpolation is required for all remote entities that are moving; some form of prediction and/or lag compensation is clearly required within the client's local simulation / physics engine.

    Problems: I've been struggling with various algorithms, and have a number of questions and problems (a sketch of one possible approach follows this list):

    1. Should I be extrapolating, interpolating, or both? My gut feeling is that I should be using pure extrapolation based on velocity: a state change is received by the client, the client computes a "predicted" velocity that compensates for lag, and the regular physics system does the rest. However, this feels at odds with all the other sample code and articles — they all seem to store a number of states and perform interpolation without a physics engine. When a packet arrives, I've tried interpolating the packet's position with the packet's velocity over a fixed time period (say, 200 ms). I then take the difference between the interpolated position and the current "error" position to compute a new vector, and place that on the entity instead of the velocity that was sent. However, this assumes another packet will arrive within that interval, and it's incredibly difficult to guess when the next packet will arrive — especially since packets don't all arrive on fixed intervals (state changes arrive irregularly too). Is the concept fundamentally flawed, or is it correct but in need of some fixes/adjustments?

    2. What happens when a remote player stops? I can immediately stop the entity, but then it will sit in the "wrong" spot until it moves again. If I estimate a vector or try to interpolate, I have a problem because I don't store the previous state — the physics engine has no way to say "you need to stop after you reach position X". It understands only a velocity, nothing more complex. I'm reluctant to add the packet's movement-state information to the entities or the physics engine, since that violates basic design principles and bleeds network code across the rest of the game engine.

    3. What should happen when entities collide? There are three scenarios: the controlling player collides locally, two entities collide on the server during a position update, or a remote entity's update collides on the local client. In all cases I'm uncertain how to handle the collision — aside from cheating, both states are "correct", but at different points in time. In the case of a remote entity it doesn't make sense to draw it walking through a wall, so I perform collision detection on the local client and cause it to "stop". Based on point #2 above, though, I might compute a "corrected vector" that continually tries to move the entity through the wall — which will never succeed — so the remote avatar is stuck there until the error grows too high and it "snaps" into position. How do games work around this?
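    A common middle ground between pure extrapolation and stored-state interpolation is dead reckoning with smooth error correction: keep extrapolating each remote entity from its last reported position and velocity, but blend the drawn position toward that moving target instead of snapping to it. A minimal sketch, with invented names and an assumed fixed-timestep update loop:

        // Dead reckoning with exponential error smoothing (illustrative only).
        public class RemoteEntity {
            // Last authoritative state received from the server.
            private double reportedX, reportedY;
            private double velX, velY;
            private double timeSinceUpdate; // seconds since the last packet
            // The position actually drawn on screen.
            private double renderX, renderY;
            // Convergence rate (1/s): higher snaps faster, lower hides errors better.
            private static final double CONVERGE_RATE = 10.0;

            public void onPositionUpdate(double x, double y, double vx, double vy) {
                reportedX = x; reportedY = y;
                velX = vx; velY = vy;
                timeSinceUpdate = 0;
            }

            public void update(double dt) {
                timeSinceUpdate += dt;
                // Extrapolate the authoritative state forward (dead reckoning).
                double targetX = reportedX + velX * timeSinceUpdate;
                double targetY = reportedY + velY * timeSinceUpdate;
                // Exponentially converge the drawn position toward the target,
                // so corrections blend in rather than snapping.
                double blend = 1.0 - Math.exp(-CONVERGE_RATE * dt);
                renderX += (targetX - renderX) * blend;
                renderY += (targetY - renderY) * blend;
            }
        }

    Note how a "stop" falls out naturally: when a packet arrives with zero velocity, the target freezes at the reported stop position and the drawn position glides onto it over a few frames, without the physics engine ever needing to know about packets (question #2). The same blending also absorbs the small positional error left behind when local collision detection halts a remote entity early (question #3).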


  • How to continuously send data without blocking?

    - by Donal Rafferty
    I am trying to send RTP audio data from my Android application. I can currently send one RTP packet with the code below, and I also have another class extending Thread that listens for and receives RTP packets. My question is: how do I continuously send my updated buffer through the packet payload without blocking the receiving thread?

        public void run() {
            isRecording = true;
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            int buffersize = AudioRecord.getMinBufferSize(8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
            Log.d("BUFFERSIZE", "Buffer size = " + buffersize);
            arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    buffersize);
            short[] readBuffer = new short[80];
            byte[] buffer = new byte[160];
            arec.startRecording();
            while (arec.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                int frames = arec.read(readBuffer, 0, 80);
                @SuppressWarnings("unused")
                int lengthInBytes = codec.encode(readBuffer, 0, buffer, frames);
                RtpPacket rtpPacket = new RtpPacket();
                rtpPacket.setV(2);
                rtpPacket.setX(0);
                rtpPacket.setM(0);
                rtpPacket.setPT(0);
                rtpPacket.setSSRC(123342345);
                rtpPacket.setPayload(buffer, 160);
                try {
                    rtpSession2.sendRtpPacket(rtpPacket);
                } catch (UnknownHostException e) {
                    e.printStackTrace();
                } catch (RtpException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    When I send on one device and receive on another I get decent audio, but when I send and receive on both devices I get broken sound, as if they are taking turns sending and receiving. I have a feeling it could be the while loop — could it be spinning in there and not letting anything else run?
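    One common way out, sketched below with invented names (RtpSender stands in for the question's rtpSession2.sendRtpPacket call), is to decouple capture from sending: the recording loop only encodes and enqueues frames, while a dedicated sender thread drains the queue. Neither loop then monopolizes the CPU long enough to starve the receiving thread.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class RtpSendPipeline {
            // Stand-in for the question's RTP session; an assumption, not a real API.
            public interface RtpSender {
                void send(byte[] payload);
            }

            // Bounded queue: roughly one second of 20 ms audio frames.
            private final BlockingQueue<byte[]> outgoing = new ArrayBlockingQueue<byte[]>(50);

            // Called from the recording loop: returns immediately.
            public void enqueue(byte[] encodedFrame) {
                if (!outgoing.offer(encodedFrame)) {
                    outgoing.poll();              // drop the oldest frame...
                    outgoing.offer(encodedFrame); // ...rather than stalling capture
                }
            }

            // Sending happens on its own thread, so it can block safely.
            public void startSender(final RtpSender sender) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            while (true) {
                                sender.send(outgoing.take()); // blocks this thread only
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt(); // exit cleanly
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
            }
        }

    Each frame passed to enqueue should be a fresh byte[] (copy the encode buffer first), since the recording loop reuses its buffer on the next iteration.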


  • java.io.IOException: Server returned HTTP response code: 403 for URL

    - by Adao
    I want to download the MP3 file from the URL "http://upload13.music.qzone.soso.com/30671794.mp3", but I always get java.io.IOException: Server returned HTTP response code: 403 for URL. The URL opens fine in a browser. Below is part of my code:

        BufferedInputStream bis = null;
        BufferedOutputStream bos = null;
        try {
            URL url = new URL(link);
            URLConnection urlConn = url.openConnection();
            urlConn.addRequestProperty("User-Agent",
                    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");
            String contentType = urlConn.getContentType();
            System.out.println("contentType:" + contentType);
            InputStream is = urlConn.getInputStream();
            bis = new BufferedInputStream(is, 4 * 1024);
            bos = new BufferedOutputStream(new FileOutputStream(fileName.toString()));

    Could anyone help me? Thanks in advance!
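    A 403 on a URL that works in a browser usually means the server checks more than the User-Agent — often the Referer header. The sketch below is a guess rather than a known fix for this particular host (the Referer value is invented); it also completes the copy loop and checks the response code first so a failure can be diagnosed:

        import java.io.*;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class Mp3Downloader {
            public static void download(String link, String fileName) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(link).openConnection();
                // Headers must be set before the connection is opened
                // (getContentType()/getInputStream() both open it).
                conn.setRequestProperty("User-Agent",
                        "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");
                conn.setRequestProperty("Referer", "http://music.qzone.soso.com/"); // assumption
                int code = conn.getResponseCode();
                if (code != HttpURLConnection.HTTP_OK) {
                    throw new IOException("Server returned HTTP " + code);
                }
                InputStream in = new BufferedInputStream(conn.getInputStream(), 4 * 1024);
                OutputStream out = new BufferedOutputStream(new FileOutputStream(fileName));
                try {
                    byte[] buf = new byte[4 * 1024];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        out.write(buf, 0, n); // stream the MP3 to disk in 4 KB chunks
                    }
                } finally {
                    out.close();
                    in.close();
                }
            }
        }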


  • Measuring device drivers CPU/IO utilization caused by my program

    - by Lior Kogan
    Sometimes code can load device drivers to the point where the system becomes unresponsive. I recently optimized WIN32/VC++ code that made the system almost unresponsive even though CPU usage was very low. The cause was thousands of creations and destructions of GDI objects (pens, brushes, etc.). Once I refactored the code to create all objects only once, the system became responsive again. This leads me to the question: is there a way to measure the CPU/IO usage that device drivers (GPU, disk, etc.) incur on behalf of a given program / function / line of code?


  • Downloading JavaScript Without Blocking

    - by doug
    The context: my question relates to improving web-page loading performance, and in particular to the effect that JavaScript has on page loading (resources/elements below the script are blocked from downloading/rendering). This problem is usually avoided or mitigated by placing scripts at the bottom of the page (e.g., just before the closing body tag). The code I am looking at is for web analytics. Placing it at the bottom reduces its accuracy, and because this script has no effect on the page's content — it's not rewriting any part of the page — I want to move it inside the head. Just how to do that without ruining page-loading performance is the crux.

    From my research, I've found six techniques (with support in all or most of the major browsers) for downloading scripts so that they don't block down-page content from loading/rendering: (i) XHR + eval(); (ii) XHR + inject; (iii) downloading the HTML-wrapped script in an iframe; (iv) setting the script tag's 'async' attribute to true (HTML5 only); (v) setting the script tag's 'defer' attribute; and (vi) the 'Script DOM Element' pattern.

    It's the last of these I don't understand. The JavaScript to implement pattern (vi) is:

        (function() {
            var q1 = document.createElement('script');
            q1.src = 'http://www.my_site.com/q1.js';
            document.documentElement.firstChild.appendChild(q1);
        })();

    Seems simple enough: inside this anonymous function, a script element is created, its 'src' attribute is set to the script's location, and then the script element is added to the DOM. But while each line is clear, it's still not clear to me how exactly this pattern allows the script to load without blocking down-page elements/resources from rendering/loading.


  • IWAB0399E Error in generating Java from WSDL: java.io.IOException: ERROR: Missing <soap:fault> element

    - by DanO
    I have a WCF 4.0 service for internal use. Another team is trying to consume it in Java and gets:

        IWAB0399E Error in generating Java from WSDL: java.io.IOException:
        ERROR: Missing <soap:fault> element inFault "PasswordReuseFaultFault" ...

    One source suggests it may be a SOAP 1.1 vs. SOAP 1.2 issue. Indeed, my WCF-generated WSDL contains:

        <wsdl:fault name="PasswordReuseFaultFault">
            <wsp:PolicyReference URI="#blah_blah_blah_PasswordReuseFaultFault_Fault"/>
            <soap12:fault name="PasswordReuseFaultFault" use="literal"/>
        </wsdl:fault>

    Notice the <soap12:fault> instead of the expected <soap:fault>. I'm pretty sure that is the cause of the problem. How do I get WCF to generate SOAP 1.1 WSDL? Or what should I tell the Java team to do so their tools can understand the newer protocol?
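    WCF emits SOAP 1.2 bindings (soap12:fault) for wsHttpBinding and SOAP 1.1 (soap:fault) for basicHttpBinding, so the WSDL fragment above suggests the endpoint is using wsHttpBinding or similar. A hedged sketch of the config change — the service and contract names below are placeholders:

        <system.serviceModel>
          <services>
            <service name="MyNamespace.MyService">
              <!-- basicHttpBinding speaks SOAP 1.1, which older Java
                   WSDL-to-code tools understand -->
              <endpoint address=""
                        binding="basicHttpBinding"
                        contract="MyNamespace.IMyService" />
            </service>
          </services>
        </system.serviceModel>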


  • libcurl (c api) READFUNCTION for http PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library and am having two problems with a PUT message; I am just trying to send a small piece of content like "hello" via PUT. I am using libcurl 7.20.1.

    1. My READFUNCTION for PUTs blocks for a very long time (minutes) when I follow the manual at curl.haxx.se and return 0 to indicate I have finished the content (on OS X). When I return something > 0 it succeeds much faster (< 1 sec).

    2. When I run this on my Linux machine (Ubuntu 10.04) the blocking call appears to NEVER return when I return 0. If I instead return the size written, libcurl appends all the data in the HTTP body, sending far more data, and the request fails with a "too much data" message from the server.

    My read function is below (reconstructed here with the pointer syntax and closing braces that the original formatting lost); any help would be greatly appreciated.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            if (stream) {
                postdata *ud = (postdata *)stream;
                if (ud->bytes_remaining) {
                    if (ud->body_size > size * nmemb) {
                        memcpy(ptr, (char *)ud->data + ud->bytes_written, size * nmemb);
                        ud->bytes_written += size * nmemb; /* original read "size+nmemb",
                                                              likely a typo for size*nmemb */
                        ud->bytes_remaining = ud->body_size - size * nmemb;
                        return size * nmemb;
                    } else {
                        memcpy(ptr, (char *)ud->data + ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        /* Note: libcurl treats a return of 0 as end-of-stream, so the
                           bytes copied just above are never sent. The final chunk's
                           size should be returned here, and 0 only on the next call -
                           which would explain the server waiting forever for the
                           promised Content-Length. */
                        return 0;
                    }
                }
            }
            return 0;
        }


  • Is memory allocation in Linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation. For example:

        struct Node {
            int a, b;
        };
        ...
        Node *foo = new Node();

    If multiple threads tried to create a new Node, and one of them was suspended by the OS in the middle of allocation, would it block the other threads from making progress? I ask because I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle nodes. The throughput of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks.

    Edit: pointing me to the code for the C++ memory allocator on Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.


  • HTTP server that handles requests through IO devices?

    - by StackedCrooked
    This question can probably only be answered for Unix-like systems that follow the "everything is a file" idiom. Would it be hard to create a web server that mounts local devices for handling HTTP traffic? It would enable a program to read raw HTTP requests from /dev/httpin (for example) and write the responses to /dev/httpout. I think this would be nice because it would let me create a web server in any programming language capable of handling IO streams. I don't really know where to start. Any suggestions on how to set up such a system?
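    Short of writing a kernel driver, named pipes get most of the way to this idea: mkfifo httpin httpout creates filesystem entries that any language can open as ordinary files, and some small front end would still have to shuttle socket traffic into them. A rough sketch of the handler side, with invented pipe names, serving one request per run:

        import java.io.*;

        public class FifoHttpHandler {
            public static void main(String[] args) throws IOException {
                // Opening a FIFO blocks until the front end opens the other end.
                BufferedReader in = new BufferedReader(new FileReader("httpin"));
                Writer out = new BufferedWriter(new FileWriter("httpout"));
                String requestLine = in.readLine(); // e.g. "GET / HTTP/1.1"
                // Drain the remaining request headers (up to the blank line).
                for (String h = in.readLine(); h != null && !h.isEmpty(); h = in.readLine()) {
                    // headers ignored in this sketch
                }
                String body = "you asked: " + requestLine + "\r\n";
                out.write("HTTP/1.1 200 OK\r\nContent-Length: " + body.length()
                        + "\r\nContent-Type: text/plain\r\n\r\n" + body);
                out.flush();
                out.close();
                in.close();
            }
        }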


  • Hardening non-root standalone Linux Tomcat install

    - by NoozNooz42
    I want to know if you have any tips on how to strengthen the security of a non-root install of Tomcat in standalone mode, once Tomcat is already installed in a non-root account. I specify this because, for example, I'm not at all interested in the answers given here (because there both Java and Tomcat require root privileges to be installed, and I've got zero interest in running jsvc): http://serverfault.com/questions/43765

    So far, here's what I've done for my non-root standalone Tomcat 6 install:

    - download and install the JRE .bin provided by Oracle/Sun (no need to be root here; no need for a full JDK anymore, right, given that Jasper — Tomcat's JSP engine — now has its own compiler?)
    - download and tar -xzf Tomcat 6 (no need to be root here)
    - set up transparent port forwarding (must be root here)

    Note that my distribution is a Debian one and I have exactly zero interest in downloading a Debian package / backports / whatever, because — once again — I do NOT want to need to be root to install Java and Tomcat. The only moment I needed to be root was to configure the firewall to transparently forward ports 80 <-- 8080 and 443 <-- 8443.

    I then deleted all the default webapps but one:

        cd ~/apache-tomcat-6.0.26/webapps
        rm -rf docs
        rm -rf examples/
        rm -rf manager/
        rm -rf ROOT/

    What about the directory ~/apache-tomcat-6.0.26/webapps/host-manager — do I need it, or can I delete it? So, once I've installed Tomcat standalone in a non-root account (given that I don't want to enter the root password anymore and don't plan to install the whole Apache shebang), what more can I do? Are there connectors I can disable? (How?)
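    On the connector question: if only HTTP/HTTPS are in use, the AJP connector can be commented out of conf/server.xml, and the shutdown port can be disabled if you stop Tomcat by killing the process instead. A sketch using the default ports (check your own server.xml for the actual values):

        <!-- Comment out the AJP connector if nothing proxies to Tomcat via AJP: -->
        <!-- <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> -->

        <!-- Setting the shutdown port to -1 disables it entirely: -->
        <Server port="-1" shutdown="SHUTDOWN">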


  • Ignore non-Unicode programs language when installing software

    - by mitya
    This has been driving me nuts for a while and I haven't been able to find a solution anywhere. I am running Windows 7 and my "Language for non-Unicode programs" setting is set to Russian, which I need for some non-Unicode software with a Russian UI. For most of my software, however, I prefer the English UI. A lot of software out there is multilingual and too smart for my liking: during installation it switches the UI to Russian, and the software stays in Russian after installation with no option to change it, short of setting the non-Unicode language to English — and it switches back to Russian once I revert the setting and reboot. Most of the time it is driver software, i.e., Intel, HP, etc. How can I force an installation to run in English and stay that way after install, ignoring the "Language for non-Unicode programs" setting? I understand this might be specific to the installer (MSI, InstallShield, etc.), but any solution would be welcome, even one I have to apply for every single installation. Thanks in advance for any helpful information!


  • What books should I read to be able to communicate with programmers? [migrated]

    - by Zak833
    My experience is in online marketing, UI/UX and web design, but I know virtually no programming. I have recently been hired to build a new, fairly complex site from scratch, for which I will be working with an experienced programmer with whom I have worked extensively in the past. Although I have a decent understanding of certain technical concepts relating to web development, I would like to build a better appreciation of the programmer's craft, in order to improve communication with my programmer, as well as the client. I have heard Code Complete is quite a good book for this. Other than reading this and learning some basic programming, are there any other books or resources that could be recommended to the non-programmer who does not wish to become a programmer, yet wishes to understand the most common concepts involved in building software, web-based or otherwise?


  • Working with Sub-Optimal Disk Configurations (Making the best of what you’ve got)

    - by Jonathan Kehayias
    This is the first post in what will be a series on working with a sub-optimal disk configuration to squeeze as much performance out of it as possible. You might ask what a sub-optimal disk configuration is. In this case it is a Dell PowerVault MD3000 with 15 Seagate Barracuda ES.2 SAS 1 TB 7.2K RPM disks (model number ST31000640SS). This equates to just under 14 TB of raw storage that can be configured into a number of RAID configurations. In this case, the disk array...(read more)


  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I have for a fun toy project, to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB.

    With that in mind, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file", including hardware (like /dev/*), so that has to apply to a database too — and it clearly does: whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem.

    That said, there comes a point of scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design with that mindset (I assume). However, reading from secondary memory is much slower, and regardless, some portion of the database has to be in primary memory for you to be able to do anything with it. I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as the above question suggests). The key seems to be indexing. But indexes also need to be stored in secondary memory; they are inherently smaller than the database itself, but in a very large database the indexes might be prohibitively large too.

    So my question is: how is I/O generally done in large databases like the one I described above — at least 1 TB, storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work? What data structures should be involved?
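    The heart of the answer is that databases never read "the file" — they read fixed-size pages by offset, and an index (typically a B-tree, itself stored in pages) tells them which few pages to touch. A toy sketch of the paging layer, with invented names:

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class PagedStore {
            // Matches a typical filesystem block size, so one page = one disk read.
            static final int PAGE_SIZE = 4096;

            private final RandomAccessFile file;

            public PagedStore(String path) throws IOException {
                this.file = new RandomAccessFile(path, "rw");
            }

            // One seek + one read, no matter how large the file is.
            public byte[] readPage(long pageNo) throws IOException {
                byte[] page = new byte[PAGE_SIZE];
                file.seek(pageNo * PAGE_SIZE);
                file.readFully(page);
                return page;
            }

            public void writePage(long pageNo, byte[] page) throws IOException {
                file.seek(pageNo * PAGE_SIZE);
                file.write(page, 0, PAGE_SIZE);
            }
        }

    A B-tree over 1 TB of 4 KB pages stays shallow — a branching factor in the hundreds gives three or four levels — so even a "prohibitively large" index resolves a lookup in a handful of page reads, with the hot upper levels cached in primary memory.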


  • Responsibility for storage

    - by Stefano Borini
    A colleague and I were brainstorming about where to put the responsibility for an object to store itself on disk in our own file format. There are basically two choices:

        object.store(file)
        fileformatWriter.store(object)

    The first gives the responsibility for serialization to disk to the object itself; this is similar to the approach used by Python's pickle. The second puts the representation responsibility on a file-format writer object, and the data object remains a plain data container (possibly with additional methods not relevant to storage). We agreed on the second approach, because it centralizes the writing logic away from the generic data.

    We also have cases of objects implementing complex logic that need to store information while the logic is in progress. In those cases the fileformatwriter object can be passed in and used as a delegate, with storage operations called on it. With the first pattern, the complex-logic object would instead accept the raw file and implement the writing logic itself. The first method, however, has the advantage that the object knows how to write and read itself from any file containing it, which may also be convenient. I would like to hear your opinions before starting a rather complex refactoring.
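    For what it's worth, the delegate variant of the second option might look like this sketch (names invented): the writer owns the format, and the long-running computation only ever sees the narrow writing interface.

        // The file-format writer owns all representation concerns.
        interface FileFormatWriter {
            void writeHeader(String name);
            void writeRecord(double[] values);
        }

        class PipeNetwork {
            // Plain data container plus domain logic; no storage knowledge.
            private final double[] flows = new double[] {1.0, 2.5};

            // Storage is delegated, so the logic can emit records mid-run
            // without ever touching the raw file.
            void solveAndStore(FileFormatWriter writer) {
                writer.writeHeader("pipe-flows");
                for (int step = 0; step < 10; step++) {
                    // (domain computation would update 'flows' here)
                    writer.writeRecord(flows);
                }
            }
        }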


  • How are Java ByteBuffer's limit and position variables updated?

    - by Dummy Derp
    There are two scenarios, writing and reading.

    Writing:

    - Whenever I write something to the ByteBuffer by calling its put(byte[]) method, the position variable is incremented (current position + size of byte[]) and the limit stays at the maximum. If, however, I put the data in through a view buffer, I have to calculate and update the position manually.
    - Before I call a channel's write(ByteBuffer) method to write something, I have to flip() the ByteBuffer so that position points to zero and limit points to the last byte that was written to the ByteBuffer.

    Reading:

    - Whenever I call a channel's read(ByteBuffer) method to read something, the position variable stays at 0 and the limit variable of the ByteBuffer points to the last byte that was read. So, if the ByteBuffer is smaller than the file being read, the limit variable is pushed to the maximum.
    - This means that the ByteBuffer is already flipped and I can proceed to extracting values from it.

    Please correct me where I am wrong :)
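    For what it's worth, the reading half is where this mental model usually breaks down: a channel's read(ByteBuffer) fills the buffer starting at position and advances position as it goes (limit is untouched), so the buffer is not pre-flipped — a flip() is still needed before draining it. The writing half can be checked with a few self-contained lines:

        import java.nio.ByteBuffer;

        public class BufferStateDemo {
            public static void main(String[] args) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                buf.put(new byte[] {1, 2, 3, 4});
                // After put: position = 4, limit = 16 (the capacity).
                System.out.println("after put:  pos=" + buf.position() + " lim=" + buf.limit());
                buf.flip();
                // After flip: position = 0, limit = 4 - ready to be drained
                // by a channel write or a series of get() calls.
                System.out.println("after flip: pos=" + buf.position() + " lim=" + buf.limit());
            }
        }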


  • Quick Tip - Speed a Slow Restore from the Transaction Log

    - by KKline
    Here's a quick tip for you: During some restore operations on Microsoft SQL Server, the transaction log redo step might be taking an unusually long time. Depending somewhat on the version and edition of SQL Server you've installed, you may be able to increase performance by tinkering with the readahead performance for the redo operations. To do this, you should use the MAXTRANSFERSIZE parameter of the RESTORE statement. For example, if you set MAXTRANSFERSIZE=1048576, it'll use 1MB buffers. If you...(read more)


  • iPhone in-app purchasing for Ecommerce [closed]

    - by Kyle B.
    This may not be the appropriate place for this question, but I would like to ask it in the hope that an iOS developer familiar with the relevant rules and regulations can comment. I would like to develop an iOS app that performs ecommerce transactions. If I roll my own payment processor and checkout process: 1) is this allowed by Apple's rules, and 2) would I be required to remit 30% of each transaction's sale to Apple?


  • How many disks should I use to meet the capacity and IOPS need?

    - by facebook-100005613813158
    An application needs 1.6 TB of storage capacity and performs 1000 IOPS. How many disks are required to meet the application requirement and offer acceptable response time? The disk specifications are as follows:

    - Drive capacity = 100 GB
    - 15K RPM
    - Each disk can perform 50 IOPS

    There are four candidate answers — 10, 12, 16, and 20 — which one is most likely? In my opinion, 16 disks can only meet the capacity need but cannot meet the IOPS need, so the right answer should be 20 disks. Right?
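    The arithmetic bears that reasoning out — both constraints must be met, so the larger disk count wins:

        disks for capacity = 1.6 TB / 100 GB          = 16 disks
        disks for IOPS     = 1000 IOPS / 50 IOPS/disk = 20 disks
        disks required     = max(16, 20)              = 20 disks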


  • At what point is asynchronous reading of disk I/O more efficient than synchronous?

    - by blesh
    Assuming there is some bit of code that reads files for multiple consumers, and the files are of any arbitrary size: at what size does it become more efficient to read a file asynchronously? Or, to put it another way, how small must a file be for it to be faster just to read it synchronously? I've noticed (perhaps incorrectly) that when reading very small files, it takes longer to read them asynchronously than synchronously (in particular with .NET). I'm assuming this has to do with setup time for things like I/O completion ports, threads, etc. Is there any rule of thumb to help out here? Or is it dependent on the system and the environment?
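    The suspected setup cost is real: the asynchronous path allocates extra machinery per operation, which is pure overhead for a file small enough to be read in one cheap blocking call. A sketch of the two paths (in Java rather than .NET, but the trade-off is the same):

        import java.nio.ByteBuffer;
        import java.nio.channels.AsynchronousFileChannel;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;
        import java.util.concurrent.Future;

        public class SyncVsAsyncRead {
            public static void main(String[] args) throws Exception {
                Path path = Paths.get("data.bin"); // hypothetical input file

                // Synchronous: one call, no extra scheduling machinery.
                byte[] all = Files.readAllBytes(path);

                // Asynchronous: channel + buffer + future per operation; worth it
                // only when reads are large or there is other work to overlap.
                try (AsynchronousFileChannel ch =
                         AsynchronousFileChannel.open(path, StandardOpenOption.READ)) {
                    ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
                    Future<Integer> pending = ch.read(buf, 0);
                    int bytesRead = pending.get(); // blocking here only for the demo
                    System.out.println("sync=" + all.length + " async=" + bytesRead);
                }
            }
        }

    Where the break-even point sits depends on the OS, the storage, and the thread-pool configuration, which is why measured rules of thumb tend not to transfer between systems.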


  • Ubuntu 13.XX unable to mount USB HDD. Tried everything. I/O error boot sector/file system

    - by XaviGG
    I know there are many related posts, but none of them helped me. I will jump straight to the test that should work but doesn't: an external HDD with a single NTFS partition, slow-formatted in Windows, empty and clean. Checking for errors reports that none were found. Moving to Ubuntu 13.04... GParted throws the first error when trying to read the disk: Input/output error. The content of the disk appears as unknown, and I am unable to create a partition table or format it — I get the same error when trying. If I try to mount it in the terminal it tells me the same, specifying that there is also an I/O error reading the boot sector.

    I have had this problem since I upgraded (always with a fresh install) to 13.04. I thought it would be solved by 13.10, but the behavior is the same. I have tried with two different drives (an HD and an SSHD) that work perfectly in Windows 7. In 13.04 at least it attempted to mount — the icon of the drive kept appearing and disappearing until it finally disappeared. But now it doesn't even try.

    Possible cause: the HDD was my old main HDD, so it had WIN, RECOVERY, SYSTEM, UBU, and SWAP partitions. Maybe the way or place where the partition table is defined is not the best for an external HDD, but I don't know much about that topic. I would appreciate it a lot if someone could give me a guideline for turning one of these HDDs into a working external HDD. There are no files to recover and nothing to care about — just format the disk completely and make it usable for storing backups, without having to move the files to the Windows partition first, load Windows, and then copy them to the external HDD (I want to use a file comparator for the backups). Thanks a lot.

    Edit 1: I found an option in Windows to convert the drive to a dynamic disk, which warns me that I won't be able to run an OS from it after the change. I suppose that is what I need, because in the current mode I cannot safely remove it. But it gave me an error saying it couldn't change the mode.


  • Can I (reasonably) refuse to sign an NDA for pro bono work? [closed]

    - by kojiro
    A friend of mine (let's call him Joe) is working on a promising project, and has asked me for help. As a matter of friendship I agreed (orally) not to discuss the details, but now he has a potential investor who wants me to sign a non-disclosure agreement (NDA). Since thus far all my work has been pro bono I told Joe I am not comfortable putting myself under documented legal obligation without some kind of compensation for the risk. (It needn't be strictly financial. I would accept a small ownership stake or possibly even just guaranteed credits in the code and documentation.) Is my request reasonable, or am I just introducing unnecessary complexity?


  • Reading input all together or in steps?

    - by nischayn22
    For many programming quizzes we are given a bunch of input lines, and we have to process each input, do some computation, and output the result. My question is: what is the best way to optimize the runtime of the solution?

    1. Read all input, store it (in an array or something), compute the results for all of it, and finally output everything together.
    2. Read one input, compute the result, output the result, and so on for each input given.
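    In practice option 2 with buffered streams usually wins: a buffered reader still performs large block reads under the hood, so streaming costs no extra I/O while using far less memory. A sketch (the squaring is a stand-in for whatever the quiz computes):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class StreamingQuiz {
            public static void main(String[] args) throws IOException {
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                StringBuilder out = new StringBuilder();
                for (String line = in.readLine(); line != null; line = in.readLine()) {
                    long n = Long.parseLong(line.trim()); // per-line parsing goes here
                    out.append(n * n).append('\n');       // stand-in computation
                }
                // Buffer the output as well: many small writes cost more than one big one.
                System.out.print(out);
            }
        }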


  • Structuring an input file

    - by Ricardo
    I am in the process of structuring a small program to perform hydraulic analysis of pipe flow. As I envision it, the program will read an input file, store the input parameters in a suitable way, operate on them, and finally output results. I am struggling with how to structure the input file in a sane way — that is, in a way that a human can write easily and a machine can parse easily. A sample input file made available to me for a similar program is just a stream of comma-separated numbers that make no sense on their own, so that's the scenario I am trying to avoid. Though I am giving the details of my particular problem, I am more interested in general input-file structuring strategies. Is a stream of comma-separated values my best bet? Would I be better off with some sort of key:value structure? I don't know much about this, so any help will probably put me on a better track than the one I am on now.
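    A line-oriented key = value format with comments tends to hit the sweet spot: humans can write it in any editor, and the parser stays a dozen lines. A sketch, where the parameter names are invented examples rather than a real standard — first the input file:

        # pipe network, SI units
        pipe.length    = 120.0      # m
        pipe.diameter  = 0.30       # m
        pipe.roughness = 0.00026    # m

    and then a minimal parser for it:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class InputFile {
            public static Map<String, Double> parse(String path) throws IOException {
                Map<String, Double> params = new HashMap<String, Double>();
                BufferedReader in = new BufferedReader(new FileReader(path));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        int hash = line.indexOf('#');           // strip comments
                        if (hash >= 0) line = line.substring(0, hash);
                        line = line.trim();
                        if (line.isEmpty()) continue;           // skip blank lines
                        String[] kv = line.split("=", 2);
                        if (kv.length < 2) continue;            // ignore malformed lines
                        params.put(kv[0].trim(), Double.parseDouble(kv[1].trim()));
                    }
                } finally {
                    in.close();
                }
                return params;
            }
        }

    Named keys also make the file self-documenting — the exact failing of the comma-separated sample — and leave room to grow into sections or repeated blocks (e.g., one block per pipe) later.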

