Search Results


  • Client/server game even in solo mode: any big problem?

    - by Klaim
    I'm making a game whose core design is strongly multiplayer-oriented, but which should also offer a really interesting, self-sufficient solo game, a bit like a real-time strategy game. The events and actions taken aren't as massive and immediate as in an FPS, so you can think of the networking as being like an RTS's. It's a PC game targeting Windows, Mac OS X and Linux (Ubuntu & Fedora). It's programmed in C++ using a variety of open source libraries, so I have great (potential) control over performance. So far I had always assumed that running the game as two applications, client & server, even in solo mode, was fine. However, now that I'm starting on the network code, I'm having doubts about whether it's a good idea. I'm not a specialist, so I might be missing something in my analysis. I see these pros and cons:

    Pros:
    - The game works only one way, so if I fix a bug it applies to all game modes, whatever the distance to the server is.
    - Basic networking issues would be detected early, including behaviour with protection software (firewalls) installed (I am no specialist, so this might be wrong).

    Cons:
    - I suppose that even if it were fast enough, networking client and server on the same computer would still be slower than no networking at all, passing messages within a single process's memory.
    - Maybe debugging would be more difficult? I don't have experience with this case, but so far I assume that Visual Studio lets me debug multiple processes, so it shouldn't be really different. There's also remote debugging.

    My question is: is there a big disadvantage that I missed? Or maybe there are advantages I missed that should encourage me to just continue with client-server-only game sessions?
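
    One common way to soften the first con (a suggestion, not something from the question) is to hide the transport behind an interface, so solo mode uses direct in-memory message passing while multiplayer uses real sockets. A minimal C++ sketch of the idea; all names and the queue-based loopback are assumptions:

        #include <cstdint>
        #include <deque>
        #include <vector>

        // Messages are opaque byte buffers here; a real game would use typed packets.
        using Message = std::vector<std::uint8_t>;

        // The game logic talks only to this interface, never to sockets directly.
        struct ITransport {
            virtual ~ITransport() = default;
            virtual void send(const Message& m) = 0;
            virtual bool receive(Message& out) = 0;  // false when nothing is pending
        };

        // Solo mode: client and server live in one process and share two queues,
        // so "networking" is just moving buffers around in memory.
        class LoopbackTransport : public ITransport {
        public:
            LoopbackTransport(std::deque<Message>& inbox, std::deque<Message>& outbox)
                : inbox_(inbox), outbox_(outbox) {}
            void send(const Message& m) override { outbox_.push_back(m); }
            bool receive(Message& out) override {
                if (inbox_.empty()) return false;
                out = std::move(inbox_.front());
                inbox_.pop_front();
                return true;
            }
        private:
            std::deque<Message>& inbox_;
            std::deque<Message>& outbox_;
        };

        // Multiplayer would provide a SocketTransport implementing the same
        // interface; the rest of the game code stays identical in both modes.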

  • Javascript: Pass array as optional method args

    - by Dave Paroulek
    console.log takes a format string and replaces tokens with values; for example:

        console.log("My name is %s, and I like %s", 'Dave', 'Javascript')

    would print:

        My name is Dave, and I like Javascript

    I'd like to wrap this inside a method like so:

        function log(msg, values) {
            if (config.log == true) {
                console.log(msg, values);
            }
        }

    The 'values' arg might be a single value or several optional args. How can I accomplish this? If I call it like so:

        log("My name is %s, and I like %s", "Dave", "Javascript");

    I get this (it doesn't recognize "Javascript" as a third argument):

        My name is Dave, and I like %s

    And if I call this:

        log("My name is %s, and I like %s", ["Dave", "Javascript"]);

    then it treats the second arg as an array (it doesn't expand into multiple args). What trick am I missing to get it to expand the optional args?
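
    For reference, a sketch of the usual trick (reusing the question's config object, which is assumed to exist): the implicit arguments object combined with Function.prototype.apply forwards however many arguments were actually passed:

        var config = { log: true };  // stand-in for the question's config object

        function log() {
            if (config.log == true) {
                // 'arguments' holds everything passed to log(); apply() spreads
                // it back out as individual arguments to console.log.
                console.log.apply(console, arguments);
            }
        }

        log("My name is %s, and I like %s", "Dave", "Javascript");
        // -> My name is Dave, and I like Javascript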

  • A big flat text file or an HTML site for language documentation?

    - by Bad Sector
    A project of mine is a small embeddable Tcl-like scripting language, LIL. While I'm mostly making it for my own use, I think it is interesting enough for others, so I want it to have nice (but not overly wordy) documentation. So far I'm using a single flat readme.txt file. It explains the language's syntax, features, standard functions, how to use the C API, etc. It is also easy to scan and read in almost every environment out there, from basic text-only terminals to full-fledged graphical desktop environments.

    However, while I've tried to keep things nicely formatted (as much as is possible in plain text), I still think that, being a big (and growing) wall of text, it isn't as easy on the eyes as it could be. I also feel that I sometimes write less than I want to in order to avoid expanding the text too much. So I thought I could use another project of mine, QuHelp, which is basically a help-site generator for sites like this one, with a sidebar that provides a tree of topics/subtopics and offline full-text search. With it I can use HTML to format the documentation, and if I use QuHelp for some other project that uses LIL, I can import LIL's documentation as part of the other project's documentation.

    However, converting the existing documentation to QuHelp/HTML isn't a small task, especially when it comes to the functions (I'll need to describe them in more detail than what currently exists in readme.txt). It also loses the wide availability the documentation currently has: even though QuHelp's generated code degrades gracefully down to console-only web browsers, plain text is readable everywhere, including from popular editors such as Vim and Emacs (someone once told me he likes LIL's documentation because he can read it without leaving his editor).

    So my question is simply this: should I keep the documentation as it is now, a single readme.txt file, or should I convert it to something like the site mentioned above? There is also the option of doing both, but I'm not sure I would always manage to keep them in sync, or whether it would be worth the effort. After asking around on IRC I got mixed answers: some liked the wide availability of the single text file; others said it looks as bad as a man page (personally I don't mind, I can read man pages just fine, but other people might have trouble with them). What do you think?

  • How to better copy&paste big files over RDP?

    - by WebMAOhist
    Recently I made a few attempts to copy & paste a big (1.2 GB) file to a remote computer over RDP. The remote computer is a virtual testing machine running MS Windows Server 2008 Datacenter. First I tried to copy & paste before midnight, when the transfer speed was limited by the client computer's ISP to 100 kB/s. It would have required a few hours, and I was forced to cancel the transfer since the remote desktop became too unresponsive and sluggish. So I re-started it after midnight, when my local transfer speed is over 4 MB/s.

    My impression is that, independently of the transfer speed, the remote computer becomes sluggish while copying over RDP, while downloading from the internet doesn't make the remote host sluggish at all. As far as I understand, this is because the clipboard of the remote computer, and therefore its memory, gets overloaded by the transfer. How can I control (restrict) the clipboard usage of a specific process (the pasting of a file)?

    Update: After reading that the slow transfer speed is caused by the encryption used for copy & paste over RDP, and since I am really interested in overall efficiency (both the time to get the file and the ability to work without waiting), I changed the question title from "How to control the usage of the remote desktop clipboard for pasting a big file?" to "How to better copy&paste big files over RDP?". For example, is it better to copy & paste one huge (zip) archive, or to unzip it and copy & paste a folder of unzipped files? More exactly, what are the possible ways to improve the overall experience: (1) the speed of the transfer itself (i.e. how soon the needed file is available), and (2) the responsiveness of the remote host (making the remote computer usable for work before the copy & paste completes)?
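
    One commonly suggested alternative (not from the question itself): with RDP drive redirection enabled, the client's drives appear on the remote host as \\tsclient\<drive>, and a restartable copy tool bypasses the clipboard entirely. A sketch, assuming drive C is redirected and with made-up paths:

        rem Run on the remote host; \\tsclient\C is the RDP-redirected client C: drive.
        rem /Z = restartable mode, so an interrupted copy can be resumed.
        robocopy \\tsclient\C\Uploads C:\Incoming bigfile.zip /Z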

  • Exchange Server 2007 Forwarding Circles

    - by LorenVS
    Hello, I asked a question quite a while ago about two members of an organization who wanted to receive all of each other's emails yet maintain separate mailboxes (so all emails to mike@company get sent to mike and dave, and all emails to dave@company get sent to mike and dave). At the time I actually only needed to implement one side of this (only mike's emails went to both recipients), and (with the help of ServerFault) I set up forwarding on dave's inbox so that all of his emails would also be sent to mike. I'm now in a situation where I have to implement the other side of the relation (so that mike's emails also forward to dave). I still remember how to set up the forwarding rule, but I'm worried that I might be creating a circular forwarding rule, such that mike@company forwards to dave@company, which forwards back to mike@company, and so on. Can anyone clear up my confusion? I just want to make sure I don't make a stupid mistake. Thanks a ton.

  • How to host multiple mail domains with courier?

    - by Dave Vogt
    I had my server set up with generic aliases in /etc/courier/aliases/users, which worked fine. But today I wanted to host some new domains where addresses overlap. What I need is for dave@example.com to go to account "dv", but dave@example.org to go to account "d2". So I set up a new file containing fully-qualified addresses (dave@example.com: dv) instead of the previous dave: dv. But somehow courier-smtpd no longer accepts mail to these addresses. makealiases -dump prints all the aliases the way they should be, so I'm a bit stuck.
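
    For concreteness, the fully-qualified layout described above would look something like this sketch (the domain names are placeholders; the originals were redacted in this copy of the question):

        # /etc/courier/aliases/users -- fully-qualified form from the question
        dave@example.com: dv
        dave@example.org: d2
        # remember to re-run makealiases after editing, so Courier
        # rebuilds the alias database from this file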

  • Figure out what the non-symlink path would be?

    - by David Mackintosh
    On Linux, if I've cd'd around and am now in a directory, is there a way to figure out what the real path to that directory is, i.e. the path I would have used had I not gone through a symbolic link? Consider:

        $ pwd
        /home/dave/tmp
        $ mkdir -p 1/2/3/4/5
        $ ln -s 1/2/3/4/5 5
        $ cd 5
        $ pwd
        /home/dave/tmp/5

    Or:

        $ pwd
        /home/dave/tmp
        $ mkdir -p 1/2/3/4/5
        $ ln -s 1/2/3/4 4
        $ cd 4/5
        $ pwd
        /home/dave/tmp/4/5

    Is there any way to figure out that /home/dave/tmp/5 is really /home/dave/tmp/1/2/3/4/5?
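
    For what it's worth, both pwd -P (POSIX; prints the physical path with symlinks resolved) and readlink -f (GNU coreutils) answer this; a sketch using the question's first example:

        $ cd /home/dave/tmp/5
        $ pwd -P
        /home/dave/tmp/1/2/3/4/5
        $ readlink -f .
        /home/dave/tmp/1/2/3/4/5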

  • What is a good way to comment if-else clauses?

    - by acme
    Whenever I write a typical if-else construct in any language, I wonder what the best way (in terms of readability and overview) to comment it would be. Especially when commenting the else clause, the comments always feel out of place to me. Say we have a construct like this (the examples are written in PHP):

        if ($big == true) {
            bigMagic();
        } else {
            smallMagic();
        }

    I could comment it like this:

        // check what kind of magic should happen
        if ($big == true) {
            // do some big magic stuff
            bigMagic();
        } else {
            // small magic is enough
            smallMagic();
        }

    or:

        // check what kind of magic should happen
        // do some big magic stuff
        if ($big == true) {
            bigMagic();
        }
        // small magic is enough
        else {
            smallMagic();
        }

    or:

        // check what kind of magic should happen
        // if: do some big magic stuff
        // else: small magic is enough
        if ($big == true) {
            bigMagic();
        } else {
            smallMagic();
        }

    What are your best-practice examples for commenting this?

  • Is it really good to have a big (500GB/1TB) hard disk as the main one?

    - by aSeptik
    Hi all! ;-) I'm building my new PC, so I'm doing the usual hunt around the net for good hardware at a good price. But this time I'm not sure I want to buy a big hard disk. I was thinking of having a "very small" HD, like 50GB, as the main one and a big external one for storing all the other stuff. Assuming I'm using classic heavyweight software like the Adobe suite (Photoshop, Flash, Autodesk) alongside very simple tools (Notepad, PHP and so on), do you think this is good practice for improving performance/speed, or am I just saying something stupid?


  • How to troubleshoot whether a zip file is invalid or just too big to be unzipped?

    - by mireille raad
    Hello, I am trying to unzip a file with a size of 2GB and I am getting the following error:

        unzip CLTE_C_08.zip
        Archive: CLTE_C_08.zip
        End-of-central-directory signature not found. Either this file is not
        a zipfile, or it constitutes one disk of a multi-part archive. In the
        latter case the central directory and zipfile comment will be found on
        the last disk(s) of this archive.
        unzip: cannot find zipfile directory in one of CLTE_C_08.zip or
               CLTE_C_08.zip.zip, and cannot find CLTE_C_08.zip.ZIP, period.

    After some googling, some people say this error means the file is too big, others say the file is corrupt, and others say it could be a non-unix archive. So my question: how can I find out whether the file is a valid archive on my CentOS box, and what is the command/trick to uncompress big files (if any)? Thanks in advance :)
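
    A few standard checks that apply here (suggestions, not from the question): file(1) identifies what the file actually is, unzip -t runs an integrity test, and zip -FF attempts to rebuild a damaged central directory. Note also that old Info-ZIP builds without ZIP64 support choke on archives around 2GB and larger, which matches these symptoms. A sketch:

        # What is this file really? (zip, gzip, tar, a truncated download...)
        file CLTE_C_08.zip

        # Test the archive's integrity without extracting anything
        unzip -t CLTE_C_08.zip

        # Attempt to reconstruct a missing/damaged central directory
        # (writes a repaired copy to a new file)
        zip -FF CLTE_C_08.zip --out CLTE_C_08-fixed.zip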


  • How to split a big PostScript file (3000 pages) into one file per page (using Windows 7)?

    - by Pablo
    Hi, I'm having trouble with the following: I have a big PDF file that I converted to PostScript (for commercial printing), and the resulting file is too big to be processed by the printer (the machine). I've been trying to find a way to either:

    1. convert the original (many-page) PDF file into many PostScript files, one per PDF page, or
    2. convert from PDF to PS (or even EPS), which I already managed to do, and then split the PS file into a collection of smaller files.

    I've tried using Ghostscript, but it is all gibberish to me. Thanks. PS. If you have a good GS tutorial (for dummies?), please share the link.
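
    Since Ghostscript is already on hand, here is a hedged sketch of the first option (one PS file per PDF page) using its -dFirstPage/-dLastPage switches in a Windows batch file; the executable name (gswin64c, or gswin32c on 32-bit Windows), file names and page count are assumptions to adapt:

        rem split.bat: write page N of input.pdf to page_N.ps, for N = 1..3000
        for /L %%p in (1,1,3000) do (
            gswin64c -dBATCH -dNOPAUSE -sDEVICE=ps2write ^
                -dFirstPage=%%p -dLastPage=%%p ^
                -sOutputFile=page_%%p.ps input.pdf
        )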

  • Troubles with PyDev and external libraries in OS X

    - by Davide Gualano
    I've successfully installed the latest version of PyDev in my Eclipse (3.5.1) under OS X 10.6.3, with Python 2.6.1, but I'm having trouble making my installed libraries work. For example, I'm trying to use the cx_Oracle library, which works perfectly when called from the Python interpreter or from simple scripts made with a text editor. But I can't make it work inside Eclipse. I have this small piece of code:

        import cx_Oracle

        conn = cx_Oracle.connect(CONN_STRING)
        sql = "select field from mytable"
        cursor = conn.cursor()
        cursor.execute(sql)
        for row in cursor:
            field = row[0]
            print field

    If I execute it from Eclipse, I get the following error:

        import cx_Oracle
        File "build/bdist.macosx-10.6-universal/egg/cx_Oracle.py", line 7, in <module>
        File "build/bdist.macosx-10.6-universal/egg/cx_Oracle.py", line 6, in __bootstrap__
        ImportError: dlopen(/Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so, 2): Library not loaded: /b/227/rdbms/lib/libclntsh.dylib.10.1
          Referenced from: /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so
          Reason: no suitable image found. Did find:
          /Users/dave/lib/libclntsh.dylib.10.1: mach-o, but wrong architecture

    The same snippet works perfectly from the Python shell. I have configured the interpreter in Eclipse under Preferences - PyDev - Interpreter - Python, using the Auto Config option and selecting all the libs found. What am I doing wrong here?

    Edit: running file on the extension module from the command line prints this:

        /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so: Mach-O universal binary with 3 architectures
        /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so (for architecture i386): Mach-O bundle i386
        /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so (for architecture ppc7400): Mach-O bundle ppc
        /Users/dave/.python-eggs/cx_Oracle-5.0.3-py2.6-macosx-10.6-universal.egg-tmp/cx_Oracle.so (for architecture x86_64): Mach-O 64-bit bundle x86_64
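
    A plausible line of investigation (a reading of the error above, not a confirmed fix): the egg itself is universal, but the Oracle client library at /Users/dave/lib/libclntsh.dylib.10.1 is reported as "wrong architecture", so the 64-bit Python process that Eclipse launches presumably cannot load it. Checking the client library and, if it turns out to be 32-bit only, retrying under a 32-bit interpreter would confirm this (the script name below is a placeholder):

        # Which architectures does the Oracle client library actually contain?
        file /Users/dave/lib/libclntsh.dylib.10.1

        # If it is i386-only, run the same snippet in a 32-bit Python process
        arch -i386 /usr/bin/python2.6 cx_oracle_test.py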

  • Big Success for the 2012 edition of the Oracle EMEA Cloud CRM Partner Community Forum!

    - by Richard Lefebvre
    The 2012 edition of the Oracle EMEA Cloud CRM Partner Community Forum took place on March 28-29 in Madrid. 100 participants from all over Europe had the chance to interact with the Oracle Cloud CRM Product Management team on multiple subjects, such as the Oracle Cloud CRM and Social Network solutions strategy, the RightNow acquisition, and Fusion CRM business opportunities for partners. During his opening keynote, Anthony Lye (Oracle Senior Vice President and head of Oracle CRM) presented the current Fusion CRM business status, disclosed the overall Oracle CRM product strategy and responded to many questions from the audience. Later that day, eight Oracle ISVs presented their Oracle Cloud CRM add-ons and highlighted the value that system integrators can gain from them as part of a Cloud CRM project. After a very friendly networking dinner in a Spanish restaurant, the second day was dedicated to Fusion CRM, with a deeper dive into its major components (Sales, Sales Planning, Marketing), including the Fusion Composers. Briefings on Oracle Consulting Services' dedicated Fusion CRM offerings and on the Fusion CRM partner programs concluded the day and the event. All participants rated the event "good" to "excellent" and mentioned that it was valuable for planning their Oracle Cloud CRM based business in the near future. We look forward to organising a similar event next year and to welcoming even more partners! Richard Lefebvre

  • Donald Ferguson says end-user programming is next big thing. Is it?

    - by Joris Meys
    You can guess how I came to ask this question... Anyway: http://www.bbc.co.uk/news/business-11944966 - Donald Ferguson claiming that WebSphere was his biggest disaster and proclaiming that end-user programming will be the way forward. This genuinely spurs the question: what about current programming languages? Honestly, I don't think end-user programming will go much beyond a rather rigid template that you can build some apps around. Just look at how few people actually manage to understand the basic functionality of functions in Excel... Plus, I fail to see how complex, performant systems could be built in such an end-user programming language (Visual Basic, anyone?). Nice to play around with, but for many applications they're just not the thing. So no worries for the old languages, if you ask me. What are your ideas?

  • How to efficiently preserve a really big navigation history in Firefox?

    - by brandizzi
    When I was using a Mac, my Firefox kept items in its history for a really long time. Sometimes I needed to find a link to a site I had seen two years earlier, and it found it! The autocomplete in the Firefox address bar is really great, too, so a long history plus autocompletion make a wonderful feature for me. Unfortunately, this does not seem to happen in Ubuntu's Firefox. I looked for solutions, but I just found Firefox developers saying that the option of expanding history is out for performance reasons and that one is well advised not to try to change it (which reads to me as "we cannot make it work well, so we limit the scope"). Anyway, my question is: is there a way to efficiently expand the size of Firefox's history? Sorry for my bitterness, but a solution with strings attached (one that mostly says I should not do it, like this addon) is not a solution for me. Does anyone with the same need have a solution?
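
    For what it's worth, a hedged pointer rather than a guaranteed fix: in Firefox versions of this era, the Places subsystem computes a history page limit from the machine's hardware, and a user-set preference is supposed to override that computed limit. Treat the pref name and the value below as assumptions to verify against your version; in about:config or user.js:

        // user.js sketch: override the computed Places history limit
        // (500000 is an arbitrary example value)
        user_pref("places.history.expiration.max_pages", 500000);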

  • Why do large IT projects tend to fail or have big cost/schedule overruns?

    - by Pratik
    I always read about large-scale transformation or integration projects that are total or near-total disasters. Even when they somehow manage to succeed, the cost and schedule blowouts are enormous. What is the real reason behind large projects being more prone to failure? Can agile be used in these sorts of projects, or is the traditional approach still the best? One example from Australia is the Queensland Payroll project, where they changed the test success criteria in order to deliver the project. See some more failed projects in this SO question. Have you got any personal experience to share?

  • SQLAuthority News – Download Whitepaper – SQL Server Analysis Services to Hive

    - by pinaldave
    SQL Server Analysis Services is a very interesting subject and I have always enjoyed learning about it. You can read my earlier article over here. Big Data is my new interest and I have been exploring it recently. During this weekend this blog post caught my attention and I enjoyed reading it. Big Data is the next big thing; its growth is predicted to be 60% per year till 2016. There is no single solution to the growing need for big data in the market right now, nor is there one solution in the business intelligence eco-system, yet the need for a solution is ever increasing.

    I am personally a Klout user; you can see my Klout profile over here. I do understand what Klout is trying to achieve: a single place to measure a person's influence. However, it works a bit mysteriously. There are plenty of social media services in the internet world today, and the biggest problem they all face is that everybody opens an account but hardly anyone logs back in. To overcome this issue and bring visitors back, Klout has come up with a system where visitors can give 5/10 K+ to other users in a particular area. Looking at all these activities, Klout is indeed a big consumer of Big Data as well as an early adopter of big data and Hadoop-based systems. Klout has over 1 trillion rows of data to be analyzed and a nearly thousand-terabyte warehouse.

    Hive, the query language used for Big Data, supports ad-hoc queries through HiveQL, but there are always better solutions. An alternate solution is using SQL Server Analysis Services (SSAS) along with HiveQL. As there is no direct method to achieve this, a few common workarounds are already in place. A new ODBC driver used by Klout has broken through the limitation, so the SQL Server relational engine can be used as an intermediate stage before SSAS. The white paper discusses these solutions in depth, covering the following important concepts:

    - The Klout Big Data solution
    - Big Data analytics based on Analysis Services
    - Hadoop/Hive and Analysis Services integration
    - Limitations of direct connectivity
    - Pass-through queries to linked servers
    - Best practices and lessons learned

    The white paper discusses all the important concepts that have enabled Klout to go to the next level with its offerings, and it has helped efficiency by offering a few out-of-the-box solutions. I personally enjoyed reading this white paper and I encourage all of you to do so: SQL Server Analysis Services to Hive

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology
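
    To make the "pass-through queries to linked servers" item above concrete, here is a hedged sketch of that workaround (the DSN, linked server name and Hive table are invented for illustration): SQL Server connects to Hive through an ODBC DSN via a linked server, and OPENQUERY pushes the HiveQL down to Hadoop so only the result set comes back:

        -- Register a linked server over the Hive ODBC DSN (names are placeholders)
        EXEC master.dbo.sp_addlinkedserver
            @server     = N'HIVE',
            @srvproduct = N'Hive',
            @provider   = N'MSDASQL',
            @datasrc    = N'HiveDSN';

        -- Pass-through query: the HiveQL string executes on Hadoop, and the
        -- relational engine (and then SSAS) sees only the returned rows.
        SELECT *
        FROM OPENQUERY(HIVE, 'SELECT user_id, score FROM klout_scores LIMIT 10');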

  • Publish/Subscribe/Request for exchange of big, complex, and confidential data?

    - by Morten
    I am working on a project where a website needs to exchange complex and confidential (and thus encrypted) data with other systems. The data includes personal information, technical drawings, public documents, etc. We would prefer to avoid the Request-Reply pattern with the dependent systems (and there are a LOT of them), as that would create an awful lot of empty traffic. On the other hand, I am not sure that a pure Publish/Subscribe pattern would be appropriate, mainly because of the complex and bulky nature of the data to be exchanged. For that reason we have discussed the possibility of a "publish/subscribe/request" solution: the publish/subscribe part would notify the dependent systems that something is ready for pickup, and the actual content would then be fetched by an old-school Request-Reply action. How does this sound to you? Regards, Morten
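
    To illustrate the proposed hybrid (all field names are invented for this sketch): the published notification stays small, carrying only an identifier and a pickup location, while the bulky encrypted payload is fetched over Request-Reply only by the subscribers that actually need it:

        {
            "event":      "document-ready",
            "documentId": "dossier-0042",
            "mediaType":  "application/pgp-encrypted",
            "sizeBytes":  73400320,
            "pickupUrl":  "https://exchange.example.org/documents/dossier-0042"
        }

    One design consequence worth noting: because pickup is a plain request, standard per-document access control and encryption apply at fetch time, and subscribers that ignore a given notification generate no bulk traffic at all.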

  • Save BIG on Storage with Oracle Advanced Compression

    - by [email protected]
    Recently, we published a podcast revealing just how much Oracle benefits from its internal use of Oracle Database 11g and Advanced Compression. With hundreds of terabytes and millions of dollars saved, Oracle Advanced Compression is dramatically reducing storage costs and substantially improving efficiency across the company. Now here's your chance: meet the experts, have your questions answered by them, and immediately start using your storage more efficiently. On April 14th, join me for a live webcast with Oracle's Tim Shetler, Vice President of Product Management, and Bill Hodak, Principal Product Manager, to learn just how Oracle Advanced Compression can:

    - Reduce disk space requirements for all types of data
    - Improve query and storage performance
    - Lower storage costs throughout the datacenter

    Register here!
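
    For context, a minimal sketch of what turning the feature on looks like in SQL (Oracle 11gR2 syntax; the table and columns are made up for illustration):

        -- OLTP table compression, part of the Advanced Compression option
        CREATE TABLE orders (
            order_id NUMBER PRIMARY KEY,
            customer VARCHAR2(100),
            notes    VARCHAR2(4000)
        ) COMPRESS FOR OLTP;

        -- An existing table can be rebuilt in compressed form:
        ALTER TABLE orders MOVE COMPRESS FOR OLTP;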

  • SPD 2013 missing design view: is it really such a big deal?

    - by Sahil Malik
    SharePoint 2010 Training: more information. First, please read what my friend Marc Anderson has to say about it: more, and more, and even more. My friend Asif Rehmani has some views on it as well. They bring up some important points, but in short: everything that allowed for visual editing of pages is gone! And anything that depended on visual editing (and there are many such scenarios) is now gone with it. What is the practical upshot of all this? Read full article ....
