Search Results

Search found 1426 results on 58 pages for 'jay silly evarlast wren'.

Page 34/58

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.cnf with the config at the bottom, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check, I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer = 16K
        max_allowed_packet = 1M
        table_cache = 4
        sort_buffer_size = 64K
        read_buffer_size = 256K
        read_rnd_buffer_size = 256K
        net_buffer_length = 2K
        thread_stack = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking

        server-id = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50-80%
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25% of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?

  • Need help with cybersquatting complaint: can a domain name forward AND resolve at same time? [on hold]

    - by Alan
    Probably a silly question for you pros... but for this novice here, I just want to make sure my understanding is correct. Context: I am trying to prove that a domain name owner has been cybersquatting and has never used the domain name in question. There are 4 shots from the Wayback Machine over a three-year period that show the domain name resolving to a basic server index page with either no files or a single cgi-bin folder. The domain name owner claims, however, that the domain name was forwarded the entire time to another website, and that these captures probably coincided with occasional "outages." It is my understanding that: a) domain name forwarding is binary: if a domain name is forwarded to a valid site, it cannot simultaneously resolve to a valid IP address. Is this correct? b) domain name forwarding is not subject to "outages": servers can have outages, and websites can be down, but the forwarding itself cannot be down, as it is simply a pointer. (Or, the entire registrar where the DNS settings are hosted would have to malfunction.) Is this correct? FINALLY, bonus question for pro webmasters: what is the likelihood that the Wayback Machine would capture the domain name on just those occasions when the webmaster disabled forwarding to supposedly work on the new site? Mucho thanks in advance!

  • Why, in WPF, do we set an object to Stretch via its Alignment properties instead of Width/Height?

    - by Jonathan Hobbs
    In WPF's XAML, we can tell an element to fill its container like this: <Button HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> Why is it that when we set an element to Stretch, we do it via the HorizontalAlignment and VerticalAlignment properties? Why did the WPF design team decide to take this approach over having Width="Stretch" and Height="Stretch"? I presume it was a calculated decision, and I'm curious about the reasoning. CSS, among other technologies, follows the convention that stretching is done via the width and height properties, and that alignment affects positioning exclusively. This seems intuitive enough: stretching the element is manipulating its width and height, after all! Using the corresponding alignment property to stretch an element seems counter-intuitive and unusual in comparison, which makes me think the WPF team had concrete reasons for it. Width and Height use the double data type, which would ordinarily mean assigning them a string would be silly. However, WPF's Window objects can take Width="Auto", which gets treated as double.NaN. Couldn't Width="Stretch" be stored as double.PositiveInfinity or some other value?

  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to only interact with the DB through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transaction script. Some of the issues I've found are:

    - Cannot navigate the object graph, as the stored procedures load only the minimum amount of data, which means that sometimes we have similar objects with different fields. One example: we have a stored procedure to retrieve all the data for a customer, and another to retrieve account information plus a few fields from the customer.
    - Lots of the logic ends up in helper classes, so the code becomes more "structured" (with entities used as old C structs).
    - More boring scaffolding code, as there's no framework that extracts result sets from a stored procedure and puts them into entities.

    My questions are:

    - Has anyone been in a similar situation and disagreed with the stored procedure approach? What did you do?
    - Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"?
    - Is there a way to create a rich domain using stored procedures? I know there's the possibility of using AOP to inject DAOs/repositories into entities to be able to navigate the object graph. I don't like this option, as it's very close to voodoo.
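
    The scaffolding point is the easiest to shrink without any framework. Below is a minimal sketch of a generic row-mapper over plain JDBC; the stored procedure name and the Customer columns in the usage note are made up for illustration, not taken from the post:

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        // One reusable method replaces the per-procedure "extract result set
        // into entity" boilerplate the post complains about.
        public class ProcedureRunner {
            private final Connection connection;

            public ProcedureRunner(Connection connection) {
                this.connection = connection;
            }

            @FunctionalInterface
            public interface RowMapper<T> {
                T map(ResultSet rs) throws SQLException;
            }

            // "call" is a JDBC escape such as "{call some_procedure()}".
            public <T> List<T> query(String call, RowMapper<T> mapper) throws SQLException {
                try (CallableStatement stmt = connection.prepareCall(call);
                     ResultSet rs = stmt.executeQuery()) {
                    List<T> results = new ArrayList<>();
                    while (rs.next()) {
                        results.add(mapper.map(rs));
                    }
                    return results;
                }
            }
        }

    Loading then collapses to one line per procedure, e.g. runner.query("{call get_customers()}", rs -> new Customer(rs.getLong("id"), rs.getString("name"))), where get_customers and the Customer constructor are hypothetical. It does not give back a rich domain model, but it removes most of the C-struct feel from the loading code.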

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe... but it makes me very, let's say, specific about certain things, programming being one of them) or it may be the fact that I graduated college and still feel "meh" at programming. Reading this made me think "Oh, that's me!" but that's not really my main problem. My big problem is... any time I'm using a high-level language/API/etc., I always think to myself that I'm not really "programming". I know, I know... it sounds stupid. But I feel like... if I can't figure out how to do it at the lowest level, then I'm not really "understanding" it. I do this for just about every new technology I learn. I look at the lowest level and try to understand it. Sometimes I do... most of the time I don't; I mean, I've only really been programming for 4 years (at college, if you even call it programming... our university's program was "meh"). For instance, I do a little bit of embedded programming (with the Atmel AVR 8-bit/Arduino stuff), and I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly... it's stupid, I know... Does anyone else feel like this? I think it's just my OCD that makes me feel this way... but has anyone else ever felt like they need to go down to the lowest level of a language to even be satisfied with using it? I apologize for the very, very odd question, but I think it really hinders me from digging deep into a programming language and making a real application of my own. (It's silly, I know.)

  • Should the syntax for disabling code differ from that of normal comments?

    - by deltreme
    For several reasons during development I sometimes comment out code. As I am chaotic and sometimes in a hurry, some of these make it to source control. I also use comments to clarify blocks of code. For instance:

        MyClass MyFunction()
        {
            (...)

            // return null; // TODO: dummy for now
            return obj;
        }

    Even though it "works" and a lot of people do it this way, it annoys me that you cannot automatically distinguish commented-out code from "real" comments that clarify code:

    - it adds noise when trying to read code
    - you cannot search for commented-out code, for instance with an on-commit hook in source control.

    Some languages support multiple single-line comment styles. For instance, in PHP you can use either // or # for a single-line comment, and developers can agree to use one of these for commented-out code:

        # return null; // TODO: dummy for now
        return obj;

    Other languages, like C# which I am using today, have one style for single-line comments (right? I wish I were wrong). I have also seen examples of "commenting out" code using compiler directives, which is great for large blocks of code, but a bit overkill for single lines, as two new lines are required for the directive:

        #if compile_commented_out
            return null; // TODO: dummy for now
        #endif
        return obj;

    So, as commenting out code happens in every(?) language, shouldn't "disabled code" get its own syntax in language specifications? Are the pros (separation of comments / disabled code, editors / source control acting on them) good enough, and are the cons ("shouldn't do commenting-out anyway", not a functional part of a language, potential IDE lag (thanks Thomas)) worth sacrificing? Edit: I realise the example I used is silly; the dummy code could easily be removed, as it is replaced by the actual code.

  • Prepared statement alternatives for this middle-man program?

    - by user2813274
    I have a program that uses a prepared statement to connect and write to a database, working nicely, and I now need to create a middle-man program to insert between this program and the database. This middle-man program will actually write to multiple databases and handle any errors and connection issues. I would like advice as to how to replicate the prepared statements so as to create minimal impact on the existing program, but I am not sure where to start. I have thought about creating a "SQL statement class" that mimics the prepared statement, only that seems silly. The existing program is in Java, although it's going to be networked anyway, so I would be open to writing it in just about anything that would make sense. The databases are currently MySQL, although I would like to be open to changing the database type in the future. My main questions are: what should the interface for this program look like, and does doing this even make sense? A distributed DB would be the ideal solution, but they seem overly complex and expensive for my needs. I am hoping to replicate the main functionality of a distributed DB via this middle-man. I am not too familiar with SQL servers distributing data (or databases in general...); perhaps I am fighting an uphill battle by trying to solve it via programming, but I would like to make an attempt at least.
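
    For the "SQL statement class" idea, one shape that keeps the impact on the existing program small is a wrapper that mirrors just the slice of java.sql.PreparedStatement the program already calls, and fans each call out to one statement per backing database. A minimal sketch under that assumption (the method subset is illustrative, and there is deliberately no distributed-transaction handling here):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;

        // Mimics the PreparedStatement calls the existing program uses, but
        // applies each one to every backing database.
        public class FanOutStatement implements AutoCloseable {
            private final List<PreparedStatement> statements = new ArrayList<>();

            public FanOutStatement(List<Connection> connections, String sql) throws SQLException {
                for (Connection c : connections) {
                    statements.add(c.prepareStatement(sql));
                }
            }

            public void setString(int index, String value) throws SQLException {
                for (PreparedStatement s : statements) s.setString(index, value);
            }

            public void setInt(int index, int value) throws SQLException {
                for (PreparedStatement s : statements) s.setInt(index, value);
            }

            public void executeUpdate() throws SQLException {
                // Naive: if one database fails mid-loop, the earlier ones have
                // already been written. Real error handling belongs here.
                for (PreparedStatement s : statements) s.executeUpdate();
            }

            @Override
            public void close() throws SQLException {
                for (PreparedStatement s : statements) s.close();
            }
        }

    The partial-failure comment is the real design question: this sketch gives replication, not the consistency guarantees of a true distributed database.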

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On Super User, I asked a (possibly silly) question about processors using mathematical shortcuts, and would like to look at the possibility of a software application of that concept. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations utilizing the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described there, is actually a shortcut at all. This is a question of programming concept. By utilizing the simulation of Japanese multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result via fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
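
    For what the simulation itself would look like: Japanese (line/lattice) multiplication is digit-by-digit multiplication with carried column sums, i.e. the schoolbook O(n^2) algorithm, so that is the baseline the shortcut has to beat. A minimal sketch of the simulation (the intersection counting is expressed as a digit product here; replacing it with repeated addition changes the constant factor, not the growth rate):

        import java.util.Arrays;

        // Japanese multiplication over digit arrays, least-significant digit first.
        // Each digit pair is one "group of line intersections"; the column sums
        // are then carried, exactly as in the visual method.
        public class LatticeMultiply {
            public static int[] multiply(int[] a, int[] b) {
                int[] columns = new int[a.length + b.length];
                for (int i = 0; i < a.length; i++) {
                    for (int j = 0; j < b.length; j++) {
                        columns[i + j] += a[i] * b[j]; // intersections of digit i and digit j
                    }
                }
                for (int k = 0; k < columns.length - 1; k++) { // carry propagation
                    columns[k + 1] += columns[k] / 10;
                    columns[k] %= 10;
                }
                return columns;
            }

            public static void main(String[] args) {
                // 13 * 21 = 273; digits stored least-significant first
                System.out.println(Arrays.toString(multiply(new int[]{3, 1}, new int[]{1, 2})));
                // prints [3, 7, 2, 0]
            }
        }

    Benchmarking this against the built-in multiply, or against BigInteger.multiply for large numbers (which, in common JDKs, switches to asymptotically faster algorithms than O(n^2) above certain sizes), would answer the question empirically.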

  • How should I load level data in java?

    - by Matthew G.
    I'm setting up my engine for a certain action/arcade game to have a set of commands that would look something like this:

        Set landscape to grass
        Create rocks at ...
        Create player at X, Y
        Set goal to "Get to point X Y"
        Spawn enemy at X, Y

    I'd then have each object knowing what it has to do, and acting on its own. I've been thinking about how to store this data. External data files could be parsed by a level class, and certain objects can be spawned through that. I could also create a base level class and extend it for each level, but that'd create a large number of classes. Another idea is to have one level parser class, but have a case for each level. This would be extremely silly and bulky, but I mention it because I found that I did this at 2 AM last night. I'm finally getting why I have to plan out my inheritances, though. RIP project. I might be completely missing another option.
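
    Of the options listed, the external-data-file route is the usual one: a single parser reads commands and spawns objects, and levels differ only in their data files, so no per-level classes accumulate. A minimal sketch, assuming a made-up line-based format and a hypothetical Level class with the relevant spawn methods:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        // Parses a hypothetical level script such as:
        //   landscape grass
        //   player 10 5
        //   enemy 40 12
        public class LevelLoader {
            public static Level load(Path file) throws IOException {
                Level level = new Level();
                for (String line : Files.readAllLines(file)) {
                    String[] parts = line.trim().split("\\s+");
                    if (parts[0].isEmpty()) continue; // skip blank lines
                    switch (parts[0]) {
                        case "landscape":
                            level.setLandscape(parts[1]);
                            break;
                        case "player":
                            level.spawnPlayer(Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
                            break;
                        case "enemy":
                            level.spawnEnemy(Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
                            break;
                        default:
                            throw new IOException("Unknown level command: " + parts[0]);
                    }
                }
                return level;
            }
        }

    One switch over command names is fine here because it is one case per command type, not one case per level, which is the distinction that sank the 2 AM version.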

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:

        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });

    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and it means that a single object declared at the top level is only available several layers down if it is passed through all the intermediate callbacks. Is it OK to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?

        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }

    ...and so on for several more layers. Or are there frameworks etc. to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?

  • What should I do when my team leader is unfair for no reason? [closed]

    - by crucified soul
    I'm a new software developer and this is my first job. It's a startup, and the CEO and the working environment are just great. I work really hard and I believe that I also do my job well. But recently, I have felt like my team leader is being unfair to me for no reason. It appears that he is nice to my co-workers, but not me. I figure he is mad at me, but I didn't bother to find out why. I really love this company and I really love working there. But if my team leader continues to be unfair, then I have no option other than leaving. How can I fix this? EDIT: The other day he called me into his office and wanted to see my work in the afternoon. (Yes, in my country, in summer, after 5 PM still counts as afternoon. My office opens at 8 AM. And I'm not saying I have a problem with working after 5 PM.) At the time I was facing a weird runtime error and I was pretty tired. I explained the situation to him. Then he found a small logical error in my code and asked me why I hadn't fixed it. I told him I was trying to resolve the runtime error, and that I was sure the logical error had nothing to do with it. He then proceeded to yell at me. After fixing the logical error, the runtime error was still there. This is not the only occasion on which he has been unfair to me. I say he is being unfair because he doesn't do this kind of thing to the other developers when they make really silly mistakes.

  • Raspberry Pi, Time Capsule Progress

    - by Richard Jones
    So, by way of an update: I thought all was good with my Raspberry Pi, Debian and Netatalk Apple Time Capsule clone. However, something very strange was going on. Although I could back up my Macs + PCs fine to the Raspberry Pi with an external USB HD, strangely, with the RPi running, I couldn't use AirPlay. I found myself unable to play anything from Mac to Apple TV. So after lots of trying to make this work, I about-turned and finally went out and got myself a 2 TB Apple Time Capsule. More cash than I would want to spend on anything like this, but Apple, you got me. I would like to offer a top tip, which maybe goes a small way to justifying silly expenditure... You can easily add a USB HD to any Time Capsule. I've just added a 3 TB external USB HD, giving me 5 TB of total backup grunt. The 3 TB external USB HD was peanuts by comparison to Apple kit. So all working, it's all solid as you'd expect. Apple 2, maybe me .5. But strong, solid backups now happening, without hassle (but a bit of a credit card bill to follow).

  • My Only Gripe With Programming

    - by David Espejo
    Is that I'm having trouble practicing problems. Even if I decide to practice the problems from my C++ book, they don't give any idea of the way the solution (program) should look, so that I may compare to see if my program is similar in any way. My book gives me too many generic "Write a program to do 'this'" projects without really showing a concrete example of what "this" really is. In other words, how do I know that I did "that"? One problem in my book said to write a program that calculates the sales tax on a given item????? First of all, sales tax differs by state (what's the state?), and what's the item (a house, a dog)? How can I check this to see if I'm right? Programming books don't have answer keys! I know that there is no ABSOLUTE answer, that's just silly; programs can be written in many ways. But a sample of what one would look like, based on the difficulty of the problem, would really help! Is there a solution to this, maybe a book that has worked-out examples for the problems it gives, or online sources that do something similar? (Is there such a thing as a programming book with an answer key?)

  • Thoughts (on Windows Phone 7) from the MVP Summit

    - by Chris Williams
    Last week I packed off to Redmond, WA for my annual pilgrimage to Microsoft's MVP Summit. I'll spare you all the silly taunting about knowing stuff I can't talk about, etc... and just get to the point. I'm an XNA/DirectX MVP, an ASP Insider and a Languages (VB) Insider... so I actually had access to a pretty broad spectrum of information over the last week. Most of my time was focused on Windows Phone 7 related sessions, and while I can't dig deep into specifics, I can say that Microsoft is definitely not out of the fight for mobile. The things I saw tell me that Microsoft is listening and paying attention to feedback, looking at what works and what doesn't, and they are working their collective asses off to close the gap between Google and Apple. Anyone who has been in this industry for a while can tell you Microsoft does their best work when they are the underdog. They are currently behind, and have a lot of work ahead of them, but this is when they bring all their resources together to solve a problem. After the week I spent in Redmond, and the feedback I heard from other MVPs, and the technology previews I saw... I feel confident in betting heavily on Microsoft to pull this off.

  • Does LINQ require significantly more processing cycles and memory than lower-level data iteration techniques?

    - by Matthew Patrick Cashatt
    Background: I am currently enduring grueling tech interviews for positions that use the .NET stack, some of which include silly questions like this one, and some questions that are more valid. I recently came across an issue that may be valid, but I want to check with the community here to be sure. When asked by an interviewer how I would count the frequency of words in a text document and rank the results, I answered that I would:

    1. Use a stream object to put the text file in memory as a string.
    2. Split the string into an array on spaces while ignoring punctuation.
    3. Use LINQ against the array to .GroupBy() and .Count(), then OrderBy() said count.

    I got this answer wrong for two reasons:

    1. Streaming an entire text file into memory could be disastrous. What if it was an entire encyclopedia? Instead I should stream one block at a time and begin building a hash table.
    2. LINQ is too expensive and requires too many processing cycles. I should have built a hash table instead and, for each iteration, only added a word to the hash table if it didn't otherwise exist and then incremented its count.

    The first reason seems, well, reasonable. But the second gives me more pause. I thought that one of the selling points of LINQ is that it simply abstracts away lower-level operations like hash tables but that, under the veil, it is still the same implementation. Question: aside from a few additional processing cycles to call any abstracted methods, does LINQ require significantly more processing cycles to accomplish a given data-iteration task than a lower-level approach (such as building a hash table) would?
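
    The hash-table version the interviewer wanted is short either way; a minimal sketch (written in Java here; it maps one-to-one onto C#'s StreamReader and Dictionary<string, int>):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.HashMap;
        import java.util.Map;

        // Streams the file line by line, never holding the whole document in
        // memory, and increments a per-word counter in a hash table.
        public class WordFrequency {
            public static Map<String, Integer> count(Path file) throws IOException {
                Map<String, Integer> counts = new HashMap<>();
                try (BufferedReader reader = Files.newBufferedReader(file)) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        for (String word : line.toLowerCase().split("[^a-z']+")) {
                            if (!word.isEmpty()) {
                                counts.merge(word, 1, Integer::sum); // add or increment
                            }
                        }
                    }
                }
                return counts;
            }
        }

    Ranking is then a single sort of the map's entries by value. As for the second interview objection: GroupBy materializes a grouping (holding the actual elements) per key and Count() then consults each group, whereas the loop above keeps one integer per key, so the LINQ pipeline does allocate measurably more even though both are hash-based underneath.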

  • Are file permissions preserved when a file is transferred from Ubuntu to Windows?

    - by Gaurav_Java
    I have a 9 GB text file which is encrypted. This file contains some confidential data. It is on my system (Ubuntu) and my external HDD (NTFS). The file gets updated daily and then encrypted, but it has to be shared among 2-3 (Windows) people. I defined permissions so that no other person can even read this file (chmod 660). It is too large a file, so I can't upload it anywhere, and it gets updated on a daily basis. But this file travels on Windows OS and Ubuntu also. I even have a copy of it on my personal computer. Recently it was deleted by some other user on Windows. I just want to know how I can set permissions on that file so that it cannot be deleted from any other operating system. If someone deletes this file, then I am left with data that is a couple of days old, which is only on my system. I went through this question; it says there is nothing. And from this question I am not able to understand how I can protect it. Can I do anything to prevent this file from being deleted? How can I secure this file from getting deleted? Any suggestion, software or ideas? Maybe I sound silly or this is a stupid question. Please don't close it; thanks for any suggestion or solution.

  • Help with migrating from Windows: x64 FGLRX, CPU load, Java and Minecraft

    - by joxer
    I'm new to Ubuntu; this is the second time I have installed it. The computer is a Dell Studio 1558. Some specs: CPU: Intel Core i7 Q720 1.6 GHz; GPU: ATI Mobility Radeon HD 5400. FGLRX: I've followed these instructions, among inspecting many others, and I have tried all of the variants mentioned in that thread before reverting back to the drivers supplied with Ubuntu (through Additional Drivers), which apparently seem to work best. I am testing them with Minecraft, as silly as it may sound. In 2 to 60 minutes the FPS drops from 70+ to somewhere between 0 and 5, while fgl_glxgears runs at between 400 and 800 FPS smoothly. I am using Oracle (Sun) JRE 6 to run Minecraft, which I got through a tutorial linked on Oracle's website; I currently have no other version of Java installed (it was worse when I had a few others here). After closing the game, Ubuntu is similarly slow. I've checked the CPU load using System Monitor and it shows one of the CPUs jumping to 80%~100% load at a time. A reboot solves it. I realize my mess is up to me to solve, but a hand is always appreciated. Tyvm in advance.

  • Strange resolution on Ubuntu 11.10

    - by FSchmidt
    I only just installed Ubuntu 11.10, so excuse me if this question is silly ;-) I have a Fujitsu Esprimo Mobile V655 with Nvidia 8200 graphics, and recently installed Ubuntu 11.10 using Wubi. I had to modify the boot commands to include nomodeset; otherwise Ubuntu would not boot. Now, I did set my screen resolution to 1280 x 720, which is the correct resolution for this screen. Still, the display seems imperfect. The font sizes seem unnatural (too large / stretched) and text is quite blurry (especially in Firefox). Could it have something to do with the Nvidia graphics driver and/or the nomodeset parameter? How can I fix this? Update: I used jockey-gtk to update the Nvidia driver to the current version. This improved the resolution dramatically (no blurriness; fonts are good). It also means that I no longer need to include nomodeset in the boot commands. However, other problems were brought up by this. It seems that certain files cannot be accessed: icons (images) are missing, and some task bars are completely unstyled (grey, block-form, Win97 style). I also get this error message (roughly translated from German, so it may differ slightly from the actual one) every time I reboot: "Could not apply the stored configuration for the monitor: none of the chosen modes is compatible with the available modes." I have tried nvidia-xconfig and unity --reset, with no improvements. Can anyone help, please?

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere; I'm rather new to DX. My question concerns conservation of resources, specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, any texture data supplied has been copied elsewhere, likely VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects which point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but this reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

  • Login takes very long, annoying repaints once a minute when logged in: How to troubleshoot?

    - by user946850
    I am suffering from a strange problem with my GNOME Shell in Ubuntu 12.10. The login takes very long (> 30 sec), with a blank screen. In Google Chrome and Thunderbird (and perhaps in other applications), the main window freezes and is repainted at periodic intervals of less than one minute. The freeze takes several seconds, and it seems that the font and appearance of, e.g., tabs and buttons briefly change. Attempting to enable the second monitor shows an error message related to XRANDR. Everything seems to have started three days ago, after I had to force-shutdown the machine while it was hibernating due to low power. (It was hibernating for quite a while and didn't want to stop.) Silly me. I have tried the following measures, to no avail:

    - Checked all package file md5 hashes using debsums
    - Reinstalled all packages using a variant of dpkg --get-selections | xargs apt-get install --reinstall
    - Temporarily moved configuration directories such as .gconf, .config and .gnome2 to another location
    - Created a new user account

    When I choose "Ubuntu" during login, the problems disappear. I am sort of frustrated that reinstalling all packages didn't fix the issue. How do I troubleshoot this GNOME Shell (?) problem, short of reinstalling the system? (Or did anyone see this kind of behavior on their machine?)

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:

    - Very large installation size (~4 GB)
    - Composed of a core and several plugins (toolboxes)

    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard catch-all that calls dh $@; there is no real build step, only copying files. I use a number of debian/*.install files to specify files to copy from a copy of an installation to /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and is doing a lot of extra work. So my questions are:

    - Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just get it to do this in the Makefile? (That's gonna be a big Makefile.)
    - Can you make debuild only rebuild .debs that need rebuilding? Or specify which .debs to rebuild?
    - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)

  • How do I interpolate air drag with a variable time step?

    - by Valentin Krummenacher
    So I have a little game which works with small steps; however, those steps vary in time, so for example I sometimes have 10 steps/second and then 20 steps/second. This changes automatically depending on how many steps the user's computer can take. To avoid inaccurate positioning of the game's player object I use y = v0*dt + g*dt^2/2 to determine my object's y-position, where dt is the time since the last step, v0 is the velocity of my object at the beginning of the step and g is the gravity. To calculate the velocity at the end of a step I use v = v0 + g*dt, which also gives me correct results, independent of whether I use 2 steps with a dt of, for example, 20ms or one step with a dt of 40ms. Now I would like to introduce air drag. For simplicity's sake I use a = k*v^2, where a is the air drag's acceleration (I am aware that it would usually result in a force, but since I assume 1 kg for my object's mass the force is the same as the resulting acceleration), k is a constant (in this case I'm using 0.001) and v is the speed. Now, in an infinitely small time interval, a is k multiplied by the square of the velocity in that interval. The problem is that v in the next time interval would depend on the drag of the last, which again depends on the v of the last interval, and so on... In other words: if I use a = k*v^2 I get different results for my position/velocity when I use 2 steps of 20ms than when I use one step of 40ms. I used to have this problem for my position too, but adding +g*dt^2/2 to the formula for my position fixed the problem, since it takes into account that the position depends on the velocity, which changes slightly in every infinitely small time interval. Does something like that exist for air drag too? And no, I don't mean anything like "Adding air drag to a golf ball trajectory equation" or similar, for that kind of method only gives correct results when all my steps are the same. (I hope you can understand my intermediate English; it's not my main language, so I would like to say sorry for all the silly mistakes I might have made in my question.)
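
    For the drag-only contribution, a step-size-independent update of exactly this kind does exist in closed form. A short derivation, assuming a = -k*v^2 acts alone (gravity and drag combined also have a closed form, but it involves tanh and is omitted here):

        \[
        \frac{dv}{dt} = -k v^2
        \quad\Longrightarrow\quad
        \int_{v_0}^{v} \frac{dv'}{v'^2} = -k \int_0^{\Delta t} dt
        \quad\Longrightarrow\quad
        v(\Delta t) = \frac{v_0}{1 + k \, v_0 \, \Delta t}
        \]

        \[
        y(\Delta t) = y_0 + \int_0^{\Delta t} v(t) \, dt
                    = y_0 + \frac{1}{k} \ln\!\left(1 + k \, v_0 \, \Delta t\right)
        \]

    This update is consistent under step splitting, which is the property asked for: applying v = v0 / (1 + k*v0*dt) twice with dt = 20ms yields v0 / (1 + 2*k*v0*dt), exactly what one 40ms step gives.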

  • Getting into the details of game engine programming

    - by Darkslash
    I am interested in learning game programming, but I really have an interest in the lower-level engineering of games. I have OpenGL experience, and I am really interested in learning more about implementing AI, physics, etc. I have a computer science degree, so I really like getting into technical stuff. Many times when I ask about this sort of thing, I get a lot of "Use an engine", "Use Unity3d", "Why waste your time writing code that already exists", etc., etc. My idea was to use simpler libraries such as SFML or XNA so that I could learn how to implement the more complex systems. The thing is, although I do want to write games, I want to learn things that using something like Unity simply doesn't teach you. My goal is not to make a current-generation-quality 3D game to sell; I just want to make some cool smaller games and learn all I can about the programming side of game development. Is this something that people just do not do anymore? It seems like everywhere I turn, people are using Unity or UDK or GameMaker. I fully understand why you would use tools like these, but I can't see how they would suit my purposes. So where does someone like myself turn? Am I trying to learn something that people just do not bother doing anymore? Is the innovation in this area gone, with everything now about gameplay? I'm sorry if this question seems silly, but I am genuinely interested in knowing more about this and meeting more people who are interested in this sort of thing.

  • Where is the best place to teach myself a language, and which one?

    - by Lorinda
    Hello, I do not know any programming languages at all. I will teach myself and need to know the best place to do so, where I can learn from the most basic level. Where is a great place to begin learning a language? What language is best to learn first? Is it silly to learn Ruby first? Here, I came across someone saying that learning some of the higher-level languages, like Ruby amongst others, can make you 'lazy' if you learn them first. For my first language, my husband is advising me to learn Ruby (for his own personal interests). However, I need some independent advice on how to get started and what language I should learn first. I will eventually learn Ruby and then Rails. Four months ago, my husband ordered an Objective-C text because he thought he would take it on. I flipped through it, and it was clearly starting at a place more advanced than where I am coming from. I have dabbled with a Ruby tutorial and I don't get it. I get that what I put in is what I get out, but I don't understand what is leading up to that. I need to know ALL the rules first. I then looked up computer languages and started researching binary code, which helped a lot, but that's not where I want to start. I don't have a lot of time right now in my life (with four kids) to go back that far. If I were going to school, that would be different. Any advice you could give is most welcome.
