Search Results

Search found 20799 results on 832 pages for 'long integer'.

Page 457 of 832

  • How can I change ACLs recursively using cacls.exe?

    - by maaartinus
    I want to restrict access to everything inside the work directory to me and the system account only. I tried this with the following command:
        cacls.exe work /t /p 'PIXLA09\Maaartin:f' 'NT AUTHORITY\SYSTEM':f
    However, it doesn't work at all. The following command should show only the two specified users, but instead it shows a very long list of permissions:
        cacls.exe work/somedirectory
    I tried to use /g instead of /p, too. Since I didn't use /e, the permissions shouldn't get edited but replaced. Any ideas what's wrong?
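
    A likely culprit is the quoting: cmd.exe does not treat single quotes as quote characters, so the account names above are passed literally, quotes and all. A minimal sketch of the same command with double quotes (untested; account names taken from the question):
        cacls work /T /P "PIXLA09\Maaartin:F" "NT AUTHORITY\SYSTEM:F"
    On Vista and later, icacls is the supported replacement for cacls and handles inheritance more predictably; check icacls /? for its /grant and /inheritance syntax before relying on it.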

    Read the article

  • Advanced (?) Excel sorting

    - by Preston Grayskull
    First of all, I'd like to admit that I don't really know anything about Excel, but I have tried to look up a solution to this in Excel books and by Googling. Here's what I'm trying to do: I have a really long spreadsheet. There are 7 columns total, but only two columns that I'm most interested in. Here's an example CSV that is much simpler than my actual dataset, but the search/sort is analogous:
        John, Apple
        Dave, Apple
        Dave, Orange
        Steve, Apple
        Steve, Orange
        Steve, Kiwi
        Bob, Apple
        Bob, Banana
    I'm interested in extracting the entire rows (all of the columns) that meet the following criteria:
        ["Apple"] OR ["Apple" and "Orange"]
        NOT ["Apple" and "Orange" and anything else]
        NOT ["Apple" and anything that isn't Orange]
    So with the above CSV, I would get the entire rows for John and Dave, but not Steve and not Bob. I started doing this manually, and will likely finish by the time this question has an answer, but I would like to know this for future reference. Thanks!
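
    The criteria boil down to: keep a person only if their set of fruits is exactly {Apple} or {Apple, Orange}. Outside of Excel, a quick sketch of that filter in pandas (assumed two-column file and made-up file name; the real sheet has 7 columns):

        # keep every row belonging to a person whose fruits are {Apple} or {Apple, Orange}
        import pandas as pd

        df = pd.read_csv("fruit.csv", header=None, names=["name", "fruit"], skipinitialspace=True)

        def keep(fruits):
            s = set(fruits)
            return s == {"Apple"} or s == {"Apple", "Orange"}

        wanted = df.groupby("name")["fruit"].transform(keep)  # broadcast True/False to each row
        print(df[wanted])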

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time -- 30 minutes of my data loader's 5.5-hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since otherwise every index would need to be updated with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
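
    For what it's worth, the usual pattern is: keep the primary key in the CREATE TABLE (InnoDB clusters the table on it, so adding it later forces a full table rebuild), load rows in primary-key order, then add the secondary indexes in one ALTER afterwards. A hedged sketch with made-up table and column names:

        -- during the bulk load, relax checks that aren't needed yet
        SET unique_checks = 0;
        SET foreign_key_checks = 0;
        -- ... bulk INSERTs / LOAD DATA INFILE here ...
        SET unique_checks = 1;
        SET foreign_key_checks = 1;

        -- one pass adds all secondary indexes (on MySQL 5.1 fast index creation
        -- needs the InnoDB Plugin; the built-in InnoDB rebuilds the table instead)
        ALTER TABLE target_table
            ADD INDEX idx_created_at (created_at),
            ADD INDEX idx_customer   (customer_id);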

    Read the article

  • How to convince a client to switch to a framework *now*; also examples of great, large-scale PHP applications.

    - by cbrandolino
    Hi everybody. I'm about to start working on a very ambitious project that, in my opinion, has some great potential for what concerns the basic concept and the implementation ideas (implementation as in how these ideas will be implemented, not as in programming). The state of the code right now is unfortunately subpar. It's vanilla PHP, no framework, no separation between application and presentation logic. It's been done mostly by amateur students (I know great amateur/student programmers, don't get me wrong: that was not the case here). The clients are really great, and they know the system won't scale and needs a redesign. The problem is, they would like to launch a beta ASAP and then think about rebuilding. Since just the basic functionality is present now, I suggested it would be a great idea if we (we're a three-person shop, all very proficient) ported that code to some framework (we like CodeIgniter) before launching. We could reasonably do that in under 10 days. The problem is, they don't think PHP is a valid long-term solution anyway, so they would prefer to just let it be, fix the bugs for now (there are quite a few), and then directly switch to some Ruby- or Python-based system. Porting to CI now would make future improvements incredibly easier, the current code more secure, and changing the style - still being discussed with the designers - a breeze (reminder: there are database calls in template files right now); the biggest obstacle is the lack of trust in PHP as a valid, scalable technology. So, I need some examples of great PHP applications (apart from Facebook) and some suggestions on how to convince them to port soon. Again, they're great people - it's not like they want Ruby because it's so hot right now; they just don't trust PHP since us cool programmers like bashing it, I suppose. But I'm sure that going on like this for even one more day would be a mistake. Also, we have some weight in the decision process.

    Read the article

  • Is there a way to get att.net email to stay connected?

    - by Clay Shannon
    My att.net account at home (wireless connection) has been bad for the last several days: I have to hit F5 quite a few times to "unfreeze" it (I can read an email or two, then it freezes, etc.). At work (company LAN) it's even worse: I can connect to the site and see that I have email, but can't open any of the emails - and the screen constantly refreshes (every couple of seconds) with a "Connecting..." message. It apparently connects and disconnects over and over again, but never stays connected long enough to actually access the email. Is there a way either to fix this OR forward my att.net (from home) to my work email address (accessible via MS Outlook)? Or set it up from work using Outlook to pull in my att.net email? I have Outlook 2003 at work.

    Read the article

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need to use any graphics driver to enjoy the effects. All I had to do was install CompizConfig Settings Manager and enable those extra effects. That was Compiz 0.9.6. Now, after installing Ubuntu 10.10, when I first logged in, the graphics were slow. When I dragged a window from one end of the screen to the other, the whole screen would blur and pixelate, and it was very laggy. I tried going to System > Preferences > Appearance and selecting Extra effects on the Visual Effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the Additional Drivers (proprietary), because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver, I immediately went into fallback mode and wasn't even offered an option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10; it just ran smoothly. But in Ubuntu 10.10, everything is laggy. Should I install the ATI/AMD proprietary FGLRX graphics driver for Ubuntu 10.10 to enable extra effects? Or is there something else wrong with my system? Here is the output of lspci -nn | grep VGA:
        00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760]
    And here is the output of the same command in Ubuntu 11.10 (in this case the correct one, because I don't have the Sandy Bridge integrated graphics controller):
        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]

    Read the article

  • SMTP server to deliver mail to Rails app, how?

    - by Gunchars
    Hi all, this is my first question and I hope I chose the right place to post it. Here's what I need help with: I've been looking for this all day and I'm having a hard time finding an SMTP mail server that fits the following criteria:
        - lightweight, does one thing and does it well
        - is able to route and deliver local mail to a Rails application
    The second point could be accomplished in any number of ways. I'm running a VPS, so I have full freedom in how to implement this. It could, for example, put messages straight into the db, pipe them to a helper program that would then process them accordingly, or save messages in an mbox file and run a script after every received message. I'm building a small site, so traffic is not going to be a problem. If there are alternative ways to deliver messages to a Rails app, I'd gladly hear about them. Thank you. EDIT: After long searching, I think I've found what I was looking for. Exim is a mail server that can deliver local mail to pipes. Also, Rails 3 and ActionMailer make it really easy to process the incoming mail. More info here: http://www.exim.org/exim-html-current/doc/html/spec_html/ch29.html http://guides.rubyonrails.org/action_mailer_basics.html#receiving-emails
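
    For the pipe approach mentioned in the edit, the Rails guide linked above uses a mailer class with a receive method; a minimal sketch (the model and field names are made up):

        # app/mailers/incoming_mailer.rb
        class IncomingMailer < ActionMailer::Base
          # ActionMailer parses the raw message and hands a Mail object to #receive
          def receive(email)
            Message.create!(subject: email.subject, body: email.body.decoded)
          end
        end

    Exim's pipe transport would then feed each raw message on standard input to something like
        rails runner 'IncomingMailer.receive(STDIN.read)'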

    Read the article

  • Advice on choosing a book to read

    - by Kioshiki
    I would like to ask for some recommendations on useful books to read. Initially I had intended on posting quite a long description of my current issue and asking for advice. But I realised that I didn’t have a clear idea of what I wanted to ask. One thing that is clear to me is that my knowledge in various areas needs improving and reading is one method of doing that. Though choosing the right book to read seems like a task in itself when there are so many books out there. I am a programmer but I also deal with analysis, design & testing. So I am not sure what type of book to read. One option might be to work through two books at the same time. I had thought maybe one about design or practices and another of a more technical focus. Recently I came across one book that I thought might be useful to read: http://xunitpatterns.com/index.html It seems like an interesting book, but the comments I read on amazon.co.uk show that the book is probably longer than it needs to be. Has anyone read it and can comment on this? Another book that I already own and would probably be a good one to finish reading is this: http://www.amazon.co.uk/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1309438553&sr=8-1 Has anyone else read this who can comment on its usefulness? Beyond these two I currently have no clear idea of what to read. I have thought about reading a book related to OO design or the GOF design patterns. But I wonder if I am worrying too much about the process and practices and not focusing on the actual work. I would be very grateful for any suggestions or comments. Many Thanks, Kioshiki

    Read the article

  • Firefox, Thunderbird, Safari - hangs up on start

    - by takeshin
    I have a strange problem on Windows Vista (automatic updates on). A long time ago I installed Firefox, Thunderbird, Safari and Opera, and they all worked well. Once upon a time, after a Windows restart, Firefox, Thunderbird and Safari would no longer start. [Opera (USB) works fine.] When I start one of the browsers, its name is listed in the process list, but it never activates and doesn't show any window. I tried creating new profiles with -ProfileManager, restarting Windows, and reinstalling the browsers. I scanned the system for suspicious programs and it looks clean. There is 45 GB of free HDD space. WTF?

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and keep in line with best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). In this new site I plan to use 301 redirects for all the old URLs pointing to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings carry over?
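
    Since the new site already relies on mod_rewrite, the per-page 301s can live in the same place; a hedged sketch using the example URL above (paths are illustrative only):

        # .htaccess (or the vhost config) on the new site
        RewriteEngine On
        # old static page -> new CMS URL; a permanent redirect so Google can transfer ranking signals
        RewriteRule ^about_us\.html$ /about-us/ [R=301,L]

    One rule per old page keeps the mapping explicit and easy to audit.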

    Read the article

  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds: "After increasing stripe cache & switching to external bitmap, my speeds are 160 Mb/s writes, 260 Mb/s reads." I've already found out how to increase the stripe cache, and this worked pretty well, but I'd like to know more about an external bitmap. I have an incredibly fast (540 MB/s) RAID0 SSD that would do well if a bitmap does what I think it does, but I'm still very unsure. I've only known about bitmaps for as long as I've known about that post. A few questions:
        What is a bitmap (in terms of mdadm)?
        What are the advantages of an internal bitmap (over external)?
        What are the advantages of an external bitmap (over internal)?
        How do I switch between the two?
    I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put the data at significant risk, please let me know.
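
    For the last question, the switch itself is just two --grow calls; a sketch, assuming the array is /dev/md0 and the SSD is mounted at /mnt/ssd (mdadm expects an external bitmap file to live on a filesystem that is not on the array itself, traditionally ext2/ext3):

        # drop the existing (internal) bitmap first
        mdadm --grow /dev/md0 --bitmap=none
        # add an external bitmap file on the SSD
        mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0-bitmap
        # to go back to an internal bitmap later
        mdadm --grow /dev/md0 --bitmap=none
        mdadm --grow /dev/md0 --bitmap=internal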

    Read the article

  • Graduating soon with a computer science degree, but have unique circumstances [closed]

    - by Donnie
    I joined the Navy in 1998 and was admitted into Nuclear Power Training. I got my electrician's mate certificate, but was put on medical hold while I was in Nuclear Power Training. I was sent to the Naval Hospital and received a medical (honorable) discharge in the middle of 2000. I decided to stay at home and raise my son while my girlfriend worked. A few years ago, I decided that I wanted to work as a programmer, so I went to college, and I will soon be graduating with a degree in computer science. I hope to finish with a relatively high GPA, 3.8 or 3.9. My question is this: How much, if any, of my Navy experience should I put on my resume? And how do I explain my nine-year gap as a stay-at-home dad? Do I even try to explain it? I know recent college graduates typically have no experience, but obviously I'm not the typical college graduate. Will my long absence from work, or my relatively short time in the Navy, hurt my chances? Should I just put the college on my resume and hope that HR thinks I'm younger than I am? Obviously, my age would then show at the interview and there would be questions. Any help is appreciated.

    Read the article

  • Running "ipconfig /displaydns" in cmd prompt still displays results even after I run "ipconfig /flushdns"

    - by 400_THE_CAT
    Whenever I run "ipconfig /displaydns" I get a long list of sites, even after running "/flushdns". I thought the results should be empty after a /flushdns. Is this normal behavior? I also noticed that after I run /flushdns and browse the internet for a couple of hours, my list of cached sites doesn't really change. Google.com, for example, isn't on the list, but a bunch of sites I've never visited do show up in my DNS cache. Can someone explain this?

    Read the article

  • backing up ntfs disk using rsync on ubuntu

    - by user70366
    For a long time I was using Windows. I have a separate drive I use to keep copies of my media files, photos, etc., which I periodically back up to an external drive. In Windows I used SyncToy to do this. After my Windows install stopped booting, I decided to switch to Linux (Ubuntu 10.10). That seems to be going fine, but now I want to back up my drive to the external drive like before. Mostly the two drives will already be the same, with maybe about 10 GB of extra files added. So I tried to use rsync to synchronise the two drives like this:
        rsync --dry-run -rvlt --modify-window=1 /media/Antonio1TB/Backup /media/FREECOM\ HDD/Backup
    The problem is that the dry run indicates every file on the drive will be copied, not just the files I have recently added. What is the correct command to sync two NTFS drives under Ubuntu so that files that already exist don't get copied again? Thanks.
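
    Two things are worth checking here (both assumptions about this particular setup): without a trailing slash on the source, rsync copies the Backup directory itself into the destination, so everything lands under Backup/Backup and looks "new"; and on NTFS/FAT the usual fallback when timestamps still mismatch is --size-only. A hedged sketch:

        # trailing slash on the source: compare the *contents* of the two Backup dirs
        rsync -rvlt --modify-window=1 --dry-run \
            /media/Antonio1TB/Backup/ "/media/FREECOM HDD/Backup/"

        # if timestamps still differ (e.g. timezone/DST offsets), size-only is the blunt fallback
        rsync -rvl --size-only --dry-run \
            /media/Antonio1TB/Backup/ "/media/FREECOM HDD/Backup/"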

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions. When you move to BizTalk Server 2009 it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009, much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page. This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:
        Messages - last 100 received
        Messages - last 100 sent
        Messages - last 50 suspended
        Service instances - last 100
    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • Weblogic Threads Usage

    - by Hila
    I have an application deployed on WebLogic 10.3 which exhibits strange behavior. I am running a constant (not too high) load on my application (20 concurrent users, running a light activity). The response time is reasonable (well below 100 ms after the application stabilizes). Memory consumption seems fine (my application creates a lot of short-lived objects, but they are garbage collected, so overall memory consumption stays under 500 MB). Thread stats seem healthy as well. And yet, after I leave my test running for a while, more and more execute threads ("[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'") are created, until eventually the application crashes. This test hasn't been running for a long time (all the new threads that you don't see in the first screenshot were created while I was writing this question), and I've seen many more threads being created. Any idea why these threads are being created?

    Read the article

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up hostnames. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this:
        auto eth0
        iface eth0 inet static
            address someaddress
            netmask 255.255.255.0
            gateway 198.58.103.1
            dns-nameservers 74.14.179.5 72.14.188.5
    In /etc/resolv.conf, I see these name servers, like this:
        # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
        # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
        nameserver 74.14.179.5
        nameserver 72.14.188.5
    On another box, I edited resolv.conf directly, as directed by my host's setup help files. It looks like this:
        domain members.linode.com
        search members.linode.com
        nameserver 72.14.179.5
        nameserver 72.14.188.5
        options rotate
    This second box has no issues with hostname lookups and responds quite quickly. Could not having the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
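
    The domain/search lines can be fed through resolvconf from the same place as the nameservers. Note also that the first resolver differs between the two boxes (74.14.179.5 vs 72.14.179.5); if the 74.x address is a typo or unreachable, every lookup waits for it to time out before the second server is tried, which would match the 5-15 second delays. A sketch of the interfaces stanza, assuming the 72.x resolver is the intended one:

        auto eth0
        iface eth0 inet static
            address someaddress
            netmask 255.255.255.0
            gateway 198.58.103.1
            dns-nameservers 72.14.179.5 72.14.188.5
            dns-search members.linode.com
            dns-domain members.linode.com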

    Read the article

  • Questions before I revamp my rendering engine to use shaders (GLSL)

    - by stephelton
    I've written a fairly robust rendering engine using OpenGL ES 1.1 (fixed-function.) I've been looking into revamping the engine to use OpenGL ES 2.0, which necessitates that I use shaders. I've been absorbing information all day long and still have some questions. Firstly, lighting. The fixed-function pipeline is guaranteed to have at least 8 lights available. My current engine finds lights that are "close" to the primitives being drawn and enables them; I don't know how many lights are going to be enabled until I draw a given model. Nothing is dynamically allocated in GLSL, so I have to define in a shader some number of lights to be used, right? So if I want to stick with 8, should I write my general purpose shader to have 8 lights and then use uniforms to tell it how many / which lights to use? Which brings me to another question: should I be concerned with the amount of data I'm allocating in a shader? Recent video cards have hundreds of "stream processors." If I've got a fragment shader being used on some number of fragments in a given triangle, I assume they must each have their own stack to work on. Are read-only variables copied here, or read when needed? My initial goal is to rework my code so that it is virtually identical to the current implementation. What I have in mind is to create my own matrix stack so that I can implement something along the lines of push/popMatrix and apply all my translations, rotations, and scales to this matrix, then provide the matrix to the vertex shader so that it can make very quick vertex translations. Is this approach sound? Edit: My original intention was to ask if there was a tutorial that would explain the bare minimum necessary to jump from fixed-function to using shaders. Thanks!
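
    On the lighting question, a common pattern is exactly that: size the uniform array for the maximum and only fill in the lights that matter for the current draw call. A minimal fragment-shader sketch (names are made up; the strict ES 2.0 minimum only guarantees loops with constant bounds, so the loop runs to MAX_LIGHTS and unused slots are simply left black):

        precision mediump float;

        #define MAX_LIGHTS 8

        struct Light {
            vec3 position;
            vec3 color;      // set to vec3(0.0) for unused slots
        };

        uniform Light u_lights[MAX_LIGHTS];

        varying vec3 v_normal;
        varying vec3 v_position;

        vec3 shade(vec3 albedo) {
            vec3 result = vec3(0.0);
            for (int i = 0; i < MAX_LIGHTS; ++i) {
                vec3 l = normalize(u_lights[i].position - v_position);
                result += albedo * u_lights[i].color * max(dot(v_normal, l), 0.0);
            }
            return result;
        }

    On the stack question: uniforms are per-draw-call constants, not per-fragment copies, so there is no per-fragment storage cost for them; the per-light cost is just the arithmetic in the loop.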

    Read the article

  • Code Clone Analysis on Rawr – Part 1

    - by Dylan Smith
    In this post we'll take a look at the first result from the Code Clone Analysis and do some refactoring to eliminate the duplication. The first result indicated that it found an exact match repeated 14 times across the solution, with 18 lines of duplicated code in each of the 14 blocks. Net lines of code deleted: 179. In this case the code in question was a bunch of classes representing the various Bosses. Every Boss class has a constructor that initializes a whole bunch of properties of that boss; however, for most bosses a lot of these are simply set to 0. Every Boss class inherits from the class MultiDiffBoss, so I simply moved all the initialization of the various properties to the base class constructor and left it up to the Boss subclasses to set only those that differ from the default values. In this case there are actually 22 Boss subclasses; however, due to some inconsistencies in the code structure, Code Clone only identified 14 of them as identical blocks. Since I was in there refactoring the 14 already identified, it was pretty straightforward to identify the other 8 subclasses that had the same duplicated behavior and refactor those also. Note: Code Clone Analysis is pretty slow right now. It takes approx. 1 min to build this solution, but 9 mins to run Code Clone Analysis. Personally, if the results are high quality I'm OK with it taking a long time to run, since I don't expect it's something I would be running all that often. However, it would be nice to run it as part of a nightly build, but at this time I don't believe it's possible to run it outside of Visual Studio due to a dependency on the metadata available in the VS environment.
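
    The shape of the refactoring is the usual "pull defaults up into the base constructor"; a sketch with invented property names (Rawr's real Boss members differ):

        // Base class: every boss gets the common defaults exactly once.
        public abstract class MultiDiffBoss
        {
            protected MultiDiffBoss()
            {
                MinTanks = 0;
                MinHealers = 0;
                BerserkTimer = 0;
            }

            public int MinTanks { get; set; }
            public int MinHealers { get; set; }
            public int BerserkTimer { get; set; }
        }

        // Subclass constructors now set only the values that differ from the defaults.
        public class ExampleBoss : MultiDiffBoss
        {
            public ExampleBoss()
            {
                BerserkTimer = 600;
            }
        }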

    Read the article

  • quick check of open port

    - by shantanuo
    The following is working as expected (I do not want to use nmap). I need to use nc (or any other command built into CentOS) in a shell script to check port 6379 on a remote server. I want the script to exit quickly if no response is received in less than 1 second, but it seems that nc waits too long before quitting with an exit code of 1. How do I "quickly" check whether the port is listening?
        # time nc -z 1.2.3.4 1234
        real    0m21.001s
        user    0m0.000s
        sys     0m0.000s
        # echo $?
        1
        # time nc -z 1.2.3.4 6379
        Connection to 1.2.3.4 6379 port [tcp/*] succeeded!
        real    0m0.272s
        user    0m0.000s
        sys     0m0.008s
        # echo $?
        0
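
    nc's -w flag caps the connect timeout, which is usually all that's needed here; a sketch (IP and port taken from the question):

        # give up after 1 second instead of the default TCP connect timeout
        if nc -z -w 1 1.2.3.4 6379; then
            echo "port 6379 is listening"
        else
            echo "no answer within 1 second"
        fi

    The exit code is still 0 on success and non-zero on failure, so the surrounding script logic doesn't change.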

    Read the article

  • Weird windows quirk when renaming file

    - by Roy Rico
    Hi, when I want to rename a file in Windows 7, I typically select the file, hit F2 (rename) and type the new name. I've been getting this weird bug in which the whole file name gets selected again, sometimes right in the middle of my typing. I can sit there and watch it reselect the entire filename every second or so, as if someone were invisibly hitting Ctrl+A while I type. I have tried making sure that none of my keys are stuck, and have even gotten a new keyboard, but the problem persists. I know it's probably a long shot, but I'm just hoping someone has seen this issue and has a resolution. UPDATE: This is a desktop machine, not a laptop, so there's no issue with a trackpad.

    Read the article

  • Tip: Keeping the ADF Mobile PDF Guide up to date

    - by Chris Muir
    This is a little tip for customers using Oracle's ADF Mobile. If you're like me, it's possible you don't rely on the online HTML version of the Mobile Developer's Guide for ADF, but rather download a PDF version of the file to use locally (look for the "PDF" link at the top right of the guide). For me the convenience of the PDF is that it's faster, I can search the whole document easily, I can read the document split across two pages on my home monitor, the document is still available if I lose my internet connection, and it's easy to read on my iPad (especially on long-haul flights to the US across the Pacific where there is no internet connection!). The trigger point for me to download the Oracle PDF documentation has always been a new point release of JDeveloper. However, ADF Mobile, as an extension to JDeveloper, releases on a much faster and independent schedule to JDeveloper, and this includes updates to the documentation. As such, the 11.1.2.4.0 ADF Mobile PDF guide you have locally might be out of date, and you should take the opportunity to download the latest version. This is particularly important for ADF Mobile, as not only are many new features being added in each release and covered in the new documentation, but the guide is also under rapid improvement to clarify much of what has been written to date. Our documentation teams are super responsive to suggestions on how to improve the guides, and this often shows per point release. How do you tell you have the latest guide? Look at the document part number, which right now is "E24475-03". This is a unique ID per release for the document: the first part is the document number, and the part after the dash is the revision number. If the website document number has a higher revision number, it's time to download a new, up-to-date PDF. One last thing to share: you can follow the ADF Mobile guide document manager Brian Duffield on Twitter to keep abreast of updates. Image courtesy of Stuart Miles / FreeDigitalPhotos.net

    Read the article

  • Pathfinding with MicroPather : costs calculations with sectors and portals

    - by Adan
    Hello, I'm considering using MicroPather to help me with pathfinding. I'm not using a discrete map: I'm working in 2D with sectors and portals. However, I'm wondering what the best way is to compute costs with this library in this context. Just to be clearer about the geometric shapes I'm using: sectors are basically convex polygons, and portals are segments that lie on sectors' edges. MicroPather exposes a pure virtual Graph class that you must inherit from and override three functions. I understand how pathfinding works, so there's no problem in overriding those functions. Right now, my implementation gives me results, i.e. I'm able to find a path in my map, but I'm not sure I'm using an optimal solution. For the AdjacentCost method: I just take the distance between sector centers as the cost. I think a better solution would be to use the portal between the two sectors, compute its center, and then take the cost to be: distance(sector A center, portal center) + distance(sector B center, portal center). I'm pretty sure the approximation using just sector centers is enough for most cases, but with thin, long sectors that are perpendicular, this approximation could mislead the A* algorithm. For the LeastCostEstimate method: I just take the midpoint of the two sectors. So, as you can see, I'm always working with sector centers, and it's working fine, but I'm pretty sure there's a better way to do this. Any suggestions or feedback? Thanks in advance!
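
    The portal-based cost described above is cheap to compute; a minimal C++ sketch (Sector and Portal are stand-in types, not MicroPather classes - only the Graph overrides belong to the library):

        #include <cmath>

        struct Vec2 { float x, y; };

        static float Distance(const Vec2& a, const Vec2& b) {
            float dx = a.x - b.x, dy = a.y - b.y;
            return std::sqrt(dx * dx + dy * dy);
        }

        struct Portal { Vec2 a, b; };    // segment shared by two sectors
        struct Sector { Vec2 center; };  // centroid of the convex polygon

        // Cost of moving from 'from' to 'to' through their shared portal's midpoint.
        float AdjacentCostThroughPortal(const Sector& from, const Sector& to, const Portal& p) {
            Vec2 mid = { (p.a.x + p.b.x) * 0.5f, (p.a.y + p.b.y) * 0.5f };
            return Distance(from.center, mid) + Distance(mid, to.center);
        }

    LeastCostEstimate can stay the straight-line distance between the two sector centers; as long as the estimate never exceeds the real remaining cost, A* still finds the optimal path.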

    Read the article

  • Use DOS batch to move all files up 1 directory

    - by Harminoff
    I have created a batch file to be executed through the right-click menu in Win7. When I right-click on a folder, I would like the batch file to move all files (excluding folders) up one directory. I have this so far:
        PUSHHD %1
        MOVE "%1\*.*" ..\
    This seems to work as long as the folder I'm moving files from doesn't have any spaces. When the folder does have spaces, I get an error message: "The syntax of the command is incorrect." So my batch works on a folder titled PULLTEST but not on a folder titled PULL TEST. Again, I don't need it to move folders, just files. And I would like it to work in any directory on any drive; there will be no specific directories that I will be working in. Below is the registry file I made, if needed for reference:
        Windows Registry Editor Version 5.00

        [HKEY_CLASSES_ROOT\Directory\shell\PullFiles]
        @="PullFilesUP"

        [HKEY_CLASSES_ROOT\Directory\shell\PullFiles\command]
        @="\"C:\\Program Files\\MyBatchs\\PullFiles.bat\" \"%1\""
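
    Assuming PUSHHD is a typo for PUSHD, the remaining problem is double quoting: Explorer already passes the folder as "C:\PULL TEST", so "%1\*.*" expands with quotes in the middle of the path. Using %~1 (which strips the surrounding quotes) and re-quoting fixes it; a sketch:

        @echo off
        rem %~1 = the right-clicked folder, without its surrounding quotes
        pushd "%~1" || exit /b 1
        rem move with a wildcard matches files only, so subfolders stay where they are
        move /Y *.* ..
        popd

    The registry entry can stay exactly as it is, since it already passes "%1" quoted.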

    Read the article

  • Splitting a tetris game apart - where to put time-management?

    - by nightcracker
    I am creating a Tetris game in C++ & SDL, and I'm trying to do it "good" by making it object-oriented and keeping scopes small. So far I have the following structure:
        A main with some low-level SDL setup and input handling
        A game class that keeps track of the score and provides the interface for main (move block down, etc.)
        A map class that keeps track of the current game field and which blocks are where. Used by the game class.
        A block class that represents the currently falling block. Used by game.
        A renderer class abstracting low-level SDL to a format where you render "tetris blocks". Used by map and block.
    Now I'm having a tough time deciding where to put the time-management of this game. For example, where should it be decided, when a block bumps the bottom of the screen, how long it takes until the current block locks in place and a new block spawns? I also have another, unrelated question: is there some place where you can find standard data on Tetris, like standard score tables, rulesets, timings, etc.?
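
    One common answer is to keep all timing inside the game class's per-frame update and let map/block stay passive; a sketch with made-up member names ("lock delay" is the usual Tetris term for the grace period after a block lands):

        struct Game {
            float fallTimer = 0.0f, lockTimer = 0.0f;
            float fallInterval = 0.8f;   // seconds per row; made-up tuning value
            float lockDelay = 0.5f;      // grace period before the piece locks

            void update(float dt) {
                if (blockResting()) {            // block is sitting on the floor or the stack
                    lockTimer += dt;
                    if (lockTimer >= lockDelay) {
                        lockBlock();             // the map absorbs the block
                        spawnBlock();
                        lockTimer = 0.0f;
                    }
                } else {
                    lockTimer = 0.0f;
                    fallTimer += dt;
                    if (fallTimer >= fallInterval) {
                        fallTimer -= fallInterval;
                        moveBlockDown();
                    }
                }
            }

            // Stubs standing in for calls into the map/block classes.
            bool blockResting() { return false; }
            void lockBlock() {}
            void spawnBlock() {}
            void moveBlockDown() {}
        };

    For the second question, the community-maintained Tetris wikis (e.g. the Hard Drop wiki) collect the guideline scoring tables, gravity curves and lock-delay values.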

    Read the article
