Search Results

Search found 23265 results on 931 pages for 'justin case'.

Page 399/931

  • I can't "unmaximize" my window

    - by Beska
    I've got a Windows app (I don't think it matters which one, but in case you're wondering, it's SQL Server Profiler) that I can't put back into "windowed" mode. I can maximize or minimize it, either by right-clicking on the task bar and selecting maximize... or, if the window is already maximized, I can click the minimize button to minimize it... The problem is when I click the middle button... the one that toggles between maximized and "windowed" mode, the windowed mode just makes it disappear. The program is still running fine, and I can bring it back up (maximized) by selecting it in the task bar. It doesn't seem to be hanging out on any of the edges of the screen... as far as I can tell, it's just not there. And, of course, the app is "smart" enough to remember its status, so restarting the app doesn't help. Has anyone seen this? Know how to fix it?

    Read the article

  • Need advice on choosing AWS EC2

    - by Mayank
    I'm planning to host a website where in the first phase I would target 30,000 users. It is in PHP and runs on an Apache server. I'm assuming 8,000 users can be online in the worst-case scenario, and 1,000 of them will be uploading photographs. A photograph will be resized to around 1 MB on the client side, and one HTTP request uploads only one photograph. My plan:
    - 2 Small EC2 instances to run Apache httpd
    - 2 Small EC2 instances for the DB (PostgreSQL), one to write data and the other as its read replica
    - EBS volumes for the DBs
    - Lastly, Amazon S3 for the uploaded photographs
    My questions:
    - Is a Small EC2 instance more than what I require? I mean, should I go for Micro?
    - Is 8,000 simultaneous users the right number (to decide what EC2 instance to choose) for a new website? Or should I go for Small instances to make the setup capable of handling spikes?
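
    As a rough sanity check on those numbers, here is a back-of-the-envelope sketch of the aggregate upload traffic; the per-upload duration is an assumption, not something stated in the question:

        // Back-of-the-envelope check (the 10 s per upload is an assumption, not from the question).
        public class UploadTrafficEstimate {
            public static void main(String[] args) {
                int concurrentUploads = 1000;     // from the question
                double secondsPerUpload = 10.0;   // assumption
                double megabytesPerUpload = 1.0;  // from the question
                double uploadsPerSecond = concurrentUploads / secondsPerUpload;   // ~100 uploads/s
                double ingressMBps = uploadsPerSecond * megabytesPerUpload;       // ~100 MB/s
                double ingressMbps = ingressMBps * 8;                             // ~800 Mbit/s aggregate
                System.out.printf("~%.0f uploads/s, ~%.0f MB/s, ~%.0f Mbit/s across the web tier%n",
                        uploadsPerSecond, ingressMBps, ingressMbps);
            }
        }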

    Read the article

  • How to reset display settings in XFCE / Ubuntu 12.04 and also fglrx drivers

    - by Agent24
    I recently upgraded to Ubuntu 12.04 and since I hate Unity I installed the Xubuntu package and am using XFCE instead. Since I have a Radeon HD5770 I also installed the fglrx drivers. This all went fine (aside from the fact that the post-release update fglrx drivers have an error on installation and Ubuntu thinks they're not installed when they actually are). I configured my display settings (dual monitors, a 17" CRT on VGA and a 17" LCD on DVI) in the amdcccle program and everything was perfect. THEN, 2 days ago, I accidentally clicked on the "Display" settings in the XFCE "settings" manager. After that, everything got screwed. Now, I normally run the CRT at 1152x864 and the LCD at 1280x1024, with the CRT as my primary monitor (with panel) and the LCD without panels etc, just to display other windows when I want to drag them over there. The problem is now that if I set my CRT to 1152x864, it stays at 1280x1024 virtually and half the stuff falls off the screen. It also puts the LCD at 1280x1024 BUT then overlays the CRT's display on top with different wallpaper in an L shape down the right-hand and bottom edges. In short, nothing makes sense and everything is FUBAR. I tried uninstalling fglrx through Synaptic, and renaming xorg.conf and also the XFCE XML file that has the monitor settings, but it still won't make sense. Unity, on the other hand, can currently set everything normally, so the problem appears to be only with XFCE. In any case, I can't even get the fglrx drivers back: when I re-installed them, I couldn't run amdcccle anymore as it says the driver isn't installed!! Can someone help me reset my XFCE settings so the monitors aren't screwed with some incorrect virtual desktop size, and also so I can get the fglrx drivers back and working? I really don't want to have to format and reinstall and go through all the hassle, but it looks like I may have to :(

    Read the article

  • Need a free/open-source network monitoring tool for an office LAN

    - by Amit Ranjan
    I know there must be a lot of similar questions on SU. Let me explain my setup first. I have 4-5 PCs, laptops and a few Android phones in my office. To get them on a network, I have a UTStarCom WA3002G1 ADSL2+ router with a landline broadband connection, which has nothing to do with any PC except the configuration settings. The broadband channel is always on; we just need to switch on the router and the internet is ready for us. No Internet Connection Sharing is done via any PC. I have a limited 20 GB monthly plan, which is consumed in 10-20 days, depending upon the download requirements. So in the above case, I need some suggestions from you:
    1. How do I monitor my Internet bandwidth along with the connected systems, in real time? Is any free open-source tool available?
    2. Tweaks/changes on the PCs to save bandwidth, as my ISP does not have any unlimited plan.
    The PCs and laptops run Windows XP and/or Windows 7; tools for either platform are welcome.

    Read the article

  • SQL Maintenance Cleanup Task 'Success' But not deleting files

    - by Seph
    I have a maintenance plan set up for databases on a server. Part of the backup is a Maintenance Cleanup Task. SQL Server version: 2008. The task that 'succeeds' is set up as:
    - Delete backup files
    - Correct folder (same address as the backup task)
    - File extension: bak (NOT .bak)
    - Delete files older than: 20 hour(s)
    I have other similar cleanup tasks that occur in the same maintenance plan which work fine. This plan has worked fine in the past; I just noticed that last night it reported 'success' and the rest of the plan continued, however the file from 2 days ago still remains. I have checked similar questions such as this question, and this is not the case, as my maintenance task worked fine two days ago and for the past several weeks.

    Read the article

  • Have you ever used kon-boot?

    - by Ctrl Alt D-1337
    Has anyone here ever used kon-boot? I guess it may work, based on the few blog posts about it, but I feel kind of concerned and am interested in hearing experiences from anyone who has used it multiple times with no side effects. I am slightly worried about any direct memory altering it tries to do. I am also worried whether it does its job cleanly, or whether it hides the fact that it puts in a low-level trojan, or whether the author plans to do anything like that in a future release, as it looks like closed source from the site. Also, I don't intend to gain illegal access, but I find these sorts of things very useful for my box of live discs I take everywhere, just in case. OT: another question that may be of interest to readers here.

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are in doubt about how to include pages into one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included into one another. Here is a shortened extract (because reality is more complex):
        head
        header
        top
        navigation
        actionspanel
        main
        header
        actionspanel
        foot
        footer
    Moreover each JSP (also the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or have links and labels that depend on the processing by the preceding servlet. I know that jsp-include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds data for header and footer beside its core data, and each subsequently included JSP uses the same bean directive? For example:
        DirectoryJSP <- DirectoryBean
        CompareJSP <- CompareBean
    Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose? For example:
        DirectoryJSP <- DirectoryBean
        HeaderJSP <- HeaderBean
        FooterJSP <- FooterBean
        CompareJSP <- CompareBean
        HeaderJSP <- HeaderBean
        FooterJSP <- FooterBean
    In the second case: should the subsequent beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?
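
    For what it's worth, a minimal Java sketch (hypothetical class and property names) of the nested variant of the second option, where only the parent bean is attached to the request and the included header JSP reads its data through the parent (e.g. via ${page.header.title}):

        import java.util.ArrayList;
        import java.util.List;

        // Hypothetical page bean: the parent owns the beans for its included JSPs,
        // so the servlet only has to put one attribute on the request.
        public class DirectoryBean {

            public static class HeaderBean {
                private String title;
                public String getTitle() { return title; }
                public void setTitle(String title) { this.title = title; }
            }

            private final HeaderBean header = new HeaderBean();
            private final List<String> entries = new ArrayList<>();

            public HeaderBean getHeader() { return header; }
            public List<String> getEntries() { return entries; }
        }

        // In the servlet that forwards to DirectoryJSP (sketch only):
        //   DirectoryBean page = new DirectoryBean();
        //   page.getHeader().setTitle("Directory");
        //   request.setAttribute("page", page);
        //   request.getRequestDispatcher("/directory.jsp").forward(request, response);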

    Read the article

  • rkhunter: right way to handle warnings?

    - by zuba
    I googled some and checked out the first two links it found:
        http://www.skullbox.net/rkhunter.php
        http://www.techerator.com/2011/07/how-to-detect-rootkits-in-linux-with-rkhunter/
    They don't mention what I should do in case of warnings such as these:
        Warning: The command '/bin/which' has been replaced by a script: /bin/which: POSIX shell script text executable
        Warning: The command '/usr/sbin/adduser' has been replaced by a script: /usr/sbin/adduser: a /usr/bin/perl script text executable
        Warning: The command '/usr/bin/ldd' has been replaced by a script: /usr/bin/ldd: Bourne-Again shell script text executable
        Warning: The file properties have changed:
        File: /usr/bin/lynx
        Current hash: 95e81c36428c9d955e8915a7b551b1ffed2c3f28
        Stored hash : a46af7e4154a96d926a0f32790181eabf02c60a4
    Q1: Are there more extended HowTos which explain how to deal with different kinds of warnings?
    And the second question: were my actions sufficient to resolve these warnings?
    a) Find the package which contains the suspicious file; e.g. it is debianutils for the file /bin/which:
        ~ > dpkg -S /bin/which
        debianutils: /bin/which
    b) Check the debianutils package checksums:
        ~ > debsums debianutils
        /bin/run-parts OK
        /bin/tempfile OK
        /bin/which OK
        /sbin/installkernel OK
        /usr/bin/savelog OK
        /usr/sbin/add-shell OK
        /usr/sbin/remove-shell OK
        /usr/share/man/man1/which.1.gz OK
        /usr/share/man/man1/tempfile.1.gz OK
        /usr/share/man/man8/savelog.8.gz OK
        /usr/share/man/man8/add-shell.8.gz OK
        /usr/share/man/man8/remove-shell.8.gz OK
        /usr/share/man/man8/run-parts.8.gz OK
        /usr/share/man/man8/installkernel.8.gz OK
        /usr/share/man/fr/man1/which.1.gz OK
        /usr/share/man/fr/man1/tempfile.1.gz OK
        /usr/share/man/fr/man8/remove-shell.8.gz OK
        /usr/share/man/fr/man8/run-parts.8.gz OK
        /usr/share/man/fr/man8/savelog.8.gz OK
        /usr/share/man/fr/man8/add-shell.8.gz OK
        /usr/share/man/fr/man8/installkernel.8.gz OK
        /usr/share/doc/debianutils/copyright OK
        /usr/share/doc/debianutils/changelog.gz OK
        /usr/share/doc/debianutils/README.shells.gz OK
        /usr/share/debianutils/shells OK
    c) Relax about /bin/which, as I see "/bin/which OK".
    d) Put the file /bin/which into /etc/rkhunter.conf as SCRIPTWHITELIST="/bin/which".
    e) For warnings such as the one for the file /usr/bin/lynx, update the checksum with rkhunter --propupd /usr/bin/lynx.cur
    Q2: Do I resolve such warnings the right way?

    Read the article

  • How atomic is a SELECT INTO?

    - by leo.pasta
    Last week I ran into an interesting situation that prompted me to challenge a long-standing assumption. I always thought that a SELECT INTO was an atomic statement, i.e. it would either complete successfully or the table would not be created. So I was very surprised when, after a "select into" query was chosen as a deadlock victim, the next execution (as the app would handle the deadlock and retry) would fail with:
        Msg 2714, Level 16, State 6, Line 1
        There is already an object named '#test' in the database.
    The only hypothesis we could come up with was that the "create table" part of the statement was committed independently from the actual "insert". We can confirm that by capturing the "Transaction Log" event in Profiler (filtering by SPID). The result is that when we run:
        SELECT * INTO #results FROM master.sys.objects
    we get the following output in Profiler: it is easy to see the two independent transactions. Although this behaviour was a surprise to me, it is very easy to work around if you feel the need (as we did in this case). You can either change it into independent "CREATE TABLE / INSERT SELECT" statements, or you can enclose the SELECT INTO in an explicit transaction:
        SET XACT_ABORT ON
        BEGIN TRANSACTION
        SELECT * INTO #results FROM master.sys.objects
        COMMIT

    Read the article

  • No NFC for the iPhone, and here's why

    - by David Dorf
    I, like many others in the retail industry, was hoping the iPhone 5 would include an NFC chip that enabled a mobile wallet. In previous postings I've discussed the possible business case and the foreshadowing of Passbook, but it wasn't meant to be. A few weeks ago I was considering all the rumors, and it suddenly occurred to me that it wasn't in Apple's best interest to support an NFC chip. Yes, they have patents in this area, but perhaps they are more defensive than indicating new development. Steve Jobs wanted to always win, but more importantly he didn't want others to win at his expense. It drove him nuts that Windows was more successful than MacOS, and clearly he was bothered by Samsung and other handset manufacturers copying the iPhone. But he was most angry at Google for their stewardship of Android. If the iPhone 5 had an NFC chip, who would benefit most? Google Wallet is far and away the leader in NFC-based payments via mobile phones in the US. Even without Steve at the helm, Apple isn't going to do anything to help Google. Plus Apple doesn't like to do things in an open way -- then they lose control. For example, you don't see iPhones with expandable memory, replaceable batteries, or USB connectors. Adding a standards-based NFC chip just isn't in their nature. So I don't think Apple is holding back on the NFC chip for the 5S or 6. It just isn't going to happen unless they can figure out how to prevent others from benefiting from it. All the other handset manufacturers will use NFC as a differentiator, which may be enough to keep Google and Isis afloat, and of course Square and PayPal aren't betting on NFC anyway. This isn't the end of alternative payments, it's just a major speed bump.

    Read the article

  • Roadmap for Thinktecture IdentityServer

    - by Your DisplayName here!
    I got asked today if I could publish a roadmap for thinktecture IdentityServer (idsrv for short). Well – I got a lot of feedback after B1, and one of the biggest points was the data access layer. So I made two changes: I moved the configuration database access code to EF 4.1 code first. That makes it much easier to change the underlying database; it is now just a matter of changing the connection string to use real SQL Server instead of SQL Compact. Important when you plan to scale out. I also included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) – so I made them optional. If you want to use them, go to web.config and uncomment the new provider. Then there are some other small changes: The relying party registration entries now have added fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint of how the login experience should look per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string. WS-Federation single sign-out now conforms to the spec. I made certificate-based endpoint identities for SSL endpoints optional; this caused some problems with configuration and versioning of existing clients. I hope I can release the RC in the next few days. If there are no major issues, there will be an RTM very soon!

    Read the article

  • "Don't do programming after a few years of starting career". Is this a fair advice?

    - by Muhammad Yasir
    I am a somewhat experienced developer with approximately 5 years of experience in PHP, somewhat less in Java and C#, and I am trying to learn some Python nowadays. Since the start of my career as a programmer I have been told every now and then by fellow programmers that programming is only suitable for the first few years of a career (most of them take it as 5 years) and that one must change direction after that. The reasons they present include the headaches and pressures associated with programming. They also say that programmers are less social and don't usually like to give time to their families, etc., and especially: "Oh come on, you can not do programming your entire life!" I am somewhat confused here and need to ask others about it. If I leave programming, then what do I do?! I guess teaching may be a good option in this case, but it would perhaps require first earning a PhD degree. It may also be noteworthy that in my country (Pakistan) the life of a programmer is not very good, in that normally they must put in 2-3 extra hours in the office to accomplish urgent programming tasks. I have a sense that the situation is somewhat similar in other countries and regions as well. Do you think it is fair advice to change career from programming to something else after spending 5 years in this field? UPDATE: Oh wow... I never knew people could have 40+ years of experience in this field. I am both excited and amazed to see that people have been doing it since 1971... that means 15 years before my birth! It is nice to be able to talk to such experienced people; we don't get such a chance here in Pakistan.

    Read the article

  • What are the correct set of DLLs and placement for SSL support in mIRC on 64 bit Windows?

    - by honkbish
    I am using mIRC 6.35 on a fresh install of Windows 7 64-bit. No matter what versions of the OpenSSL DLLs I use, nor where I place them, I cannot get mIRC to work with SSL; I get the "ssl not supported" error. The DLLs recommended on mIRC's help page (/ssl.html on the mIRC site) do not work, no matter whether I put them in the mIRC Program Files folder or anywhere else. Same with the DLLs from http://www.slproweb.com/products/Win32OpenSSL.html, which also require the Visual C++ runtimes. I am unsure whether I need the 32-bit DLLs (because mIRC itself is 32-bit) or the 64-bit DLLs, or where to correctly place them. (Perhaps I currently have a case of incorrect DLLs in a path I am not aware of overriding the other placements...) Does ANYONE have any tips for 'debugging' this, or do they themselves have it working? Thanks in advance!

    Read the article

  • Access Officejet Pro L7590 memory card reader

    - by luri
    I can't manage to access my printer's memory card reader in Nautilus. I can only access it with hp-unload. Here's a sample output from this command:
        lubuntu@L-X6:~$ hp-unload hp:/net/Officejet_Pro_L7500?zc=HP065193
        HP Linux Imaging and Printing System (ver. 3.10.6)
        Photo Card Access Utility ver. 3.3
        Copyright (c) 2001-9 Hewlett-Packard Development Company, LP
        This software comes with ABSOLUTELY NO WARRANTY.
        This is free software, and you are welcome to distribute it
        under certain conditions. See COPYING file for more details.
        Using device: hp:/net/Officejet_Pro_L7500?zc=HP065193
        error: Photo card write failed (Card may be write protected)
        Photocard on device hp:/net/Officejet_Pro_L7500?zc=HP065193 mounted
        DO NOT REMOVE PHOTO CARD UNTIL YOU EXIT THIS PROGRAM
        warning: Photo card is write protected.
        Type 'help' for a list of commands. Type 'exit' to quit.
        pcard: / > ls
        Name            Size      Type
        dcim/                     directory
        eos_digi.tal    0 B       unknown/unknown
        1 files, 0 B
        pcard: / > cd dcim
        pcard: /dcim > ls
        Name            Size      Type
        .                         directory
        ..                        directory
        100eos5d/                 directory
        267canon/                 directory
        270canon/                 directory
        271canon/                 directory
        272canon/                 directory
        0 files, 0 B
        pcard: /dcim > cd 272canon
        pcard: /dcim/272canon > ls
        Name            Size      Type
        .                         directory
        ..                        directory
        _mg_7201.jpg    3.1 MB    image/jpeg
        ...........(some more files).................
        _mg_7281.jpg    2.5 MB    image/jpeg
        _mg_7282.jpg    2.5 MB    image/jpeg
        82 files, 241.6 MB (253377883)
    How can I access it from Nautilus or mount it as a filesystem? Note that this is similar to this other question: "Can't get HP Officejet 6500 card reader to work", but in that case there seemed to be no supported device at all, while in my case I do manage to access the memory card from hp-unload.

    Read the article

  • How do I get a rotated sprite to move left or right?

    - by rphello101
    Using Java/Slick 2D, I'm using the mouse to rotate a sprite on the screen and the directional keys (in this case, WASD) to move the sprite. Forwards and backwards is easy, just position += cos(ang)*speed or position -= cos(ang)*speed. But how do I get the sprite to move left or right? I'm thinking it has something to do with adding 90 degrees to the angle or something. Any ideas? Rotation code:
        int mX = Mouse.getX();
        int mY = HEIGHT - Mouse.getY();
        int pX = sprite.x + sprite.image.getWidth()/2;
        int pY = sprite.y + sprite.image.getHeight()/2;
        double mAng;
        if (mX != pX) {
            mAng = Math.toDegrees(Math.atan2(mY - pY, mX - pX));
            if (mAng == 0 && mX <= pX)
                mAng = 180;
        } else {
            if (mY > pY) mAng = 90;
            else mAng = 270;
        }
        sprite.angle = mAng;
        sprite.image.setRotation((float) mAng);
    And the movement code (delta is the change in time):
        Input input = gc.getInput();
        Vector2f direction = new Vector2f();
        Vector2f velocity = new Vector2f();
        direction.x = (float) Math.cos(Math.toRadians(sprite.angle));
        direction.y = (float) Math.sin(Math.toRadians(sprite.angle));
        if (direction.length() > 0)
            direction = direction.normalise(); // On a separate note, what does this line of code do?
        velocity.x = (float) (direction.x * sprite.moveSpeed);
        velocity.y = (float) (direction.y * sprite.moveSpeed);
        if (input.isKeyDown(sprite.up)) {
            sprite.x += velocity.x * delta;
            sprite.y += velocity.y * delta;
        }
        if (input.isKeyDown(sprite.down)) {
            sprite.x -= velocity.x * delta;
            sprite.y -= velocity.y * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            // ???
        }
        if (input.isKeyDown(sprite.right)) {
            // ???
        }
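
    The hunch in the question is basically right. A minimal sketch of how the two empty branches could be filled in, assuming the same Slick2D sprite fields used above; note that with screen coordinates (y grows downwards), angle + 90 may turn out to be the sprite's left rather than its right, in which case the signs simply swap:

        // Sketch: the strafe direction is the facing direction rotated by 90 degrees.
        float strafeX = (float) Math.cos(Math.toRadians(sprite.angle + 90));
        float strafeY = (float) Math.sin(Math.toRadians(sprite.angle + 90));
        if (input.isKeyDown(sprite.right)) {
            sprite.x += strafeX * sprite.moveSpeed * delta;
            sprite.y += strafeY * sprite.moveSpeed * delta;
        }
        if (input.isKeyDown(sprite.left)) {
            sprite.x -= strafeX * sprite.moveSpeed * delta;
            sprite.y -= strafeY * sprite.moveSpeed * delta;
        }
        // (For the side note in the code above: normalise() rescales the vector to
        // length 1, so movement speed stays the same regardless of direction.)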

    Read the article

  • Ubuntu 12.10 boots to purple or black screen but intermittently boots fine

    - by Nic
    I have a fresh install of Ubuntu 12.10 64-bit dual booting with Win7 64-bit. Windows boots fine every time. When I choose Ubuntu from the Grub2 menu it will sometimes boot just fine. Most of the time, though, it gets stuck at a purple screen with nothing happening and no keys or key combinations working. Other times, instead of the purple screen I get a black screen with a flashing cursor at the top, and nothing happens. I need to hold down the power button to restart, and after a couple of tries it will eventually boot into Ubuntu. Once that happens everything runs without any problems. I have tried different approaches to fix the problem but to no avail. I tried removing "quiet splash", used no splash, and nomodeset. What I got from this was seeing all the text of the boot process, but more often than not the process gets stuck right after recognizing all the USB ports and devices. If it gets stuck nothing happens (except when I plug in a USB device: it still recognizes it with a new line of text). In the case when the boot process works, after it lists the USB devices it tells me something like: recovery of read-only filesystem necessary (it's the filesystem that Ubuntu runs on). Then it does the recovery and I get: recovery complete. After that Ubuntu boots properly and I get to see the login screen. I have no idea what to do to fix this problem. I have to reboot 3 to 5 times every time I want to get into Ubuntu, and I feel like I'm breaking my new laptop. (It's a Lenovo IdeaPad Z580, btw: i5 processor and nvidia gtx640 graphics card.) I hope someone can help me. Thanks. Edit: I just got a "failed to enable AA" error when waking it up from suspend. I don't know if that helps or has anything to do with the boot problems.

    Read the article

  • Transport rules on a live@edu instance to filter SSNs

    - by wbfreema
    Has anyone implemented transport rules on a live@edu (or whatever Microsoft is calling it these days) instance to reject the delivery of messages that include SSNs? I see how to set up transport rules in Exchange 2007, and even how to do it in the Windows Live Admin page, though the rules there don't seem to allow for regexes; can anyone confirm this? If this is the case, has anyone ever connected PowerShell to a live@edu instance to implement the code found at the bottom of this page: http://technet.microsoft.com/en-us/library/aa997187.aspx ? What I really need is a concise how-to.

    Read the article

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know why a) the file system is modified at all and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:
        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
    Thanks, Chris

    Read the article

  • Home-Router: Access internal server using external IP [migrated]

    - by user15863
    If I've got a typical home router -- say a Netgear -- which has certain ports forwarded to an internal server, is there a way to tweak the router to let me access that internal server using the external IP address from within the same network? Is there a non-enterprise-grade router that can handle this type of thing? In case that was strangely worded, let me re-phrase with an example. My external IP is 1.2.3.4. My internal server is 10.4.3.100. Port 1178 is being forwarded from the router to 10.4.3.100. I'd like to be able to hit 10.4.3.100 from an internal IP of 10.4.3.10 by using the external IP of 1.2.3.4. Possible?

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices, rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need to have eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices with the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normal; however, I would like to render a cube. Also, this still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
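
    For what it's worth, here is a sketch of the data layout that the duplication option implies, written as plain Java arrays just to illustrate it (only two of the six faces are shown, and the names are hypothetical). Each corner position is repeated once per face it belongs to, so every copy can carry that face's normal and texture coordinates; a cube then has 24 vertices instead of 8, and the index buffer still pays off because the 24 vertices cover 36 triangle corners:

        // Sketch only: positions repeated per face so normals/UVs can differ per face.
        float[] positions = {
            // front face (z = +1): 4 vertices
            -1, -1,  1,    1, -1,  1,    1,  1,  1,   -1,  1,  1,
            // right face (x = +1): shares two corner positions with the front face,
            // but as separate vertices so they can carry a different normal
             1, -1,  1,    1, -1, -1,    1,  1, -1,    1,  1,  1,
            // ... remaining four faces omitted
        };
        float[] normals = {
            // front face: all four vertices use the +z face normal
            0, 0, 1,   0, 0, 1,   0, 0, 1,   0, 0, 1,
            // right face: all four vertices use the +x face normal
            1, 0, 0,   1, 0, 0,   1, 0, 0,   1, 0, 0,
            // ...
        };
        short[] indices = {
            0, 1, 2,   0, 2, 3,    // front face: two triangles
            4, 5, 6,   4, 6, 7,    // right face: two triangles
            // ...
        };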

    Read the article

  • Implementing game rules in a tactical battle board game

    - by Setzer22
    I'm trying to create a game similar to what one would find in typical D&D board-game combat. For more examples you could think of games like Advance Wars, Fire Emblem or Disgaea. I should say that I'm using design by component so far, but I can't find a nice way to fit components into the part I want to ask about. I'm struggling right now with the "game rules" logic, that is, the code that displays the menu, allows the player to select units and command them, then tells the unit game objects what to do given the player input. The best way I could think of handling this was using a big state machine, so everything that can be done in a "turn" is handled by this state machine, and the update code of this state machine does different things depending on the state. This approach, though, leads to a large amount of code (everything not model-related) going into one big class. Of course I can subdivide this big class into more classes, but it doesn't feel modular and upgradable enough. I'd like to know of better systems to handle this, in order to be able to upgrade the game with new rules without having a monstrous if/else chain (or switch/case, for that matter). So, any ideas? I'd also ask that if you recommend a specific design pattern, you also provide some kind of example or further explanation, and don't stick to "Yeah, you should use MVC and it'll work".
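
    One pattern that tends to fit this situation is the State pattern, with each phase of a turn as its own class, so adding rules means adding or swapping state classes rather than growing one big switch. A deliberately tiny, self-contained Java sketch (hypothetical names, console strings standing in for real input and units):

        // Each turn phase is its own object; the battle loop only talks to the
        // current TurnState and replaces it with whatever the state returns.
        interface TurnState {
            TurnState handleInput(String command);
        }

        class SelectUnitState implements TurnState {
            public TurnState handleInput(String command) {
                System.out.println("Selected unit: " + command);
                return new SelectActionState(command);
            }
        }

        class SelectActionState implements TurnState {
            private final String unit;
            SelectActionState(String unit) { this.unit = unit; }
            public TurnState handleInput(String command) {
                System.out.println(unit + " performs: " + command);
                return new SelectUnitState();   // back to picking the next unit
            }
        }

        class BattleLoopDemo {
            public static void main(String[] args) {
                TurnState state = new SelectUnitState();
                state = state.handleInput("knight");   // phase 1: pick a unit
                state = state.handleInput("attack");   // phase 2: pick an action
            }
        }

    In the same spirit, the actions themselves can become command objects, which keeps per-action rules out of the state machine as well.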

    Read the article

  • How to evaluate a user against optimal performance?

    - by Alex K
    I have trouble coming up with a system for assigning a rating to a player's performance. Well, technically there is a trivial rating system, but I don't like it because it would mean assigning negative scores, which I think most players will be discouraged by. The problem is that I only know the ideal number of actions to get the desired result. The worst case is an infinite number of actions, so there is no obvious scale. The trivial way I referred to above is to take score = (#optimal-moves - #players-moves), with the ideal score being zero. However, psychologically people like big numbers. No one wants to win by getting a mark of 0. I wonder if there is a system that someone else has come up with before to solve this problem? Essentially I wish to score the players based on:
    - How close they've come to the ideal solution.
    - Different challenges will have different optimal numbers of actions, so the scoring system needs to take that into account, e.g. Challenge 1 - max 10 points, Challenge 2 - max 20 points.
    - I don't mind giving the players negative scores if they've performed exceptionally badly, I just don't want all scores to be <= 0.
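
    Not an established standard, but one simple scheme that matches both constraints above (a per-challenge maximum and no negative numbers for merely sub-optimal play) is to scale the challenge's maximum points by the ratio of optimal moves to moves actually used; a small Java sketch:

        // Sketch: a perfect run earns the full maxPoints for the challenge, and every
        // extra move decays the score towards zero instead of going below it.
        static int score(int maxPoints, int optimalMoves, int playerMoves) {
            int used = Math.max(playerMoves, optimalMoves); // guard against "better than optimal" input
            return Math.round(maxPoints * (float) optimalMoves / used);
        }

        // score(10, 5, 5)   -> 10  (perfect run on a 10-point challenge)
        // score(20, 8, 16)  -> 10  (twice the optimal moves on a 20-point challenge)
        // score(10, 5, 500) -> 0   (arbitrarily bad runs bottom out at 0, never negative)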

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also about using the ODBC adapter to generate schemas. But what about creating a receive location? An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema. If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter, and click Address. You will now be shown the ODBC Community Adapter Transport properties window. Select the connection string and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC, this will be shown (if not, go and take a look at my previous post to see how this is done). You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is ready.

    Read the article

  • SQL SERVER – Validating Unique Column Name Across Whole Database

    - by pinaldave
    I sometimes come across very strange requirements, and often I do not receive a proper explanation of them. Here is one of those examples.
    Asker: "Our business requirement is that when we add a new column we want it unique across the current database."
    Pinal: "Why do you have such a requirement?"
    Asker: "Do you know the solution?"
    Pinal: "Sure, I can come up with the answer, but it will help me come up with an optimal answer if I know the business need."
    Asker: "Thanks – what will be the answer in that case?"
    Pinal: "Honestly, I am just curious about the reason why you need your column name to be unique across the database."
    (Silence)
    Pinal: "Alright – here is the answer – I guess you do not want to tell me the reason."
    Option 1: Check if Column Exists in Current Database
        IF EXISTS (SELECT * FROM sys.columns WHERE Name = N'NameofColumn')
        BEGIN
            SELECT 'Column Exists'
            -- add other logic
        END
        ELSE
        BEGIN
            SELECT 'Column Does NOT Exists'
            -- add other logic
        END
    Option 2: Check if Column Exists in Current Database in Specific Table
        IF EXISTS (SELECT * FROM sys.columns WHERE Name = N'NameofColumn' AND OBJECT_ID = OBJECT_ID(N'tableName'))
        BEGIN
            SELECT 'Column Exists'
            -- add other logic
        END
        ELSE
        BEGIN
            SELECT 'Column Does NOT Exists'
            -- add other logic
        END
    I guess the user did not want to share the reason why he had a requirement of having column names unique across the database. Here is my question back to you – have you ever faced a similar situation where you needed a unique column name across a database? If not, can you guess what could be the reason for this kind of requirement? Additional Reference: SQL SERVER – Query to Find Column From All Tables of Database. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • BDD/TDD vs JAD?

    - by Jonathan Conway
    I've been proposing that my workplace implement Behavior-Driven Development, by writing high-level specifications in a scenario format, and in such a way that one could imagine writing a test for it. I do know that working against testable specifications tends to increase developer productivity, and I can already think of several examples where this would be the case on our own project. However, it's difficult to demonstrate the value of this to the business. This is because we already have a Joint Application Development (JAD) process in place, in which developers, management, user experience and testers all get together to agree on a common set of requirements. So, they ask, why should developers work against the test cases created by testers? Those are for verification and are based on the higher-level specs created by the UX team, which the developers currently work from. This, they say, is sufficient for developers, and there's no need to change how the specs are written. They seem to have a point. What is the actual benefit of BDD/TDD if you already have a test team whose test cases are fully compatible with the higher-level specs currently given to the developers?

    Read the article
