Search Results

Search found 18151 results on 727 pages for 'upside down'.


  • More elegant way to avoid hard coding the format of a CSV file?

    - by dsollen
    I know this is a trivial issue, but I just feel this can be more elegant. I need to write/read data files for my program; let's say they are CSV for now. I can implement the format as I see fit, but I may need to change that format later. The simple thing to do is something like:

        out.write(foo.getValue() + "," + bar.getMinValue() + "," + fi.toString());

    This is easy to write, but is obviously guilty of hard coding and the general 'magic number' issue. The format is hard-coded, figuring out the file format requires parsing the code, and changing the format requires changing multiple methods. I could instead have constants specifying the location where each variable is saved in the CSV file, removing some of the 'magic numbers', and then save/load into an array at the locations specified by the constants:

        int FOO_LOCATION = 0;
        int BAR_MIN_VAL_LOCATION = 1;
        int FI_LOCATION = 2;
        int NUM_ARGUMENTS = 3;

        String[] outputArguments = new String[NUM_ARGUMENTS];
        outputArguments[FOO_LOCATION] = foo.getValue();
        outputArguments[BAR_MIN_VAL_LOCATION] = bar.getMinValue();
        outputArguments[FI_LOCATION] = fi.toString();
        writeAsCSV(outputArguments);

    But this is... extremely verbose and still a bit ugly. It makes it easy to see the format of an existing CSV and to swap the location of variables within the file. However, if I decide to add an extra value to the CSV, I need to not only add a new constant but also modify the read and write methods to add the logic that actually saves/reads the argument from the array; I still have to hunt down every method using these variables and change them by hand! If I use Java enums I can clean this up slightly, but the real issue is still present. Short of some sort of functional programming (and Java's inner classes are too ugly to be considered functional), I have no obvious way of clearly expressing which variable is associated with each constant, short of writing (and maintaining) it in the read/write methods. For instance, I still need to write somewhere that FOO_LOCATION specifies the location of foo.getValue(). It seems as if there should be a prettier, easier-to-maintain manner of approaching this. Incidentally, I'm working in Java at the moment; however, I am interested conceptually in the design approach regardless of language. A library in Java that does all the work for me is definitely welcome (though it may prove more hassle to get permission to add it to the codebase than to just write something by hand quickly), but what I'm really asking is how to write elegant code if you had to do this by hand.
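
    One way to keep the column order, the column count, and the value extraction in a single place is an enum whose constants each know how to produce their own field; the read/write methods then just iterate the enum. A minimal sketch (the Record type and its getters are hypothetical stand-ins for foo/bar/fi):

        interface Record { // hypothetical holder for foo/bar/fi
            String getFooValue();
            int getBarMinValue();
            String getFiString();
        }

        // Each column owns its extractor, so adding a column means adding one constant.
        enum Column {
            FOO     { String extract(Record r) { return r.getFooValue(); } },
            BAR_MIN { String extract(Record r) { return String.valueOf(r.getBarMinValue()); } },
            FI      { String extract(Record r) { return r.getFiString(); } };

            abstract String extract(Record r);
        }

        class CsvWriter {
            static String toCsvLine(Record r) {
                StringBuilder sb = new StringBuilder();
                for (Column c : Column.values()) {
                    if (sb.length() > 0) sb.append(',');
                    sb.append(c.extract(r));
                }
                return sb.toString();
            }
        }

    The column order is the declaration order, Column.values().length replaces NUM_ARGUMENTS, and reading can use Column.ordinal() to index the split fields.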

    Read the article

  • Evolution laggy due to IMAP profile or due to some odd sync issue?

    - by Izzy
    I'm fighting with Evolution. Basically it's working fine -- but it is very slow to react in certain situations.

    Helper questions: Could it be that moving away from Bonobo has something to do with the slow-down? There might be some trouble with the new engine and "asynchronous actions". What can I do about it? Are there, e.g., any configuration files? I want to get the previous "working mood" back. How can I speed this thing up?

    Different scenarios: When sending a mail, the composer window hangs there inactive for a couple of seconds, everything grayed out. Though there is a green check mark saying it's sent, I'm not sure a) why it's still blocking everything and b) whether I could simply close it without "breaking"/"losing" anything. In earlier versions, the composer window closed pretty fast, one could see the message being stored in the local "outbox" until it was sent, and one could immediately continue with the next task. I prefer that behaviour over the current one, where I cannot do anything in the application until the window closes. Switching between modules: coming from mail and switching to the address book takes a couple of seconds; the same goes for switching to the calendar.

    I read about different "possible causes" and tried a few things. I only have 3 local address books, so no networking should be involved here; to make sure, I switched to offline mode and then tried to access the address book -- no noticeable difference. I use 3 Google Calendars; switching to offline mode made a minor difference, but so minor that it could also be "imagination", since one might have expected a difference in this case. According to some reports, disabling the tasks should help. Well, it didn't in my case, as I don't use them regularly (just two local items stored here).

    Maybe I should also mention that I'm using the KDE4 desktop (so no Unity or Gnome, though both are installed on the computer). And I did not have this issue before I updated to 12.04.

    Read the article

  • Multiplayer tile based movement synchronization

    - by Mars
    I have to synchronize the movement of multiple players over the Internet, and I'm trying to figure out the safest way to do that. The game is tile based; you can only move in 4 directions, and every move moves the sprite 32px (over time, of course). Now, if I simply send this move action to the server, which broadcasts it to all players while the walk key is held down, then to keep walking I have to get the next command to the server and to all clients in time, or the movement won't be smooth anymore. I saw this in other games, and it can get ugly pretty quickly, even without lag. So I'm wondering if this is even a viable option. It does seem like a very good method for single player, since it's easy and straightforward (just take the next movement action in time and add it to a list), and you can easily add mouse movement (clicking on some tile) to add a path to a queue that's walked along.

    The other thing that came to my mind was sending the information that someone started moving in some direction, and again once he stops or changes direction, together with the position, so that the sprite will appear at the correct position -- or rather so that the position can be fixed if it's wrong. This should (hopefully) only cause problems if someone really is lagging, in which case it's to be expected. For this to work I'd need some kind of queue, where incoming direction changes and the like are saved, so the sprite knows where to go after the current movement to the next tile is finished. This could actually work, but it sounds overcomplicated -- although it might be the only way to do this without the risk of stuttering. If a stop or direction change is received on the client side, it's saved in a queue and the character keeps moving to the specified coordinates before stopping or changing direction. If the new command comes in too late, there'll be stuttering as well, of course...

    I'm having a hard time deciding on a method, and I couldn't really find any examples for this yet. My main problem is keeping the tile movement smooth, which is why other topics regarding synchronization of pixel-based movement aren't helping much. What is the "standard" way to do this?
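
    The queued approach in the second paragraph can stay quite small on the client. A minimal Java sketch of the per-player buffer (all class and field names are made up for illustration):

        import java.util.ArrayDeque;
        import java.util.Deque;

        enum Direction { UP, DOWN, LEFT, RIGHT }

        /** One network message: where the mover was and what it did next. */
        record MoveEvent(int tileX, int tileY, Direction dir, boolean stopped) {}

        class RemotePlayer {
            private final Deque<MoveEvent> pending = new ArrayDeque<>();

            void onNetworkEvent(MoveEvent e) { pending.add(e); }

            /** Called once the sprite finishes sliding onto its current tile. */
            MoveEvent nextAction() {
                return pending.poll(); // null means: keep doing what we were doing
            }
        }

    Snapping to e.tileX/e.tileY before applying an event corrects any drift, and because corrections only happen on tile boundaries, the 32px slide itself stays smooth.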

    Read the article

  • Reasons to Use a VM For Development

    - by George Stocker
    Background: I work at a start-up company, where one team uses virtual machines to connect to a remote server to do their development, and another team (the team I'm on) uses local IIS/SQL Server 2005/Visual Studio installations to conduct work. Team VM is located about 1000 miles from Team Non-VM, and the servers the VMs run off of are located near Team VM (latency, for those that are wondering, is about 50ms). A person high in the company is pushing for Team Non-VM to use virtual machines for programming, development, and testing. The latter point we agree on -- we want virtual machines to test configurations and various aspects of the web application in a 'clean' state.

    The problem: What we don't agree on is having developers use RDP to connect to a remote desktop that contains Visual Studio, SQL Server, and IIS to do the same development we could do locally on our laptops. I've tried the VM set-up, and besides the color issue, there is a latency issue that is rather noticeable. Not to mention that since we're a start-up, a good number of employees work from home on occasion with our work laptops, and this move would cut off the laptops -- they'd be turned in.

    Reasons to use remote VMs for development (not testing!), as stated by this person:
    - They work for Team VM.
    - They keep the source code "safe".
    - If we want to work from home, we could just use our home PCs.
    - Licenses (I don't know what the argument is, only that it's been used).

    Reasons not to use remote VMs for development, as stated by us:
    - We like working from home; we get a lot done on our own time.
    - We're not going to use our home PCs to do work-related stuff.
    - The latency is noticeable.
    - Support for the VMs (if they go down, or if we need a new VM) takes a while.
    - We don't have administrative privileges on the VMs, and are unable to change settings as needed.

    What I'm looking for from the community is this: what reasons would you give for not using VMs for development? Keep in mind these are remote VMs -- this isn't a VM running on a local desktop; it's using the laptop (or a desktop) as a thin client for a remote VM. Also, on the other side of the coin: is there something we're missing that makes VMs more palatable for development?

    Edit: I think 'safe' is used in terms of corporate espionage, or more precisely the risk that if a laptop gets stolen, the thief would have access to our source code. The former, as we've pointed out, is always going to be a possibility -- companies stop that with litigation; there isn't a technical solution (so far as I can see). The latter point (though I don't know its usefulness in a corporate scenario) is mitigated by Truecrypt'ing the entire volume.

    Read the article

  • Yet another frustum culling question

    - by Christian Frantz
    This one is kinda specific. If I'm to implement frustum culling in my game, each one of my cubes would need a bounding sphere. My first question is: can I make the sphere so close to the edge of the cube that it's still easily clickable for destroying and building? Frustum culling is easily done in XNA, as I've recently learned; I just need to figure out where to place the code for the culling. I'm guessing in my method that draws all my cubes, but I could be wrong. My camera class currently implements a bounding frustum, which is set in the update method like so:

        frustum.Matrix = (view * proj);

    Simple enough, as I can call that when I have a camera object in my class. This works for now, as I only have a camera in my main game class. The problem comes when I decide to move my camera to my player class, but I can worry about that later.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        CurrentContainmentType = CamerasFrustrum.Contains(cubes.CollisionSphere);

    Can it really be as easy as adding those two lines to my foreach loop in my draw method? Or am I missing something bigger here?

    UPDATE: I have added the lines to my draw methods and it works great!! So great, in fact, that just moving a little bit removes the whole map. Many factors could have caused this, so I'll try to break it down.

        cubeBoundingSphere = new BoundingSphere(cubePosition, 0.5f);

    This is in my cube constructor. cubePosition is stored in an array. The vertices that define my cube are factors of 1, i.e. (1,0,1), so the radius should be 0.5 -- at least I think it should. The spheres are created every time a cube is created, of course.

        ContainmentType CurrentContainmentType = ContainmentType.Disjoint;
        foreach (Cube block in cube.cubes)
        {
            CurrentContainmentType = cam.frustum.Contains(cube.cubeBoundingSphere);
            // more code here
            if (CurrentContainmentType != ContainmentType.Disjoint)
            {
                cube.Draw(effect);
            }
        }

    This is within my draw method. Now I know this works because the map disappears -- it's just working wrong. Any idea what I'm doing wrong?
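
    One detail in the loop above that may explain the symptom: the containment test reads cube.cubeBoundingSphere on every iteration instead of the loop variable block, so every cube is tested against a single sphere (and drawn via the container too). A sketch of the loop testing each cube's own sphere, assuming Cube exposes its sphere and Draw the same way:

        foreach (Cube block in cube.cubes)
        {
            // Test the sphere belonging to *this* cube, not the container's.
            if (cam.frustum.Contains(block.cubeBoundingSphere) != ContainmentType.Disjoint)
            {
                block.Draw(effect);
            }
        }

    If one sphere sits near the frustum edge, the original loop would make the entire map pop in and out together, which matches "moving a little bit removes the whole map".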

    Read the article

  • Desperately Need Help: After a mishap, a folder shows 0 files in it

    - by bobby
    I'm hoping some of you guys may be able to shed some light on this scenario. I had an odt document on which I was working, one of many files in a folder among many on an internal hard drive. Some kind of glitch occurred and the document crashed (this could have been some kind of power surge while another hard drive was being unmounted). As I looked into the folders surrounding the folder in which my odt document was stored, they started to show 0 files in them. I immediately switched off the PC and then restarted. Upon the restart, the folders would show the thousands of files I've stored in them, and then within 5 minutes, as I started to back them up, they would freeze and cut off the transfer process. When I tried to open anything on the internal hard drive -- be it an avi film, an mp3, a cbr or a word doc -- it showed blank or wouldn't work. Some folders showed vastly fewer files.

    Eventually, things calmed down. I shut down the PC, checked that the connections were in firmly, gave it a vacuum and restarted the PC. All the files eventually showed up and I started to back them up (I'd bought a hard drive for that anyway but had been distracted and not done it). All folders show except the one which contained the document I was working on at the time of the trouble. Strangely, it was one that had shown itself full on several occasions during the restarts. It shows zero files now; Properties shows zero files and zero space taken by it. Yet when I drop a file into this folder by pasting it in, it disappears too. Opening the folder, there is nothing there. But if I paste that document again, the PC asks if I would like to replace the existing file with the same name (the one I can't see); when I click yes, the file appears. When I exit, the folder shows 0 files. Going back into the folder, it has disappeared again.

    I'm hoping that someone can give me tips to recover the files in the folder; it would be greatly, greatly appreciated. All other films, music, comics and documents show and are fine!

    Read the article

  • Upgrades from Beta or CTP SQL Server Software is NOT Supported

    - by BuckWoody
    As of this writing, SQL Server 2008 R2 has released, and just like every release, I get e-mails and calls from folks with this question: “Can I upgrade from Community Technology Preview (CTP) x or Beta #x or Release Candidate (RC) to the ‘Released to Manufacturing’ (RTM) version?”

    No.

    Right up until the last minute, things are changing in the code -- and you want that to happen. Our internal testing runs right up until the second we lock down for release, and we watch the CTP/RC/Beta reports to make sure there are no show-stoppers, and fix what we find. And it’s not just “big” changes you need to worry about -- a simple change in one line of code can have a massive effect.

    I know, I know -- you’ve possibly upgraded an RC or CTP to the RTM version and it worked “just fine”. But hear this tale: I’ve dealt with someone who faced this exact situation in SQL Server 2008. They upgraded (which is clearly prohibited in the documentation) from a CTP to the RTM version over a year ago. Everything was working fine. But then... one day they had an issue. They couldn’t fix it themselves, we took a look, days went by, and we finally had to call in the big guns for support. Turns out, the upgrade was the problem. So we had to come up with some elaborate schemes to get the system migrated over while they were in production. This was painful for everyone involved. So the answer is still no. Just don’t do it.

    There is one caveat to this story -- if you are a “TAP” customer (you’ll know if you are), we help you move from the CTP products to RTM, but that’s a special case that we track carefully and send along special instructions and tools to help you along. That level of effort isn’t possible on a large scale, so it’s not just a magic tool that we run to upgrade from CTP to RTM. So again, unless you’re a TAP customer, it’s a no-no.

    Read the article

  • 3D collision physics. Response when hitting wall, floor or roof

    - by GlamCasvaluir
    I am having a problem with the most basic physics response when the player collides with a static wall, floor or roof. I have a simple 3D maze; true means solid while false means air:

        bool bMap[100][100][100];

    The player is a sphere. I have keys for moving x++, x--, y++, y-- and diagonally, at speed 0.1f (0.1 * ftime). The player can also jump, and there is gravity pulling the player down. Relative movement is saved in relx, rely and relz. One solid cube on the map is exactly 1.0f in width, height and depth. The problem I have is adjusting the player position on collision with solids: I don't want it to bounce or anything like that, just stop. But if the player is moving diagonally left/up and hits a solid above, the player should continue moving left, sliding along the wall. Before moving the player I save the old player position:

        oxpos = xpos;
        oypos = ypos;
        ozpos = zpos;

        vec3 direction = vec3(relx, rely, relz);
        xpos += direction.x * ftime;
        ypos += direction.y * ftime;
        zpos += direction.z * ftime;

        gx = floor(xpos + 0.25);
        gy = floor(ypos + 0.25);
        gz = floor(zpos + 0.25);

        if (bMap[gx][gy][gz] == true) {
            vec3 normal = vec3(0.0, 0.0, 1.0); // <- Problem.
            vec3 invNormal = vec3(-normal.x, -normal.y, -normal.z) * length(direction * normal);
            vec3 wallDir = direction - invNormal;
            xpos = oxpos + wallDir.x;
            ypos = oypos + wallDir.y;
            zpos = ozpos + wallDir.z;
        }

    The problem with my version is that I do not know how to choose the correct normal for the cube side. I only have the bool array to look at, nothing else. One theory I have is to use the old values of gx, gy and gz, but I do not know how to use them to calculate the correct cube side normal.
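
    The old-cell theory at the end is workable: whichever grid coordinate changed when stepping into the solid cell tells you which face was crossed, and the normal points back the way you came. A minimal sketch along those lines, in the same vec3 dialect as above (ogx, ogy, ogz are hypothetical grid coordinates computed from the old position the same way as gx, gy, gz):

        // Hypothetical helper: derive the face normal from the grid-cell change.
        vec3 cubeNormal(int ogx, int ogy, int ogz, int gx, int gy, int gz) {
            if (gx != ogx) return vec3(ogx < gx ? -1.0f : 1.0f, 0.0f, 0.0f);
            if (gy != ogy) return vec3(0.0f, ogy < gy ? -1.0f : 1.0f, 0.0f);
            return vec3(0.0f, 0.0f, ogz < gz ? -1.0f : 1.0f);
        }

    A diagonal move can change two coordinates in one step, though; testing each axis separately (move x, test, then move y, test, then z) sidesteps that ambiguity and gives the wall-sliding behaviour for free, since the blocked axis is simply not applied.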

    Read the article

  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: virtualization template, perhaps?!?).

    Planning: Includes requirements information, documentation, how-to guides, online documentation installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor -- both very important to ensure your deployment and installation will be successful.

    Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances; adding new nodes to an existing cluster; and performing upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates.

    Maintenance: Here we find options that will assist us in tasks like repairing corrupt installations and removing nodes from a cluster. An interesting option (we should discuss its benefits later in another post) is being able to do an Edition Upgrade -- a feature expansion based on your product installation (Developer to Enterprise, for example).

    Tools: From the System Configuration Checker, to identify readiness for deployment, to reporting on installed features, to running upgrades of existing SSIS packages developed in the 2005 offering up to the 2008 R2 release.

    Resources: Useful and essential links to gather information and guidance.

    Advanced: Here is where it gets interesting. I break this down into 3 main groups:
    - Installation automation: When you install SQL Server, a configuration file gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to make the process work (a minimal example follows at the end of this overview).
    - Cluster configuration for Sysprep: Create images that are cluster-ready; 2 options, start the prep work, and then complete it at the final destination.
    - Stand-alone configuration for Sysprep: Like the clustering counterpart, 2 options, prep and complete, giving you the option to create standard templates for your SQL Server deployments.
    I find it fitting that the 3 topics listed here should (and will) be additional topics I will discuss.

    Options: Very clear and specific about what this means: select the processor type or the installation media root path.
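
    For the installation automation group, the usual way to replay a saved configuration is to hand setup the .ini from a previous run. A minimal example (the path is a placeholder, and the switches should be verified against Books Online for your exact release):

        setup.exe /Q /ConfigurationFile=C:\Temp\ConfigurationFile.ini /IACCEPTSQLSERVERLICENSETERMS

    Note that passwords are not written into the configuration file, so service-account passwords have to be supplied on the command line or at a prompt.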

    Read the article

  • How do you make a bullet ricochet off a vertical wall?

    - by Bagofsheep
    First things first: I am using C# with XNA. My game is top-down and the player can shoot bullets. I've managed to get the bullets to ricochet correctly off horizontal walls. Yet, despite using similar methods (e.g. http://stackoverflow.com/questions/3203952/mirroring-an-angle) and reading other answered questions about this subject, I have not been able to get the bullets to ricochet off a vertical wall correctly. Every method I've tried has failed, and some made ricocheting off a horizontal wall buggy. Here is the collision code that calls the ricochet method:

        // Loop through returned tile rectangles from the quad tree to test for wall
        // collision. If a collision occurs, perform collision logic.
        for (int r = 0; r < returnObjects.Count; r++)
            if (Bullets[i].BoundingRectangle.Intersects(returnObjects[r]))
                Bullets[i].doCollision(returnObjects[r]);

    Now here is the code for the doCollision method:

        public void doCollision(Rectangle surface)
        {
            if (Ricochet)
                doRicochet(surface);
            else
                Trash = true;
        }

    Finally, here is the code for the doRicochet method:

        public void doRicochet(Rectangle surface)
        {
            if (Position.X > surface.Left && Position.X < surface.Right)
            {
                // Mirror the bullet's angle.
                Rotation = -1 * Rotation;
                // Moves the bullet in the direction of its rotation by the given amount.
                moveFaceDirection(Sprite.Width * BulletScale.X);
            }
            else if (Position.Y > surface.Top && Position.Y < surface.Bottom)
            {
            }
        }

    Since I am only dealing with vertical and horizontal walls at the moment, the if statements simply determine whether the object is colliding from the right or left, or from the top or bottom. If the object's X position is within the tile's left and right sides, it must be colliding from the top or bottom, and vice versa. As you can see, the else-if statement is empty and is where the correct code needs to go.
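
    For a rotation measured as an angle from the x-axis, reflecting off a vertical surface negates the velocity's x-component, which maps θ to π − θ. A sketch of the missing branch under that assumption (MathHelper.Pi is XNA's π constant):

        else if (Position.Y > surface.Top && Position.Y < surface.Bottom)
        {
            // Mirror across the vertical wall: (cos θ, sin θ) -> (-cos θ, sin θ).
            Rotation = MathHelper.Pi - Rotation;
            moveFaceDirection(Sprite.Width * BulletScale.X);
        }

    The working horizontal case above (θ → −θ) flips the y-component the same way, so if that one behaves, this one should too; if the angle is measured from the y-axis instead, the two formulas swap.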

    Read the article

  • JavaOne 2013: (Key) Notes of a conference – State of the Java platform and all the roadmaps by Amis

    - by JuergenKress
    Last week’s JavaOne conference provided insights into the roadmap of the Java platform as well as the current state of things in the Java community: the close relationship between Oracle and IBM concerning Java, the (continuing) lack of such a relationship with Google, the support from Microsoft for Java applications on its Azure cloud, and the vibrant developer community -- with over 200 different Java User Groups in many countries of the world. There were no major surprises or stunning announcements. Java EE 7 (released in June) was celebrated, and the progress of Java SE 8 was explained, as well as the progress on Java Embedded and ME. The availability of NetBeans 7.4 RC1 and the JDK 8 Early Adopters release, as well as the open-sourcing of project Avatar, were probably the only real news stories.

    The convergence of JavaFX and Java SE is almost complete; the upcoming alignment of Java SE Embedded and Java ME is the next big consolidation step, one that will lead to a unified platform where developers can use the same skills, development tools and APIs in EE, SE, SE Embedded and ME development. This means that anything that runs on ME will run on SE (Embedded) and EE -- not necessarily the reverse, because not all SE APIs are part of the compact profile or the ME environment. However, the trimming down of the SE libraries and the increased capabilities of devices mean that a pretty rich JVM runs on many devices -- such as JavaFX 8 on the Raspberry Pi.

    The major theme of the conference was the Internet of Things: a world of things that are smart and connected -- devices like sensors, cameras and equipment, from cars, fridges and television sets to printers, security gates and kiosks -- that all run Java and are all capable of sending data over local network connections or directly over the internet. The number of devices that have these capabilities is growing rapidly. This means that the number of places where Java programs can help program the behavior of devices is growing too. It also means that the volume of data generated is expanding, and that we have to find ways to harvest that data, possibly do local pre-processing (filter, aggregate) and channel the data to back-end systems. Terms typically used are edge devices (small, simple, publishing data), gateways (receiving data from many devices; collecting, consolidating and pre-processing it; sending it onwards to the back end, typically using real-time event processing) and enterprise services, which receive the data-turned-information from the gateways to further consolidate, distribute and act upon. A cheap device like the Raspberry Pi is a perfect way for a Java developer to get started with what embedded (device) programming means and how interaction with physical input and output takes place.

    Roadmaps: the overall progress on Java is visualized in an overview graphic in the original post. Read the full article here.

    WebLogic Partner Community: For regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to actually use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron cards. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below.

    We have four SAS HBAs instead of the usual two, because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then, we have a different set of disk trays coming off of the other two SAS cards for non-Exadata workloads, such as regular user file storage. We have 2 InfiniBand cards, which allow us to do a full mesh directly into the back of the nearby production Exadata, specifically for fast backups and restores over IB. You can see a 3rd IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it. The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows a good number of test and development Oracle databases to use these clones without affecting performance of the Exadata at all. We also have a couple FC HBAs, both for NDMP backups to an Oracle/StorageTek tape library and also for FC clients to come in and use some storage on the 7420.

    Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first 5 cards, then the "C" slot, which is the dedicated cluster card (called the Clustron), and then another 5 slots numbered 5-9. Some rules for the slots:
    - Slots 1 & 8 are automatically populated with the two default SAS cards. The only other slots you can add SAS cards to are 2 & 7.
    - Slots 0 and 9 can only hold FC cards -- nothing else. So if you have four SAS cards, you are down to only four more slots for your 10Gig and IB cards. Be sure not to waste one of these slots on an FC card, which can go into 0 or 9 instead.
    - If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4.

    Read the article

  • StringBuffer behavior in LWJGL

    - by Michael Oberlin
    Okay, I've been programming in Java for about ten years, but am entirely new to LWJGL. I have a specific problem whilst attempting to create a text console. I have built a class meant to abstract input polling, which (in theory) captures key presses from the Keyboard object and appends them to a StringBuilder/StringBuffer, then retrieves the completed string after receiving the ENTER key. The problem is, after I trigger the String return (currently with ESCAPE) and attempt to print it to System.out, I consistently get a blank line. I can get an appropriate string length, and I can even sample a single character out of it with complete accuracy, but it never prints the actual string. I could swear that LWJGL slipped some kind of thread-safety trick in while I wasn't looking. Here's my code:

        static volatile StringBuffer command = new StringBuffer();

        @Override
        public void chain(InputPoller poller) {
            this.chain = poller;
        }

        @Override
        public synchronized void poll() {
            // basic testing for modifier keys, to be used later on
            boolean shift = false, alt = false, control = false, superkey = false;

            if (Keyboard.isKeyDown(Keyboard.KEY_LSHIFT) || Keyboard.isKeyDown(Keyboard.KEY_RSHIFT))
                shift = true;
            if (Keyboard.isKeyDown(Keyboard.KEY_LMENU) || Keyboard.isKeyDown(Keyboard.KEY_RMENU))
                alt = true;
            if (Keyboard.isKeyDown(Keyboard.KEY_LCONTROL) || Keyboard.isKeyDown(Keyboard.KEY_RCONTROL))
                control = true;
            if (Keyboard.isKeyDown(Keyboard.KEY_LMETA) || Keyboard.isKeyDown(Keyboard.KEY_RMETA))
                superkey = true;

            while (Keyboard.next())
                if (Keyboard.getEventKeyState()) {
                    command.append(Keyboard.getEventCharacter());
                }

            if (Framework.isConsoleEnabled() && Keyboard.isKeyDown(Keyboard.KEY_ESCAPE)) {
                System.out.println("Escape down");
                System.out.println(command.length() + " characters polled"); // works
                System.out.println(command.toString().length());  // works
                System.out.println(command.toString().charAt(4)); // works
                System.out.println(command.toString().toCharArray()); // blank line!
                System.out.println(command.toString()); // blank line!
                Framework.disableConsole();
            }

            // TODO: Add command construction and console management after that
        }

    Maybe the answer's obvious and I'm just feeling tired, but I need to walk away from this for a while. If anyone sees the issue, please let me know. This machine is running the latest release of Java 7 on Ubuntu 12.04, MATE desktop environment. Many thanks.
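
    One hedged guess at the blank line: every key-down event appends Keyboard.getEventCharacter(), including presses of ENTER, BACKSPACE and ESCAPE itself, so the buffer can accumulate control characters (keys with no printable character tend to yield char 0) that make a terminal render the string invisibly even though length() and charAt() look fine. A sketch of the polling loop that only keeps printable characters:

        while (Keyboard.next())
            if (Keyboard.getEventKeyState()) {
                char c = Keyboard.getEventCharacter();
                // Skip NUL and other control characters from non-printing keys.
                if (!Character.isISOControl(c))
                    command.append(c);
            }

    Dumping the buffer once as numeric values, e.g. for (char c : command.toString().toCharArray()) System.out.print((int) c + " ");, would confirm what actually got appended.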

    Read the article

  • Tricky situation with EF and Include while projecting (Select / SelectMany)

    - by Vincent Grondin
    Originally posted on: http://geekswithblogs.net/vincentgrondin/archive/2014/06/07/tricky-situation-with-ef-and-include-while-projecting-select.aspx

    Hello. The other day I stumbled on a problem I had a while back with EF and Include method calls, and decided this was it: I was going to blog about it. This would sort of pin it inside my head, and maybe help others too! So I was using DbContext and wanted to query a DbSet and include some of its associations and, in the end, do a projection to get a list of Ids...

    At first it seems easy: query your DbSet, call Include right afterward, then code the rest of your statement with the appropriate where clause and then do the projection... Well, it wasn't that easy, as my query required that I code my where clause on some entities a few degrees further down the association chain, and most of these links were "Many". I had to do my projection right away with the SelectMany method. So I did my stuff and tested the query... No associations were loaded... My Include statement was simply ignored! Then I remembered this behavior and how to get it to work: you need to move the Include AFTER your first projection (Select or SelectMany).

    So my sequence became: query the DbSet, do the projection with SelectMany, Include the associations, code the where clause and do the final projection... but it wouldn't compile. It kept saying that it could not find an "Include" method on an IQueryable -- which is perfectly true! I knew this should work, so I went to the definition of the DbSet and saw it inherited DbQuery, and sure enough the Include method was there. So I had to cast my statement, from the start until the end of the first projection, to a DbQuery, then do the Includes and then the rest of my query...

    Bottom line: whenever your Include statement seems to be ignored, you may need to move it further down in your query and cast your statement to whatever class gives you access to Include... Happy coding all!
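
    A minimal sketch of the shape the post describes (entity names are invented; also, in EF 5/6 the lambda-based Include extension lives in System.Data.Entity.QueryableExtensions, so with that using directive it applies to plain IQueryable<T> and the DbQuery cast may not be needed):

        using System.Data.Entity; // brings the Include(...) extension for IQueryable<T>
        using System.Linq;

        // Project first, then Include, then filter, then the final projection.
        var ids = context.Orders
            .SelectMany(o => o.Lines)    // first projection
            .Include(l => l.Product)     // Include placed after the projection
            .Where(l => l.Product.Active)
            .Select(l => l.Id)
            .ToList();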

    Read the article

  • Is multiple domain names and links from same IP causing poor search engine rankings?

    - by John
    I have an ecommerce website which is not doing so well in Google. I am trying to improve this, of course, and am looking at some possible reasons why it isn't doing well.

    The website has four domain names, all of which have been indexed by Google. A few months ago I applied 301 redirects to any requests for two of the domain names, so now it is down to two domain names (one is a .net, the other is a .com.au; the others were .net.au and .com). I prefer to use my main domain name (the .com.au), but one of the names has been around for a long time and has more inbound links. According to a PageRank tool, both are PR2. It is a Classic ASP site and up until recently had a lot of querystring parameters. In the last week or so I added URL rewriting, so there are now no parameters for most pages. I don't do 301 redirects from the old URLs; instead I add the META canonical tag indicating the preferred new URL. At the same time I redesigned the site and improved title tags, META descriptions and H tags, but it hasn't been long enough for Google to index many of these yet.

    I also looked at which pages Google has indexed, and strangely it has some odd pages in the index: a lot of pages which are actually keyword searches (more a bunch of random letters than actual words). What I mean is that it is as if someone had typed something into my search box -- there are no links to pages like this, and the only way of producing them is to type something into the search box. So I added a META robots tag with noindex,nofollow any time I render pages like this.

    Years ago I set up a fake price comparison site which lists all my products and links back to my site. It has a different keyword-rich domain name but is on the same server and same IP address. It's a completely different layout but does have the same product categories and product descriptions (although I have stripped the formatting out of them, so they are not identical except in text). I also have a few blog sites, which again are on the same server/IP, and all carry advertising for the website.

    My questions are:
    1. What should I do with the multiple domains -- just use one, or continue with two or more?
    2. Should I add 301 redirects, not just the META canonical tag?
    3. Any idea about Google indexing my search results pages, and did I do the right thing with the META robots tag?
    4. Is the fake price comparison site likely to be causing problems?
    5. Are all the links to the site from other domain names but the same IP address likely to be causing problems?

    Thanks for any help. Sorry for so many questions in one.
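
    On question 2, a 301 in Classic ASP is only a few lines per retired page or domain. A sketch (the target host is a placeholder for the preferred .com.au domain):

        <%
        ' Permanent redirect from an old URL/domain to the canonical one.
        Response.Status = "301 Moved Permanently"
        Response.AddHeader "Location", "http://www.example.com.au" & Request.ServerVariables("URL")
        Response.End
        %>

    Unlike the canonical tag, which is only a hint, a 301 takes the duplicate URL out of circulation outright, so it is generally the stronger consolidation signal of the two.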

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
    The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2. MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:
    - A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
    - Support for new MySQL 5.7.4 SQL syntax and configuration options.
    - Metadata Locks View, showing the locks connections are blocked on or waiting for.
    - MySQL Fabric cluster connectivity: browse, view status, and connect to any MySQL instance in a Fabric cluster.
    - MS Access Migration Wizard: easily move to MySQL databases.

    Other significant usability improvements were made, aiming to raise productivity for advanced and new users:
    - Direct shortcut buttons to commonly used features in the schema tree.
    - Improved results handling: columns have better auto-sizing and their widths are saved; fonts can also be customized; results can be "pinned" to persist viewing data.
    - A convenient Run SQL Script command to directly execute SQL scripts without loading them first.
    - Database Modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward-engineering and synchronization scripts.
    - Integrated Visual Explain within the result set panel, with Visual Explain drill-down for large to very large explain plans.
    - Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
    - And much more.

    The list of provided binaries was updated, and MySQL Workbench binaries are now available for:
    - Windows 7 or newer
    - Mac OS X Lion or newer
    - Ubuntu 12.04 LTS and Ubuntu 14.04
    - Fedora 20
    - Oracle Linux 6.5
    - Oracle Linux 7
    - Sources for building in other Linux distributions

    For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html

    For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151

    Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04, or sources, from: http://dev.mysql.com/downloads/tools/workbench/

    On behalf of the MySQL Workbench and the MySQL/Oracle RE Team.

    Read the article

  • ATI Radeon HD7000 Series (Laptop) - Switch Mode Between ATI & Intel Integrated GPU. Stuck on Boot Screen On Intel GPU Selection Mode

    - by Monkey Drone
    Laptop specs: HP Pavilion G6-2020SE. GPUs: 1) ATI HD7000 series, 2) Intel integrated. OS installed: Ubuntu 12.04 (64-bit), with the ATI graphics card drivers installed from the AMD website. Note: the graphics card drivers are working fine in 3D mode. It runs a little hot, as it should, since it's a GPU.

    Observation: AMD Catalyst Control Center lets me choose whether I want to run the system on HIGH-END (the ATI GPU) or Intel integrated (better battery life). While I am on the high-end GPU choice, Ubuntu works fine.

    Problem: But when I switch to Intel mode in the AMD CCC and reboot the machine, Ubuntu goes into 'low graphics mode'. The problem is not that it goes into low graphics mode -- that is completely expected, since I am no longer using the ATI GPU but the integrated Intel GPU. The problem starts with the selection of the options: during that screen, I have no mouse on the screen (I even tried plugging in an external USB mouse) and no keyboard functionality. Thus I am completely unable to choose any option and load into Ubuntu. The only thing I can do is switch to a terminal, enable the ATI GPU through the command line, and then Ubuntu works fine again.

    Is it a bug that there is no mouse/keyboard available to me during the startup of Ubuntu when it's launched in low-graphics mode? Any suggestions on how to get past that? My palms are sweating as I write this down, because the ATI GPU is really heating up my laptop. I don't want to boot into Windows or keep it around any longer than necessary. Please advise with help and directions. Sincerely, MonkeyD

    Edit 1: The answer by Celso has helped me switch to Intel, thus giving me sufficient battery power. Kudos to Celso. Now I can at least use my laptop for the time being without having it burn hair off my skin. I am still looking for an answer to my original question of why lightdm is not working properly when I switch to the Intel GPU using the ATI HD7000 series official drivers provided by AMD.
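
    For the terminal workaround, Catalyst's PowerXpress switching is normally driven by aticonfig. A sketch of the usual invocations (the flag names are from AMD's driver; verify them against aticonfig --help for your Catalyst version):

        sudo aticonfig --px-igpu    # power-saving: use the integrated Intel GPU
        sudo aticonfig --px-dgpu    # performance: use the discrete ATI GPU
        # then restart the session for the change to take effect, e.g.:
        sudo service lightdm restart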

    Read the article

  • Curating projects of deceased friends

    - by Ant
    A very good friend of mine, and an avid programmer, recently passed away. He left nearly 40 projects on BitBucket. Most of them are public, but a few are marked as private. I've decided to take on curation duties for the projects rather than leave his work to disappear. If you have been in the same situation, what did you do? Did you open-source everything? Continue development? Delete it all? I'm very interested to hear other people's experiences.

    There are a few reasons why some of the projects are marked as private (private projects on BitBucket are visible only to invited users and the original creator):
    - One of them is an iOS web app that was free in the App Store. I've had to remove the app from the store, as I'm shutting down his websites as a favour to his widow. However, I've already made the app public under the GPL v3 (he was a big GPL supporter).
    - One of them contains proprietary code. It can't be open-sourced.
    - Others are very much works in progress. I don't know if he intended to make them into hosted, paid services or if he wanted to give the code away under an open-source licence when they were finished.

    Here's a list of the private projects:
    - Some kind of living cell simulator that uses SBML along with Runge-Kutta and Euler algorithms to do... something. There's a fair amount of code here, but I don't know what it does or how far along it is. No docs.
    - An accountancy application; it seems to have a solid DB design behind it, but there's little code on top of that.
    - A website whose purpose is to suggest good restaurants. Built on Yii. It seems to have a lot of code, but I'd need to set up a WAMP stack to see how far along it is.
    - A website intended to host memorials to people who suffered from the same problem he did. Built on Joomla. I'm not sure how much of the code is just Joomla and how much is custom; again, I'd need to get Joomla running to find out.

    I'd just introduced him to Mercurial and BitBucket. All of the private projects are single commits of codebases he either wasn't using version control with or was previously keeping in SVN. I don't have the SVN repositories, so I can't see the commit logs.

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: I'm attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small. My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            public class Character extends Sprite {
                public var _strength:int; // etc.

                public function Character(_class:Object) {
                    _strength = _class._strength; // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength: 50 // etc.
                };
                // etc.
            }
        }

        // Some other class in which characters/monsters are created:
        // create a new instance of Character
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure that would be efficient or even make sense, considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution). But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method?

    As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. It seems to me that if I used, say, XML or JSON, I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All around awesome tool, but not the only gadget in your toolbox.

    I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool 'goes away'? Does the business stop? No -- you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Selects, or 9 other different mechanisms. Sometimes change brings opportunity for improvement in the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the 'old tool' doesn't always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a 'proper' server process with SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities.

    Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but is also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you have in mind starting out, but I had no idea I'd be a Product Manager when I started college either.

    What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?

    Read the article

  • Simulate 'shock absorption' with tire rubber in PhysX (2.8.x)

    - by Mungoid
    This is a kinda tricky question, and I fear there is no easy enough solution, but I figured I'd hit SE up before giving up on it and just doing what I can. The machine I am working on has no suspension, shocks or springs of any sort in real life, so you would think that when it drives over bumps it would shake like crazy -- but because its tires (6 of them) are quite large, they seem to absorb a lot of shock from the bumps. Part of this is because the machine is around 30k lbs and it just smashes/compresses any bumps in the ground down (this is another issue I'm still working on), and the other part is that the tires seem to have a lot of flex to them, with a lot of air as well.

    So my current task is to simulate shock absorption in PhysX without visibly separating the tires from the spindle/axle. I have been messing with all kinds of NxMaterial, NxSpring, joints, etc. and have had no luck getting this to work. The main problem is that the spindle attached to the tire is directly in the center, and the axle is basically solidly attached to the chassis, so if I give it any spring or suspension travel, the spindle on the tires will move upwards or downwards, looking very odd because it is no longer in the center of the tire. I tried giving it a higher restitution, but that just makes it bouncy without any shock absorption.

    Another avenue I am exploring is to actively smooth the terrain in front of the tires, so that before the machine hits a bumpy patch, that patch is smoothed and it doesn't bounce. The only issue with this is that it is pretty expensive to do with 6 tires, high tessellation of the terrain and other complex things going on at the same time in this simulation. I am still working on this, but I am hoping to mix and match a few different approaches to get the best possible outcome. This is a bit of a complex issue, so I'm not expecting anyone to have a definitive answer -- just hoping someone may think of something I haven't =-)

    Side note: Yes, I know PhysX 2.8.x is quite outdated, but we have to stick with it for this implementation. We are in the process of moving to another physics engine, but it is out of scope to apply that engine to this project.

    Read the article

  • 2D collision resolving

    - by Philippe Paré
    I've just worked out an AABB collision algorithm for my 2D game, and I was very satisfied until I found out it only works properly with movements of 1 in X and 1 in Y... Here it is:

        public bool Intersects(Rectanglef rectangle)
        {
            return this.Left < rectangle.Right && this.Right > rectangle.Left &&
                   this.Top < rectangle.Bottom && this.Bottom > rectangle.Top;
        }

        public bool IntersectsAny(params Rectanglef[] rectangles)
        {
            for (int i = 0; i < rectangles.Length; i++)
            {
                if (this.Left < rectangles[i].Right && this.Right > rectangles[i].Left &&
                    this.Top < rectangles[i].Bottom && this.Bottom > rectangles[i].Top)
                    return true;
            }
            return false;
        }

    And here is how I use it in the update function of my player:

        public void Update(GameTime gameTime)
        {
            Rectanglef nextPosX = new Rectanglef(AABB.X, AABB.Y, AABB.Width, AABB.Height);
            Rectanglef nextPosY;

            if (Input.Key(Key.Left)) nextPosX.X--;
            if (Input.Key(Key.Right)) nextPosX.X++;

            bool xFree = !nextPosX.IntersectsAny(Boxes.ToArray());
            if (xFree)
                nextPosY = new Rectanglef(nextPosX.X, nextPosX.Y, nextPosX.Width, nextPosX.Height);
            else
                nextPosY = new Rectanglef(AABB.X, AABB.Y, AABB.Width, AABB.Height);

            if (Input.Key(Key.Up)) nextPosY.Y--;
            if (Input.Key(Key.Down)) nextPosY.Y++;

            bool yFree = !nextPosY.IntersectsAny(Boxes.ToArray());
            if (yFree)
                AABB = nextPosY;
            else if (xFree)
                AABB = nextPosX;
        }

    What I'm having trouble with is a system where I can give float values to my movement and get smooth acceleration. Do I have to retrieve the collision rectangle (the rectangle created by the two others colliding)? Or should I build some sort of vector and travel along this axis until I reach the collision? Thanks a lot!
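
    A hedged sketch of the second idea, built on the same Rectanglef/IntersectsAny API (and assuming Rectanglef is a value type like XNA's Rectangle, so assignment copies it): move each axis by a float amount, and when the full step is blocked, creep toward the wall in small increments so the box ends up flush against it instead of stopping a whole unit away. The epsilon is arbitrary:

        void MoveAxis(float amount, bool horizontal, Rectanglef[] solids)
        {
            if (amount == 0f) return;

            // Try the whole step first.
            Rectanglef probe = AABB;
            if (horizontal) probe.X += amount; else probe.Y += amount;
            if (!probe.IntersectsAny(solids)) { AABB = probe; return; }

            // Blocked: advance in epsilon steps until the next step would collide.
            float eps = 0.01f * Math.Sign(amount);
            float moved = 0f;
            while (Math.Abs(moved + eps) <= Math.Abs(amount))
            {
                probe = AABB;
                if (horizontal) probe.X += eps; else probe.Y += eps;
                if (probe.IntersectsAny(solids)) break;
                AABB = probe;
                moved += eps;
            }
        }

    Calling MoveAxis(velocity.X, true, ...) and then MoveAxis(velocity.Y, false, ...) keeps the sliding behaviour of the original two-phase test, and velocity can now accelerate in fractions per frame.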

    Read the article

  • How to implement a component based system for items in a web game.

    - by Landstander
    Having read several other questions and answers on using a component-based system to define items, I want to use one for the items and spells in a web game written in PHP. I'm just stuck on the implementation. I'm going to use a DB schema suggested in this series (part 5 describes the schema): http://t-machine.org/index.php/2007/09/03/entity-systems-are-the-future-of-mmog-development-part-1/

    This means I'll have an items table with generic item properties, a table listing all of the components for an item, and finally records in each component table used to make up the item. Assuming I can select the first two together in a single query, I'm still going to do N queries, one for each component type. I'm kind of fine with this, because I can cache the data into memcache and check there first before doing any queries. I'll need to build up the items on every request they are used in, so the implementation needs to be on the lean side even when they're pulled from memcache.

    But right there is where my confidence in implementing a component system for my items ends. I figure I'd need to bring attributes and behaviors into the container from each component it uses. I'm just not sure how to do that effectively without ending up writing a lot of specialized code to deal with each component. For example, an AttackComponent might need to know how to filter targets inside of a battle context and also maybe provide an attack behavior. That same item might also have a UsableComponent, which allows the item to be used and apply some effect onto a different set of targets, filtered differently, from the same battle context. Then again, not every part of an item is an active part; an AttributeBonusComponent might need to kick in only when the item is in an equipped state, or when displaying the item details page.

    Ultimately, how should I bring all of the components together into the container so that when I use an item as a weapon I get the correct list of targets? How do I know when a weapon can also be used as an item? And how do I apply the bonuses the item provides to a character object? I feel like I've gone too far down the rabbit hole and I can't grasp the simple solution in front of me. (If that makes any sense at all.) Likewise, if I were to implement the best answer from here, I feel like I'd have a lot of the same questions: How to model multiple "uses" (e.g. weapon) for usable-inventory/object/items (e.g. katana) within a relational database.
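
    One way to avoid per-component special cases is to have the container expose a capability lookup, so calling code asks for an interface rather than a concrete component. A minimal PHP sketch (all names are invented for illustration):

        <?php
        interface TargetFilter {
            /** @return array targets this component considers valid */
            public function filterTargets(array $battleContext);
        }

        class AttackComponent implements TargetFilter {
            public function filterTargets(array $battleContext) {
                return $battleContext['enemies']; // e.g. weapons target enemies
            }
        }

        class Item {
            private $components = array();

            public function addComponent($component) {
                $this->components[] = $component;
            }

            /** All attached components implementing the given interface/class name. */
            public function capabilities($interface) {
                $matches = array();
                foreach ($this->components as $c) {
                    if ($c instanceof $interface) { $matches[] = $c; }
                }
                return $matches;
            }
        }

        // Usage: battle code never checks item types, only capabilities.
        // foreach ($item->capabilities('TargetFilter') as $filter) {
        //     $targets = $filter->filterTargets($context);
        // }

    A UsableComponent would implement the same interface with a different filter, which is how one item can target one way as a weapon and another way as a usable; "can this weapon also be used as an item?" becomes a check that capabilities() returns anything.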

    Read the article

  • Damaged external NTFS hard disk

    - by Thanos
    A few days ago, I used my external hard disk (a 2TB Seagate) in order to transfer some files on Windows Vista. During that, I noticed some malfunctions on my system (it was running too slow, Windows Explorer crashed). When Explorer crashed, file transportation stopped. I was afraid, but I tried to access my files and it seemed to be working. I tried to open a movie (from the external disk) but it couldn't load. I thought of restarting, but this took sooo long... So I unplugged the hard disk and at that time it managed to shut down. I logged on to Windows Vista but the hard disk couldn't be mounted. I plugged it but nothing happened. I unplugged it and I heard this specific sound that notifies that something has been unplugged. I thought of logging to Ubuntu 10.04 and see what I can do. I plugged the hard disk, but I couldn't see it. I opened GParted but I couldn't see it either. I tried with Disc Utility and there it was! I tried to mount it but a got an error message stating that an error occured with Windows, there is a file (0,0) that has problem or something like that. It suggested to log into Windows and run chkdsk /f and reboot twice. The thing is that I am somehow afraid to do so because I don't really know the impact on that. Plus I don't trust doing even a check on Vista... I finally risked it and I typed chkdsk/f on a cmd. I cannot, however, actually run it because I don't have admin privileges. So from search I found chkdsk, I right cliked and selected “run as administrator”. It run but I got a message like NTFS file system. It should check at the coming restart. At that point I am mistaken. I thought that f meant F but this is not the case here... Does anyone have any suggestions and advice?

    Read the article

  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family’s Mac Mini.  It’s the early 2009 model and I hadn’t done it since we got it in March of 2009.  Even worse, I had done the import data step (or whatever Apple calls it) which brought over all of the data files and apps from our previous Mac.  AND that install goes back to before 2005, as far as I can remember.  SO, to say that “cruft” had built up in the operating system, is probably a bit of an understatement. The rebuild went pretty smoothly, especially since I had a couple of spare hard drives.  I hooked up a spare USB drive and formatted it for use with the Mac.  I then used Carbon Copy to clone the internal hard drive onto the USB drive.  (Carbon Copy is a great little app that I used several years ago and I was happy to see it was not only still around, but updated as well.) Once I had my backup, I shut down the Mac and replaced the internal hard drive.  I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs) so I wasn’t using it anymore.  The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original’s capacity and is also faster.  I’ll have to keep an eye on the temperature, since that 7200 drive will run hotter. Opening the Mac Mini is not for the easily intimidated!  That cool little case is quite the pain to open.  Luckily, OWC put a video together here.  After replacing the drive, I then installed a clean copy of OS 10.5 using the DVDs that came with the Mac.  After the OS, it was time to reinstall the apps.  I downloaded some of the freeware, just to make sure I had the latest versions.  For the rest, I just copied from the backup cloned drive to the new drive.  (I love the way most Mac apps are written – with almost everything contained within a “package” that I can just copy from one drive to another.  MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!) The whole process took longer than I would have preferred, but it was long overdue.  It definitely “feels” faster, especially boot time and application launches.

    Read the article
