Search Results

Search found 3749 results on 150 pages for 'updating'.

Page 81/150

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can convert v3 to v4 to v5 to v6 to v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
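
    A minimal sketch of that chained-migration idea, in Java for illustration (the Migration interface and all names here are hypothetical, not from the question):

        import java.util.ArrayList;
        import java.util.List;

        // Each migration lifts one version of the raw data to the next;
        // the chain walks from the stored version up to the current one.
        interface Migration {
            int fromVersion();          // version this migration accepts
            byte[] apply(byte[] data);  // returns data at fromVersion() + 1
        }

        class MigrationChain {
            private final List<Migration> steps = new ArrayList<>();

            void register(Migration m) {
                steps.add(m);
            }

            // Upgrade data tagged with storedVersion until it reaches targetVersion.
            byte[] upgrade(byte[] data, int storedVersion, int targetVersion) {
                int v = storedVersion;
                while (v < targetVersion) {
                    final int current = v;
                    Migration step = steps.stream()
                            .filter(m -> m.fromVersion() == current)
                            .findFirst()
                            .orElseThrow(() -> new IllegalStateException(
                                    "No migration from v" + current));
                    data = step.apply(data);
                    v++;
                }
                return data;
            }
        }

    Because each step in this sketch operates on the stored representation rather than on deserialized objects, old class definitions don't need to stay around, which speaks to the concern above.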

    Read the article

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems as described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As these threads update themselves, they produce tasks and put them into a public storage area. Once all the threads finish their update, each thread goes and copies the tasks meant for it, one by one. After all the threads finish their task copying, we make the threads process those tasks and update simultaneously again, as described before. So that's the general idea of the task-schedule part of our engine. Ok, the task-schedule part works well, but here's the problem. Taking Camera as the simplest example:

        local oldPos = camera:getPosition() -- ( 0, 0, 0 )
        camera:setPosition( 1, 1, 1 ) -- Won't work now, because the render thread will process the task at the beginning of the next frame
        local newPos = camera:getPosition() -- Still ( 0, 0, 0 )

    So that's the problem: if you intend to change a property of an object owned by another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. So, is there a way to solve this problem? Or are we building the task-schedule part in the wrong way? Thanks for your answers :)
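
    One common pattern for this (a hedged sketch of the idea, not necessarily how your engine should do it): keep a script-side shadow copy of the property, update it synchronously in the setter, and enqueue the render-thread task at the same time, so reads on the script thread see the pending value immediately. In Java for illustration; ScriptCamera and all names are hypothetical:

        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        // The script thread owns this shadow state; the render thread only
        // ever sees the queued tasks it drains at the start of its frame.
        class ScriptCamera {
            private final Queue<Runnable> renderQueue = new ConcurrentLinkedQueue<>();
            private float x, y, z; // script-side shadow of the render-side camera

            void setPosition(float nx, float ny, float nz) {
                // 1) Update the shadow immediately so reads on the script
                //    thread see the new value in the same frame.
                x = nx; y = ny; z = nz;
                // 2) Enqueue the real change for the render thread.
                renderQueue.add(() -> {
                    // render-thread code would move the actual camera here
                });
            }

            float[] getPosition() {
                return new float[] { x, y, z }; // reflects pending writes right away
            }

            // Called by the render thread once per frame to apply pending changes.
            Runnable pollTask() {
                return renderQueue.poll();
            }
        }

    With this, getPosition() returns (1, 1, 1) right after setPosition(1, 1, 1) on the script thread, even though the render-side camera only moves when the task is processed next frame.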

    Read the article

  • Banshee crashes consistently - is there a fix?

    - by user36334
    Since updating to Ubuntu 11.10 I've had trouble with Banshee. In particular, when I run it I find that it crashes within an hour without fail. I get the following:

        Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NullReferenceException: Object reference not set to an instance of an object
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.DisposeResolver () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.Dispose () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.ServiceBrowser.OnItemRemove (Int32 interface, Protocol protocol, System.String name, System.String type, System.String domain, LookupResultFlags flags) [0x00000] in <filename unknown>:0
          at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.MulticastDelegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvoke (System.Object[] args) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.HandleSignal (NDesk.DBus.Message msg) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.DispatchSignals () [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.Iterate () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.DBusManager.IterateThread (System.Object o) [0x00000] in <filename unknown>:0

    Does anyone else also have this problem?

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service-oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a "Code and Go" software development style. They developed only for current problems and did not really look at how the company could leverage some of the code we were developing across the entire enterprise system. The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on SOA-related projects, because many of the standards and policies implemented by enterprise IT governing boards were written for development under the "Code and Go" paradigm and do not take into account the idiosyncrasies of SOA/integration-based development. As IT governance moves forward, its focus should aim more for a "Develop to Integrate" philosophy versus "Code and Go". Examples of the "Develop to Integrate" philosophy:

    - Defining preferred data transfer formats (XML vs. JSON), and when to use each
    - Updating security best practices for exposing public services, based on existing standard security policies
    - Defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise

    Read the article

  • ArchBeat Link-o-Rama for 2012-05-30

    - by Bob Rhubart
    Roll Your Own Solaris Blogroll | Larry Wake (blogs.oracle.com)
    Larry Wake shares an easy way to find bloggers who write about various aspects of Oracle Solaris.

    Updating metadata in a WebCenter Content Presenter template | Yannick Ongena (yonaweb.be)
    Oracle ACE Yannick Ongena explains "how we can add a link to the content presenter that will open a popup where we can update the metadata of the content."

    Enable Content editing of Iterative components | Stefan Krantz (blogs.oracle.com)
    "The key aspect of this architectural solution," explains Stefan Krantz, "is to support a data type that allows for grouping of editable elements like Plain text, Images and Rich Text, each group of elements must support a infinite amount of grouped repetitions (Rows)."

    Call for Nominations: Oracle Fusion Middleware Innovation Awards 2012 - Win a free pass to #OOW12 (www.oracle.com)
    These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Either a customer, their partner, or an Oracle representative can submit the nomination form on behalf of the customer. Submission deadline: July 17. Winners receive a free pass to Oracle OpenWorld 2012 in San Francisco.

    ODTUG Kscope12 - June 24-28 - San Antonio, TX (kscope12.com)
    Kscope12, sponsored by ODTUG, is your home for Application Express, BI and Oracle EPM, Database Development, Fusion Middleware, and MySQL training by the best of the best!

    Thought for the Day
    "CIOs and the IT department cannot stop disruptive technology changes any more than the business managers can. Business managers have to, and are, embracing the new technologies because if they don't, they, and their business units, will become irrelevant and disappear under the competitive conditions of the market." — Andy Mulholland
    Source: Capgemini CTO Blog

    Read the article

  • Java applet game design no keyboard focus

    - by Sri Harsha Chilakapati
    THIS IS PROBABLY THE WRONG PLACE. POSTED IT THERE (STACKOVERFLOW). I'm making an applet game and it is rendering, the game loop is running, the animations are updating, but the keyboard input is not working. Here's an SSCCE.

        public class Game extends JApplet implements Runnable {

            public void init() {
                // Initialize the game when called by browser
                setFocusable(true);
                requestFocus();
                requestFocusInWindow(); // Always returning false
                GInput.install(this);   // Install the input manager for this class
                new Thread(this).start();
            }

            public void run() {
                startGameLoop();
            }
        }

    And here's the GInput class.

        public class GInput implements KeyListener {

            public static void install(Component c) {
                new GInput(c);
            }

            public GInput(Component c) {
                c.addKeyListener(this);
            }

            public void keyPressed(KeyEvent e) {
                System.out.println("A key has been pressed");
            }

            ......
        }

    This is my GInput class. When run as an applet, it doesn't work, and when I add the Game class to a frame, it works properly. Thanks
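
    One plausible cause (hedged, since browsers differ in how they hand focus to applets): in init() the applet is often not yet displayable, so requestFocusInWindow() fails and returns false, exactly as observed. A common workaround is to re-request focus once the applet is actually showing, and to grab it back on mouse click because the browser page can steal it at any time. A sketch of that idea, with hypothetical names:

        import java.awt.event.MouseAdapter;
        import java.awt.event.MouseEvent;
        import javax.swing.JApplet;
        import javax.swing.SwingUtilities;

        // Sketch: retry the focus request once the applet is showing,
        // and grab focus back on click.
        public class FocusGrabbingApplet extends JApplet {

            @Override
            public void init() {
                setFocusable(true);
                addMouseListener(new MouseAdapter() {
                    @Override
                    public void mousePressed(MouseEvent e) {
                        requestFocusInWindow(); // reclaim focus from the page
                    }
                });
            }

            @Override
            public void start() {
                // By start() the applet is displayable, so this is more
                // likely to succeed than the request made in init().
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        requestFocusInWindow();
                    }
                });
            }
        }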

    Read the article

  • Suspend/resume issue hp envy m6

    - by aikeru
    :) I just installed Ubuntu 13.10 on my HP envy m6. When I press the "power" button and select to suspend, it looks like it goes to sleep (power light flashing, as expected). When I press the power button to wake it (or anything else), it stops flashing and goes solid (as it should) but the screen is black. I tried CTRL+ALT+F1 but nothing seems to happen. Is there something I can do to get Ubuntu to resume from suspend? EDIT: I tried getting any updates for Ubuntu as well as updating my BIOS. Neither seems to have any effect. EDIT: Tried the 2nd script from Ubuntu 12.04 LTS suspend fix and it seemed to have no effect. Tried the 1st script and now I can't seem to suspend at all - Ubuntu seems to try to suspend and then immediately goes to the password prompt. With some combination of things I seem to be able to get a new crash, though, involving wpa_supplicant. I allowed it to submit feedback. My WiFi seems to be working fine (I assume this is what WPA is). EDIT: I know this is 'askubuntu', but I did try latest Linux Mint / Cinnamon. Results were exactly the same (black screen, power light solid after waking up - just as before, I can make it reboot by mashing CTRL+ALT+DEL but CTRL+ALT+F1/etc. doesn't work). Trying Ubuntu 12.04 next. EDIT: Ubuntu 12.04 WORKS! At least it seems to with a quick test. Came right back up after suspend. If I don't have any other problems and no answers are suggested, I'll make my own answer and accept it. Maybe someone can use this information to help troubleshoot the real issue. Would be nice to be on the latest version... but suspend would've been a deal-breaker for me. Hope this info helps someone.

    Read the article

  • Session serialization in JavaEE environment

    - by Ionut
    Please consider the following scenario: We are working on a JavaEE project for which scalability is starting to become an issue. Up until now we were able to scale up, but this is no longer an option. Therefore we need to consider scaling out and preparing the app for a clustered environment. Our main concern right now is serializing the user sessions. Sadly, we did not consider the issue from the beginning, and we are encountering the following exception:

        java.io.WriteAbortedException: writing aborted; java.io.NotSerializableException: org.apache.catalina.session.StandardSessionFacade

    I did some research, and this exception is thrown because there are objects stored on the session that do not implement the Serializable interface. Considering that all over the app there are quite a few custom objects stored on the session without implementing this interface, it would require a lot of tedious work and dedication to fix all these class declarations. We will fix all of them, but the main concern is that, in the future, some developer may add a non-Serializable object to the session and break the session serialization and replication over multiple nodes. As a quick overview of the project, we are developing using a home-grown framework based on Struts 1 with the Servlet 3.0 API. This means that at this point we are using the standard session.getAttribute() and session.setAttribute() to work with the session, and the session handling is scattered all over the code base. Besides updating the classes of the objects stored on the session and making sure that they implement the Serializable interface, what other precautions should we take in order to ensure a reliable session replication capability on the application layer? I know it is a little bit late to consider this, but what would be the best practice in this case? Furthermore, are there any other issues we should consider regarding this transition? Thank you in advance!
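
    One precaution that addresses the "future developer" worry (a sketch under the Servlet 3.0 API mentioned above; the fail-fast behavior and the class name are assumptions, not project code): register an HttpSessionAttributeListener that rejects any non-Serializable attribute the moment it is put on the session, so the problem surfaces in development rather than during replication.

        import java.io.Serializable;
        import javax.servlet.annotation.WebListener;
        import javax.servlet.http.HttpSessionAttributeListener;
        import javax.servlet.http.HttpSessionBindingEvent;

        // Fail fast whenever a non-Serializable value lands on the session.
        @WebListener
        public class SerializableSessionGuard implements HttpSessionAttributeListener {

            @Override
            public void attributeAdded(HttpSessionBindingEvent event) {
                check(event);
            }

            @Override
            public void attributeReplaced(HttpSessionBindingEvent event) {
                check(event);
            }

            @Override
            public void attributeRemoved(HttpSessionBindingEvent event) {
                // nothing to validate on removal
            }

            private void check(HttpSessionBindingEvent event) {
                Object value = event.getValue();
                if (value != null && !(value instanceof Serializable)) {
                    throw new IllegalArgumentException(
                        "Session attribute '" + event.getName() + "' of type "
                        + value.getClass().getName() + " is not Serializable");
                }
            }
        }

    Note that an instanceof check doesn't prove the whole object graph serializes, but it catches the obvious regressions early; in production you might log instead of throw.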

    Read the article

  • How can I resolve component types in a way that supports adding new types relatively easily?

    - by John
    I am trying to build an Entity Component System for an interactive application developed using C++ and OpenGL. My question is quite simple. In my GameObject class I have a collection of Components. I can add and retrieve components.

        class GameObject : public Object {
        public:
            GameObject(std::string objectName);
            ~GameObject(void);

            Component * AddComponent(std::string name);
            Component * AddComponent(Component componentType);
            Component * GetComponent(std::string TypeName);
            Component * GetComponent(<Component Type Here>);

        private:
            std::map<std::string, Component*> m_components;
        };

    I will have a collection of components that inherit from the base Component class. So if I have a MeshRenderer component, I would like to do the following:

        GameObject * warship = new GameObject("myLovelyWarship");
        MeshRenderer * meshRenderer = warship->AddComponent(MeshRenderer);

    or possibly

        MeshRenderer * meshRenderer = warship->AddComponent("MeshRenderer");

    I could make a Component Factory like this:

        class ComponentFactory {
        public:
            static Component * CreateComponent(const std::string &compTyp) {
                if (compTyp == "MeshRenderer") return new MeshRenderer;
                if (compTyp == "Collider") return new Collider;
                return NULL;
            }
        };

    However, I feel like I should not have to keep updating the Component Factory every time I want to create a new custom Component, but it is an option. Is there a more proper way to add and retrieve these components? Are standard templates another solution?
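
    The usual way to avoid a hand-maintained factory is to key components by their type rather than by a string. A sketch of that idea in Java, where the class object itself is the map key (in C++ the analogous trick uses templates with typeid/std::type_index); the GameObject and Component names follow the question, the rest is illustrative:

        import java.util.HashMap;
        import java.util.Map;

        class Component { }

        class MeshRenderer extends Component { }

        class GameObject {
            // One component per concrete type, keyed by the type itself.
            private final Map<Class<? extends Component>, Component> components =
                    new HashMap<>();

            <T extends Component> T addComponent(T component) {
                components.put(component.getClass(), component);
                return component;
            }

            <T extends Component> T getComponent(Class<T> type) {
                return type.cast(components.get(type)); // null if absent
            }
        }

    Usage then needs no factory updates when a new component type appears:

        GameObject warship = new GameObject();
        MeshRenderer mr = warship.addComponent(new MeshRenderer());
        MeshRenderer same = warship.getComponent(MeshRenderer.class);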

    Read the article

  • How to keep your third party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though usually version 1.2.3 is a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB/file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches:

    - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
    - Update frequently in order to keep changes small. Since you'll have to update some day in any case, it's better to update often so that you notice any problems early, when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
    - Something in between. Is there a sweet spot?

    Read the article

  • Starting out with OpenGL when most tutorials are out of date

    - by AUTO
    I'm sure there are already a bunch of questions like this, but the constant updating of the OpenGL library throws them all away, and in a month or two the answers here will be worthless again. I am ready to start programming in OpenGL using C++. I've got a working compiler (DevCpp; do NOT ask me to switch to VC++, and don't ask me why). Now I'm just looking for a solid tutorial on how to program with OpenGL. My assistant found the tutorial provided by NeHe Productions, but as I've come to find out, it's WAY OUT OF DATE! (although I did pull together a basic window to support an OpenGL canvas) Then I went online and found the OpenGL SuperBible, which apparently uses freeglut? But what I'd like to know is whether or not the SuperBible 5th edition is still up to date. The freeglut suggestion I found said the latest version was 2.6.0, but now it's 2.8.0! Is the OpenGL SuperBible still a good and fairly up-to-date place to start? Is there a better place to go to learn OpenGL? Am I allowed to simply store freeglut in the DevCpp include directory (maybe in GL), or is there some important procedure? Are there any comments or suggestions that I didn't think to ask about, since I'm only just beginning? @dreta cleared some things up for me, so now I have a better idea of what to ask: I think I'd like to start out with OpenGL using a wrapper library instead of accessing OpenGL directly. I just think that, for a beginner, it would be easier to program and get good results while I don't yet have to understand all the grimy details (as @stephelton mentioned). The problem is, I can't find any library that doesn't have undefined references to no-longer-supported functions. Freeglut sounds operational, but it still uses GLU. Does anyone know what I can do? Also, I tried compiling the first SuperBible's source, but I got errors since GLAPI is not being defined as a type, the error originating in the GLU library. I'd like to use the SuperBible, but I don't know how to fix this.

    Read the article

  • Misconceptions about purely functional languages?

    - by Giorgio
    I often encounter the following statements/arguments: (1) Pure functional programming languages do not allow side effects (and are therefore of little use in practice, because any useful program does have side effects, e.g. when it interacts with the external world). (2) Pure functional programming languages do not allow you to write a program that maintains state (which makes programming very awkward, because in many applications you do need state). I am not an expert in functional languages, but here is what I have understood about these topics so far. Regarding point 1, you can interact with the environment in purely functional languages, but you have to explicitly mark the code (functions) that introduces these effects (e.g. in Haskell by means of monadic types). Also, AFAIK computing by side effects (destructively updating data) should also be possible (using monadic types?) but is not the preferred way of working. Regarding point 2, AFAIK you can represent state by threading values through several computation steps (in Haskell, again, using monadic types), but I have no practical experience doing this and my understanding is rather vague. So, are the two statements above correct in any sense, or are they just misconceptions about purely functional languages? If they are misconceptions, how did they come about? Could you write a (possibly small) code snippet illustrating the Haskell idiomatic way to (1) implement side effects and (2) implement a computation with state?

    Read the article

  • Unity 3D doesn't work on ubuntu 12.04 LTS in virtual box hosted by Windows 7 64bit

    - by Mario
    I couldn't google up a solution after some time, so I'm asking here. I have installed Ubuntu 12.04 LTS in VirtualBox, hosted by Windows 7 64-bit. My 3D simply stopped working after some kernel headers update a few months ago. Just like that. I don't remember which version it was. In the meantime there have been 2 or 3 new releases of VirtualBox, which I have installed. Every time, I update the VirtualBox Guest Additions to the newest version. 3D in Ubuntu still doesn't work.

        root@pjadmin-VirtualBox:~# /usr/lib/nux/unity_support_test -p
        OpenGL vendor string:   VMware, Inc.
        OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300)
        OpenGL version string:  2.1 Mesa 8.0.2

        Not software rendered:    no
        Not blacklisted:          yes
        GLX fbconfig:             yes
        GLX texture from pixmap:  yes
        GL npot or rect textures: yes
        GL vertex program:        yes
        GL fragment program:      yes
        GL vertex buffer object:  yes
        GL framebuffer object:    yes
        GL version is 1.4+:       yes

        Unity 3D supported:       no

    The graphics card in my PC is an ATI 6850. Please help.

    Read the article

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments I have with other people on the project is over the method of calling the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest part, since we don't store the tweets in our database, so viewing tweets means we have to call the API every time. One of the people on the project suggested that we retrieve the tweets through Javascript instead, to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests to the API. As said here:

        We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values.

    If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API every time the user wants to view tweets. Here's how I would picture what's happening every time the user makes a request: [diagram]. If we do it through Javascript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm really encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to Javascript?
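
    For reference, the server-side signing step Twitter describes boils down to an HMAC-SHA1 over a normalized base string. A minimal sketch in Java rather than the project's Ruby, purely for illustration; the base-string construction is elided, the pct() encoder is simplified, and a real app should lean on an OAuth library:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        // Sketch of OAuth 1.0a HMAC-SHA1 signing, the part that must stay
        // server-side. baseString must be "METHOD&pct(url)&pct(sortedParams)"
        // with RFC 3986 percent-encoding; building it is elided here.
        public class OAuthSigner {

            public static String sign(String baseString,
                                      String consumerSecret,
                                      String tokenSecret) throws Exception {
                // The signing key is the two secrets, encoded and joined by '&'.
                String key = pct(consumerSecret) + "&" + pct(tokenSecret);
                Mac mac = Mac.getInstance("HmacSHA1");
                mac.init(new SecretKeySpec(key.getBytes(StandardCharsets.UTF_8),
                                           "HmacSHA1"));
                byte[] raw = mac.doFinal(baseString.getBytes(StandardCharsets.UTF_8));
                return Base64.getEncoder().encodeToString(raw);
            }

            // RFC 3986 percent-encoding: unreserved characters pass through.
            private static String pct(String s) {
                StringBuilder out = new StringBuilder();
                for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
                    char c = (char) b;
                    if ((c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')
                            || (c >= '0' && c <= '9') || "-._~".indexOf(c) >= 0) {
                        out.append(c);
                    } else {
                        out.append(String.format("%%%02X", b & 0xff));
                    }
                }
                return out.toString();
            }
        }

    The server computes the signature and hands the finished Authorization values to the page; the browser then only issues the already-signed request.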

    Read the article

  • How to build the mainline kernel source package?

    - by Maxime R.
    The Ubuntu kernel PPA only provides linux-headers*.deb and linux-image*.deb packages. How can I build the corresponding linux-source*.deb package? Context: I'm currently running Ubuntu 11.10 with the mainline kernel (3.2 rc6 now) to get better support for my Sandy Bridge IGP (Dell E6420 laptop with an Intel i5-2520M CPU). As it happens, I'd like to install this touchpad driver, ALPS touchpads being badly supported (see the bug report in the previous link), while waiting for upstream support in kernel version 3.3. Problem is, DKMS keeps complaining about not finding the full kernel source:

        Module build for the currently running kernel was skipped since the
        kernel source for this kernel does not seem to be installed.

    It appears I may not need the full source, but I'd still like to try having it installed to see if it solves my problem. What I tried:

    - Uncompressing the kernel.org source archive in /usr/src/. DKMS still complaining.
    - Manually updating the kernel source package with uupdate and the mainline source package, as explained here. Did not succeed.
    - Manually building the linux-source package following @roadmr's and @elmicha's instructions. I eventually succeeded in building it, but DKMS still complained about the missing source.

    At last I noticed an error I had not caught in the first place while reinstalling the kernel headers. It appears the .deb I got may have been corrupted; downloading it again did the trick :) Alas, while DKMS agreed to compile the module, I ran into the following error, which appears to have already been reported. This issue isn't yet solved, but I won't try to solve it because of the following: in the end I decided to test the precise kernel version 3.2-rc6 through the xorg-edgers PPA, which appears to be correctly patched: it works. Nevertheless, it might still be of some interest to know how to build the mainline linux-source package, as the Ubuntu Kernel Team doesn't provide it. Not to mention that I learned a lot in the process ^^

    Read the article

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on StackOverflow and was advised to move it over to StackExchange - thank you for taking a moment to look at my question. I'm developing a project proposal for my final year project at University, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance - I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (this would most likely be over the Internet). I've done some looking around the internet and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible, and I'm wondering whether XML files may provide a better alternative? Anyone out there that has experience using SQLite and/or remotely hosted XML for the purposes of Android and/or C# development that could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question. Edit: The purpose of this application is for a small-scale business; the data source would not need to be updated by more than one source, but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at the most).

    Read the article

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible-sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects? Or is that left to the level designer? Once we have a Mesh, it can be contained in the SceneObject class. And a list of SceneObject can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (such as updating the vertices in the Mesh; I don't want to be calling glBufferData etc). Are there any good resources that discuss these issues? Are there any "common" hierarchies which should be used?
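
    One common way to keep GL calls out of the base classes (a sketch of the idea, not a prescription; all names here are illustrative, in Java for brevity): the Mesh stores plain vertex/index data plus an opaque handle, and a renderer interface owns every API-specific call, so an OpenGL backend becomes just one implementation.

        // Mesh holds raw data and an opaque GPU handle; only the Renderer
        // implementation knows about OpenGL.
        interface Renderer {
            int uploadMesh(float[] vertices, int[] indices); // returns backend handle
            void drawMesh(int handle);
        }

        class Mesh {
            final float[] vertices;
            final int[] indices;
            private int gpuHandle = -1; // filled in by whichever backend uploads it

            Mesh(float[] vertices, int[] indices) {
                this.vertices = vertices;
                this.indices = indices;
            }

            void upload(Renderer r) {
                gpuHandle = r.uploadMesh(vertices, indices);
            }

            void draw(Renderer r) {
                r.drawMesh(gpuHandle);
            }
        }

        // An OpenGL backend would implement Renderer and call glBufferData,
        // glDrawElements, etc. inside uploadMesh/drawMesh; the Mesh never does.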

    Read the article

  • How to fix "Disk drive for /boot/efi is not ready or not present"?

    - by N.N.
    After I updated the BIOS/UEFI to version 1101 on an Asus P8Z68-V PRO motherboard, Ubuntu (11.10) did not boot. After POST, all I saw was a black screen with a blinking cursor in the top left corner. I booted an Ubuntu 11.10 live CD and set the flag for the 20 MB partition before my boot partition to "bios_grub". Then I was able to boot and log in. But now, every time I boot and Ubuntu loads, I get the following message:

        Disk drive for /boot/efi is not ready or not present.
        Continue waiting or press s to skip or m for manual recovery.

    I am able to log in if I choose to ignore it by pressing s, but what does this message mean? How can I fix what the message warns about? After logging in, I have noticed that /boot/efi is empty. The following forum post speaks of the same issue: ubuntuforums.org/showthread.php?t=1893030. Updating to the latest BIOS/UEFI, version 3203, did not have any effect on this issue.

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant, as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems that we sometimes get. As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one, it will leave the other product broken.

    Figure: Hopefully this will be more uneventful

    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.

    Figure: There are a lot of components to update

    With this update also comes an update to .NET as well as many other components.

    Figure: I downloaded the full 1.5 GB, but you could do a web install

    It depends on how good your internet connection is as to how long it would take to download, but as I am now in the US I decided not to trust the internet connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow.

    Figure: I did not need to download, but that would increase the install time

    So on my main computer again this was fast, but again on my netbook this took a little while.

    Figure: The actual install took around 30-40 minutes (2 hours on netbook)

    I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010, I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience.

    Figure: As I suspected, no problems with the install

    Figure: Checking in Visual Studio shows that all the servicing points were successful

    This was an easy experience, even if the SP was over 1.5 GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not seeing holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • About online game servers and how to handle data

    - by TreantBG
    So my question isn't about what technology to use or how to do this or that; it's a more general question. I'm currently developing an action third-person shooter with RPG elements - weapon/armor upgrades and items. Players will be able to create new games or join old ones. So my question is how to create the game server that players will play in. I have two ideas in mind:

    1. The player who made the game is the server. All data passes through him, and he sends this data to our server, which updates the database of the players with their XP points, kill/death scores and other stats.

    2. My host machine is the server; the player who made the game just opens a new instance on my host and acts like a client. All players send their input data to the host, and the host updates the game and sends responses back to the clients with any new changes, like where the enemy is and so on.

    If I choose option 1, is there a chance the host could change the game content and manipulate the game results? (I think there is, but I'm not sure.) And if I choose option 2, isn't that raising the response time and potentially the game lag? Or maybe there is another option?

    Read the article

  • Is there an excuse for excessively short variable names?

    - by KChaloux
    This has become a large frustration with the codebase I'm currently working in; many of our variable names are short and undescriptive. I'm the only developer left on the project, and there isn't documentation as to what most of them do, so I have to spend extra time tracking down what they represent. For example, I was reading over some code that updates the definition of an optical surface. The variables set at the start were as follows:

        double dR, dCV, dK, dDin, dDout, dRin, dRout;
        dR = Convert.ToDouble(_tblAsphere.Rows[0].ItemArray.GetValue(1));
        dCV = Convert.ToDouble(_tblAsphere.Rows[1].ItemArray.GetValue(1));
        ... and so on

    Maybe it's just me, but they told me essentially nothing about what they represented, which made understanding the code further down difficult. All I knew was that each was a value parsed out of a specific row of a specific table, somewhere. After some searching, I found out what they meant:

        dR    = radius
        dCV   = curvature
        dK    = conic constant
        dDin  = inner aperture
        dDout = outer aperture
        dRin  = inner radius
        dRout = outer radius

    I renamed them to essentially what I have up there. It lengthens some lines, but I feel like that's a fair trade-off. This kind of naming scheme is used throughout a lot of the code, however. I'm not sure if it's an artifact from developers who learned by working with older systems, or if there's a deeper reason behind it. Is there a good reason to name variables this way, or am I justified in updating them to more descriptive names as I come across them?

    Read the article

  • Ubuntu 12.10: Installing proprietary Nvidia driver causes freeze at boot

    - by Greg
    Ok, so I just installed Ubuntu on my laptop, and I immediately encountered an issue: the HDMI audio output won't work. Yes, I know about the sound settings thing where you have to select the HDMI option, but even when it's selected I get no sound out of the TV I'm hooking it up to. This is a dealbreaker for me, because my laptop speakers are terrible; it's one of the big reasons I use my TV monitor. So I decided to work on solving the problem by upgrading my Nvidia drivers. I switched to one of the proprietary drivers offered in the software-updating utility that comes with the OS, the one option that said (tested). Voilà, sound over HDMI is now working. Unfortunately, this brings me to my next problem: when I reboot Ubuntu with this or any other proprietary driver installed, it freezes when it tries to load my desktop. As in, I can see my wallpaper, but no icons or options of any kind. The system is totally frozen, and gives me one of those "we've experienced an error, do you want to report it" messages. So there's my bind: I need HDMI audio out, that's a total dealbreaker for me, but installing the drivers that give me that capability crashes the system. Does anyone have any idea what's causing this?

    Read the article

  • Cannot get temperatures in Dell Studio 1558

    - by Athul Iddya
    I could never get proper temperatures on my Dell Studio 1558; lm-sensors and acpi give wrong readings. The output of sensors is:

        $ sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +26.8°C  (crit = +100.0°C)
        temp2:        +0.0°C  (crit = +100.0°C)

    acpi -V gives me:

        $ acpi -V
        Battery 0: Full, 100%
        Battery 0: design capacity 414 mAh, last full capacity 369 mAh = 89%
        Adapter 0: on-line
        Thermal 0: ok, 0.0 degrees C
        Thermal 0: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 0: trip point 1 switches to mode passive at temperature 95.0 degrees C
        Thermal 0: trip point 2 switches to mode active at temperature 71.0 degrees C
        Thermal 0: trip point 3 switches to mode active at temperature 55.0 degrees C
        Thermal 1: ok, 26.8 degrees C
        Thermal 1: trip point 0 switches to mode critical at temperature 100.0 degrees C
        Thermal 1: trip point 1 switches to mode active at temperature 71.0 degrees C
        Thermal 1: trip point 2 switches to mode active at temperature 55.0 degrees C
        Cooling 0: LCD 0 of 15
        Cooling 1: Processor 0 of 10
        Cooling 2: Processor 0 of 10
        Cooling 3: Processor 0 of 10
        Cooling 4: Processor 0 of 10
        Cooling 5: Fan 0 of 1
        Cooling 6: Fan 0 of 1

    I suspect even hddtemp gives bogus readings, as it's always at 46:

        $ sudo hddtemp /dev/sda
        /dev/sda: ST9500420AS: 46°C

    I have gone through some bug reports; some users had the same problem after resuming from suspend, but I always have this problem. I had updated to the latest BIOS from Windows a couple of weeks ago - will updating from Ubuntu change anything?

    CORRECTION: hddtemp's readings do change. It's now at 45.

    Read the article

  • Amazon CloudFormations and Oracle Virtual Assembly Builder

    - by llaszews
    Yesterday I blogged about AWS AMIs and Oracle VM Templates. These are great mechanisms to stand up an initial cloud environment. However, they don't provide the capability to manage, provision, and update an environment once it is up and running. This is where AWS CloudFormation and Oracle Virtual Assembly Builder come into play. In a way, these tools/frameworks pick up where AMIs and VM Templates leave off. Once again, there are similar offerings from AWS and Oracle that complement and also overlap with each other. Let's start by looking at the definitions:

    AWS CloudFormation - gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. (AWS CloudFormation)

    Oracle Virtual Assembly Builder - makes it possible for administrators to quickly configure and provision entire multi-tier enterprise applications onto virtualized and cloud environments. (Oracle VM Builder)

    As with the discussion around whether you should use an AMI or a VM Template, there are pros and cons to each:

    1. CloudFormation is JSON; Assembly Builder is GUI and CLI.
    2. VM Templates can be used in any private or public cloud environment; CloudFormation, of course, is tied to the AWS public cloud.
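
    To make the "CloudFormation is JSON" point concrete, a minimal template might look like the sketch below (the AMI ID and instance type are placeholders, not recommendations):

        {
          "AWSTemplateFormatVersion": "2010-09-09",
          "Description": "Minimal sketch: one EC2 instance, declared as a resource",
          "Resources": {
            "WebServer": {
              "Type": "AWS::EC2::Instance",
              "Properties": {
                "ImageId": "ami-12345678",
                "InstanceType": "t1.micro"
              }
            }
          }
        }

    Feeding this template to CloudFormation creates the declared resources as a stack, and updating the stack with a revised template is what gives the orderly, predictable provisioning described above.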

    Read the article
