Search Results

Search found 8224 results on 329 pages for 'sometimes'.


  • "Oracle Coherence 3.5" Book - My Humble Review

    - by [email protected]
      After reviewing the book in more detail, I say again that it is a great guide for sure. Lots of important concepts that can sometimes be somewhat confusing are reviewed in depth, including all types of caching schemes and backing maps, and the cache topologies with their corresponding performance characteristics and very useful "When to use it?" sections. Functionality that is highly desirable or heavily used is reviewed with examples and implementation best practices, including:
      - Data affinity
      - Querying
      - Pagination
      - Indexes
      - Aggregations
      - Event processing, listening and triggering
      - Data persistence
      - Security
      Regarding the networking and architecture topics, Coherence*Extend is exhaustively reviewed, including the C++ and .NET clients, with very good tips and examples, even including source code. Personally, I am also glad to see that address providers (the <address-provider> tag), a new feature in Coherence 3.5 that offers a way to programmatically provide well-known addresses for connecting to the cluster, are mentioned in the book, because they make it possible to satisfy some special configuration requirements, for example:
      - Providing a way to switch extend nodes in case of failure
      - Implementing custom load-balancing algorithms and/or dynamic discovery of TCP/IP connection acceptors
      - Dynamically assigning TCP address and port settings when binding to a server socket
      Another very interesting and useful section is the "Coherent Bank Sample Application", a great tutorial that shows how Coherence interacts and integrates with third-party products, including non-Oracle products like MS Visual Studio.

    Read the article

  • Code structure for multiple applications with a common core

    - by Azrael Seraphin
    I want to create two applications that will have a lot of common functionality. Basically, one system is a more advanced version of the other. Let's call them Simple and Advanced. The Advanced system will add to, extend, alter and sometimes replace the functionality of the Simple system. For instance, the Advanced system will add new classes, add properties and methods to existing Simple classes, change the behavior of classes, and so on. Initially I was thinking that the Advanced classes would simply inherit from the Simple classes, but I can see the functionality diverging quite significantly as development progresses, even while maintaining a core base of functionality. For instance, the Simple system might have a Project class with a Sponsor property whereas the Advanced system has a list of Project.Sponsors. It seems poor practice to inherit from a class and then hide, alter or throw away significant parts of its features. An alternative is just to run two separate code bases and copy the common code between them, but that seems inefficient, archaic and fraught with peril. Surely we have moved beyond the days of "copy-and-paste inheritance". Another way to structure it would be to use partial classes and have three projects: Core, which has the common functionality; Simple, which extends the Core partial classes for the simple system; and Advanced, which also extends the Core partial classes for the advanced system. Plus three test projects as well, one per system. This seems like a cleaner approach (see the sketch below). What would be the best way to structure the solution/projects/code to create two versions of a similar system? Let's say I later want to create a third system called Extreme, largely based on the Advanced system. Do I then create an AdvancedCore project which both Advanced and Extreme extend using partial classes? Is there a better way to do this? If it matters, this is likely to be a C#/MVC system, but I'd be happy to do this in any language/framework that is suitable.
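
    A minimal sketch of the partial-class layout (the file names and the Sponsor class are illustrative; note that partial classes cannot span assemblies, so the Core files would be shared as linked source files rather than referenced as a compiled DLL):

    using System.Collections.Generic;

    public class Sponsor
    {
        public string Name { get; set; }
    }

    // Core/Project.cs: the common part, added as a linked file to both
    // the Simple and Advanced projects.
    public partial class Project
    {
        public string Title { get; set; }
    }

    // Simple/Project.Simple.cs: compiled only into the Simple system.
    public partial class Project
    {
        public Sponsor Sponsor { get; set; }
    }

    // Advanced/Project.Advanced.cs: compiled only into the Advanced system.
    public partial class Project
    {
        public List<Sponsor> Sponsors { get; } = new List<Sponsor>();
    }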

    Read the article

  • A Brief Soul Session with Joss Stone

    - by Oracle OpenWorld Blog Team
     By Karen Shamban
     The Oracle OpenWorld Music Festival is thrilled to have Joss Stone as one of its featured artists. Stone took a few moments from her busy tour and travel schedule to answer a few questions for this blog, so read on:
     Q. What do you like best about performing in front of a live audience?
     A. I love to bring the music to the people! It's all fun and games in the studio, and I love it, but the time comes when the world needs to hear it and it's nice to see their faces when they are hearing new songs.
     Q. Do you prefer smaller, intimate venues or larger, louder ones? Why?
     A. I like the smaller ones sometimes, but it really depends on who is in the audience. I prefer it regardless of size when the audience is with you from the start and they dance and let the music take them over - as it does me when I'm on stage.
     Q. What about your fans surprises you?
     A. Not a lot really, they have always been very very sweet and polite and giving and loving. It doesn't surprise me because that's what the effect of music is. For the most part they are beautiful people.
     Little-known fact: Not only is Stone an award-winning musician, she acted in an award-winning television series, Showtime's The Tudors. Stone played Anne of Cleves, Henry VIII's fourth wife. Not only did she keep her cool - she kept her head.
     More about the Oracle OpenWorld Music Festival. More about Joss Stone.

    Read the article

  • wine 1.4 regedit makes screen flicker on 12.04 with dual monitor setup

    - by s1lv3r
    I have a dual-monitor setup running two 23" screens at 1920x1080 which has the following problem: when running any wine application (for example "wine regedit" from a console) the screen flickers and the windows show artifacts. Also, sometimes taking a screenshot using the Print key will make compiz crash (the launcher and all window bars/menus are gone) while a wine application is running. I don't have the same problems on my notebook, which has the same setup; the only difference is that the notebook has ATI graphics and this PC has nvidia. This is the output of lshw -c video:
    *-display
        description: VGA compatible controller
        product: G72 [GeForce 7300 LE]
        vendor: NVIDIA Corporation
        physical id: 0
        bus info: pci@0000:07:00.0
        version: a1
        width: 64 bits
        clock: 33MHz
        capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
        configuration: driver=nvidia latency=0
        resources: irq:16 memory:fa000000-faffffff memory:d0000000-dfffffff memory:fb000000-fbffffff memory:fce00000-fce1ffff
    I also noticed that running xrandr from a console makes the screen flicker for a few seconds on this PC, which also doesn't happen on my notebook. Removing one screen from the setup stops the flickering and the artifacts inside the wine applications. Does anybody have advice on what I could try to change to make this work?

    Read the article

  • improve Collision detection memory usage (blocks with bullets)

    - by Eddy
    I am making a 2D action platformer, something like Megaman, using XNA. I have already made player physics, collisions, bullets, enemies and AI, a map editor, and a scrolling X/Y camera (about 75% of the game is finished). As I progressed I noticed that my game would be more interesting to play if bullets were destroyed on collision with regular (stationary) map blocks. The only problem is that if I use my collision detection (each bullet against each block) it sometimes begins to lag. (By the way, if a bullet exits the screen, the player can see that it is removed from the bullet list.) So how can I improve my collision detection so that the memory usage wouldn't be so high? :) (On a map of 300x300 blocks, for example; I don't think a bigger map would be needed.)
    int block = 0;
    int bulet = 0;
    bool destroy_bullet = false;
    while (bulet < bullets.Count)
    {
        // bullets and blocks are Lists holding objects of the bullet and
        // block classes; P_Bul_rec is just the bullet's rectangle
        while (block < blocks.Count)
        {
            if (bullets[bulet].P_Bul_rec.Intersects(blocks[block].rect))
            {
                destroy_bullet = true;
            }
            block++;
        }
        if (destroy_bullet)
        {
            bullets.RemoveAt(bulet);
            destroy_bullet = false;
        }
        else
        {
            bulet++;
        }
        block = 0;
    }
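
    One common improvement, sketched here with some assumptions (the Block class name is illustrative, and the blocks are assumed never to move): bucket the stationary blocks into a uniform grid once, so that each frame a bullet only tests the few blocks in the cells its rectangle overlaps, instead of every block on the map.

    // Assumes: using System.Collections.Generic; using Microsoft.Xna.Framework;
    // Build once, e.g. after loading the map. CellSize should be at least
    // as large as the biggest block.
    const int CellSize = 64;
    var grid = new Dictionary<Point, List<Block>>();
    foreach (Block blk in blocks)
        for (int cx = blk.rect.Left / CellSize; cx <= blk.rect.Right / CellSize; cx++)
            for (int cy = blk.rect.Top / CellSize; cy <= blk.rect.Bottom / CellSize; cy++)
            {
                Point cell = new Point(cx, cy);
                if (!grid.ContainsKey(cell)) grid[cell] = new List<Block>();
                grid[cell].Add(blk);
            }

    // Each frame: a bullet checks only the cells it overlaps. Iterating
    // backwards makes RemoveAt safe inside the loop.
    for (int i = bullets.Count - 1; i >= 0; i--)
    {
        Rectangle r = bullets[i].P_Bul_rec;
        bool hit = false;
        for (int cx = r.Left / CellSize; cx <= r.Right / CellSize && !hit; cx++)
            for (int cy = r.Top / CellSize; cy <= r.Bottom / CellSize && !hit; cy++)
            {
                List<Block> cellBlocks;
                if (grid.TryGetValue(new Point(cx, cy), out cellBlocks))
                    foreach (Block blk in cellBlocks)
                        if (r.Intersects(blk.rect)) { hit = true; break; }
            }
        if (hit) bullets.RemoveAt(i);
    }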

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, there seems to be something of a consensus that generalist programming languages (those that try to be good at everything: supporting multiple paradigms, supporting both very high- and very low-level programming, and so on) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this view is flawed:
    1. Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.
    2. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing, when some new requirement surfaces, that it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain-specific and/or have very poor concurrency/parallelism support.
    3. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these drops drastically if you're forced to use a different language than the one you've accumulated all this code in.
    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying, or alive and well?

    Read the article

  • mod_rewrite works within directory not on root

    - by Anvesh Saxena
    I am having a problem with my RewriteRule for the tags portion. What I have been able to debug is that the rule is being triggered, at least, because the page "tags.php" is rendered, but without the URL parameters. The .htaccess file with these rules is within the root of my sub-domain and has the following content for the tags portion:
    # Rewrite rules for tags
    RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2
    RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1
    RewriteRule ^tags/?$ tags.php?tag_name=
    Another thing that I haven't been able to debug is that a similar .htaccess file exists for a directory within my sub-domain and is working as expected, with the necessary URL parameters also available. The .htaccess file within the directory reads as follows:
    # Rewrite rules for tags
    RewriteRule ^tags/(\w+)/(\d+)/?$ restAPI.php?type=tags&tag_name=$1&tag_id=$2
    RewriteRule ^tags/(\w+)/?$ restAPI.php?type=tags&tag_name=$1
    RewriteRule ^tags/?$ restAPI.php?type=tags&tag_name=
    Could anyone point out the problem that I might have in my rewrite rules? I am also sometimes facing an Internal Server Error, which I am second-guessing is due to the same underlying problem. Note: I have Apache version 2.2.23 on my shared hosting.

    Read the article

  • How can I debug VNC screen repainting issues?

    - by stevecoh1
    I have what some might consider a trivial use for VNC, but I'd like to get it to work, and it's technically interesting to me. My use case is that I'd sometimes like to be able to control my desktop from my living room while watching TV. The desktop runs Ubuntu, currently 12.04, but that may change soon. I'm using the default Vino server. I'd like to control it from my iPad, and I have a nicely performing WiFi network. I got the well-regarded (if reviews can be believed) app VNC Viewer for the iPad. It's not working as well as I'd hoped. The problem is the speed of repainting: it's abysmally slow. I can click a close button, walk over to the desktop and see that the window has closed, but on the iPad the VNC client won't show the close for minutes, if ever. I've noticed that closing windows takes a lot longer to show up than opening them. So the question is: is this primarily client-caused or server-caused? And if server-caused, what can be done about it? Is Vino the best server, or is something else better? Thanks

    Read the article

  • Using PreApplicationStartMethod for ASP.NET 4.0 Application to Initialize assemblies

    - by ChrisD
    Sometimes your ASP.NET application needs to hook up some code before the application is even started. Assemblies support a custom attribute called PreApplicationStartMethod, which can be applied to any assembly that is loaded by your ASP.NET application; the ASP.NET engine will call the method you specify in it before actually running any of the code defined in the application. Let's discuss how to use it, step by step:
    1. Add an assembly to the application and add this custom attribute to its AssemblyInfo.cs. Remember, the method you specify for initialization must be a public static void method without any arguments. Let's define a method InitializeApp. You need to write:
    [assembly: PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")]
    2. After you add this to the assembly, you need to put some code inside the InitializeType.InitializeApp method within that assembly:
    public static class InitializeType
    {
        public static void InitializeApp()
        {
            // Initialize application
        }
    }
    3. You must reference this class library from your application, so that when the application starts and ASP.NET begins loading the dependent assemblies, it will call the InitializeApp method automatically.
    Warning: even though you can use this attribute easily, you should be aware that you can define this kind of method in every assembly you reference, but there is no guarantee of the order in which the methods are called. Hence it is recommended that such a method be isolated and without side effects on other dependent assemblies. The InitializeApp method is called well before the Application_Start event, and even before App_Code is compiled. This attribute is mainly used to write code that registers assemblies or build providers. Read the documentation. I hope you find this post helpful.
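
    As a self-contained sketch of the pattern (the module and namespace names are made up, and the RegisterModule call assumes the Microsoft.Web.Infrastructure package is referenced):

    using System.Web;
    using Microsoft.Web.Infrastructure.DynamicModuleHelper;

    [assembly: PreApplicationStartMethod(typeof(MyInitializer.InitializeType), "InitializeApp")]

    namespace MyInitializer
    {
        public static class InitializeType
        {
            public static void InitializeApp()
            {
                // Runs before Application_Start and before App_Code compiles,
                // which is why it is the usual place to register HTTP modules
                // or build providers.
                DynamicModuleUtility.RegisterModule(typeof(SampleModule));
            }
        }

        // A do-nothing module, just to show that the registration runs.
        public class SampleModule : IHttpModule
        {
            public void Init(HttpApplication context) { }
            public void Dispose() { }
        }
    }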

    Read the article

  • How do you manage a complexity jump?

    - by glenatron
    It seems an infrequent but common experience that sometimes you're working on a project and suddenly something turns up unexpectedly, throws a massive spanner in the works and ramps up the complexity a whole lot. For example, I was working on an application that talked to SOAP services on various other machines. I whipped up a prototype that worked fine, then went on to develop a regular front end and generally got everything up and running in a nice, fairly simple and easy-to-follow fashion. It worked great until we started testing across a wider network, and suddenly pages started timing out as the latency of the connections and the time required to perform calculations on remote machines resulted in timed-out requests to the SOAP services. It turned out that we needed to change the architecture to spin requests out onto their own threads and cache the returned data so it could be updated progressively in the background, rather than performing calculations on a request-by-request basis. The details of that scenario are not too important - indeed it's not a great example, as it was quite foreseeable, and people who have written a lot of apps of this type for this type of environment might have anticipated it - except that it illustrates how one can start with a simple premise and model and suddenly face an escalation of complexity well into the development of the project. What strategies do you have for dealing with these kinds of functional changes whose need arises - often as a result of environmental factors rather than specification changes - later in the development process or as a result of testing? How do you strike a balance between the premature-optimisation/YAGNI/overengineering risks of designing a solution that mitigates possible but not necessarily probable issues, and developing a simpler, easier solution that is likely to be as effective but doesn't incorporate preparedness for every possible eventuality?

    Read the article

  • Loading main javascript on every page? Or breaking it up to relevant pages?

    - by Kyle
    I have a 700kb decompressed JS file which is loaded on every page. Before, I had 12 JavaScript files on each page, but to reduce HTTP requests I combined them all into one file. This file is ~130kb gzipped and is served with gzip compression. However, on the client it is still unpacked and parsed on every page load. Is this a performance issue? I've profiled the JavaScript with Firebug's profiler but did not see any issues. The problem/illusion I am facing is that there are jQuery libraries in that file that are sometimes not used on the current page. For example, jQuery DataTables is 200kb compressed, and it is only loaded on 2 of my website's pages. Another is jqPlot, and that is another 200kb. I now have 400kb of excess code that isn't executed on 80% of the pages. Should I leave everything in one file? Should I take out the jQuery libraries and load only the relevant JS on the current page?

    Read the article

  • XNA C# Rectangle Intersect Ball on a Square

    - by user2436057
    I made a game like Peggle Deluxe using C# and XNA, for learning. I have two rectangles, a ball and a square field. The ball gets shot out with a cannon, and if the ball hits a square, the square disappears and the ball flies away. But the ball doesn't bounce off realistically; it sometimes flies away in a different direction or gets stuck on the edge. This is my code at the moment:
    public void Update(Ball b, Deadline dl)
    {
        ArrayList listToDelete = new ArrayList();
        foreach (Field aField in allFields)
        {
            if (aField.square.Intersects(b.ballhere))
            {
                listToDelete.Add(aField);
                Punkte = Punkte + 100;
                float distanceX = Math.Abs(b.ballhere.X - aField.square.X);
                float distanceY = Math.Abs(b.ballhere.Y - aField.square.Y);
                if (distanceX < distanceY)
                {
                    b.myMovement.X = -b.myMovement.X;
                }
                else
                {
                    b.myMovement.Y = -b.myMovement.Y;
                }
            }
        }
    }
    It changes the X or Y direction depending on how the ball hits the square, but not every time. What could cause the problem? Thanks for your answer. Greetings from Switzerland.
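
    One plausible cause, for what it's worth: the distanceX/distanceY test compares the rectangles' top-left corners, so an off-centre hit can flip the wrong axis. A sketch of a centre-based test instead (names taken from the code above; this assumes XNA's Rectangle.Center):

    // Pick the bounce axis from the penetration depth measured between
    // the two rectangles' centres; the shallower overlap is the face hit.
    Rectangle ball = b.ballhere;
    Rectangle sq = aField.square;

    float overlapX = (ball.Width + sq.Width) / 2f - Math.Abs(ball.Center.X - sq.Center.X);
    float overlapY = (ball.Height + sq.Height) / 2f - Math.Abs(ball.Center.Y - sq.Center.Y);

    if (overlapX < overlapY)
        b.myMovement.X = -b.myMovement.X;  // hit the left or right face
    else
        b.myMovement.Y = -b.myMovement.Y;  // hit the top or bottom face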

    Read the article

  • How to write constructors which might fail to properly instantiate an object

    - by whitman
    Sometimes you need to write a constructor which can fail. For instance, say I want to instantiate an object with a file path, something like
    obj = new Object("/home/user/foo_file")
    As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could:
    1. Throw an exception
    2. Return a null object (if your programming language allows constructors to return values)
    3. Return a valid object, but with a flag indicating that its path wasn't set properly (ugh)
    4. Something else?
    I assume that the "best practices" of various programming languages would handle this differently. For instance, I think ObjC prefers (2). But (2) would be impossible to implement in C++, where a constructor has no return type; in that case I take it that (1) is used. In your programming language of choice, can you show how you'd handle this problem and explain why?
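
    In C#, for instance, option (1) and a factory-method alternative might look roughly like this (a sketch; the FileThing name is made up):

    using System;
    using System.IO;

    public class FileThing
    {
        public string Path { get; private set; }

        // Option 1: the constructor validates and throws, so any
        // successfully constructed object is known to be valid.
        public FileThing(string path)
        {
            if (!File.Exists(path))
                throw new FileNotFoundException("Not a valid file path.", path);
            Path = path;
        }

        // A common alternative to option 2: a static factory that reports
        // failure without exceptions, leaving the constructor trivial.
        public static bool TryCreate(string path, out FileThing result)
        {
            result = File.Exists(path) ? new FileThing(path) : null;
            return result != null;
        }
    }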

    Read the article

  • Physics not synchronizing correctly over the network when using Bullet

    - by Lucas
    I'm trying to implement a client/server physics system using Bullet, but I'm having problems getting things to sync up. I've implemented a custom motion state which reads and writes the transform from my game objects, and it works locally, but I've tried two different approaches for networked games:
    1. Dynamic objects on the client that also exist on the server (i.e. not random debris and other unimportant stuff) are made kinematic. This works correctly, but the objects don't move very smoothly.
    2. Objects are dynamic on both, but after each message from the server saying that an object has moved, I set the linear and angular velocity to the values from the server and call btRigidBody::proceedToTransform with the transform from the server. I also call btCollisionObject::activate(true); to force the object to update.
    My intent with method 2 was basically to do method 1, but hijacking Bullet to do a poor man's prediction instead of writing my own to smooth out method 1. However, this doesn't seem to work (for reasons that are not 100% clear to me, even stepping through Bullet) and the objects sometimes end up in different places. Am I heading in the right direction? Bullet seems to have its own interpolation code built in. Can that help me make method 1 work better? Or is my method 2 code not working because I am accidentally stomping on that?

    Read the article

  • Impossible to select folders and files with mouse (Ubuntu 12.04)

    - by François
    First-time post for me here (after being a regular reader for two years though), so thank you all for the quality of the replies and help provided. My problem is apparently very simple but a tricky one. I just installed Ubuntu 12.04(1) along with the GNOME 3 shell environment on my new desktop PC, an Acer Aspire X3995 (see config below). Everything works (more or less) so far (I still have problems with sound, and disabled two-finger gestures on my screen, which I will have to deal with via X config settings, I think), but the main problem is that I cannot select files/folders with my USB mouse. When I try to double-click on them, nothing happens (sometimes a folder or file is selected, but then unselected again). Note that navigation works perfectly from the USB keyboard and from the touch-screen (I am using a 23" wide touch-screen Acer monitor, a T231Hbmid). Also, the mouse works perfectly for other menu navigation, with the only oddity being that the text of certain menus is selected as if I were holding down the left button. So I assume the problem is related only to the mouse. Needless to say, the usual basic hardware checks have been performed (unplugging, powering off, etc.). My level is simply "advanced user", meaning that if you provide me with intelligible input I should find my way, but please don't expect too much technical/specific knowledge... :)
    Please let me know if you need more information on this bug. Now, fingers crossed... and thanks in advance!
    Ciao, François
    Config of Acer Aspire X3995: Ubuntu 12.04 / GNOME 3 shell environment / Intel Core i5 3450 / nVidia GeForce 605, 1Gb. Screen: Acer Monitor TFT 23" wide T231Hbmid

    Read the article

  • Cannot install Ubuntu on an Acer Aspire One 756

    - by Byron807
    I have used Ubuntu before, in virtual machines, but today I decided to make the leap and I bought a netbook to install Ubuntu as a "real" OS alongside Windows. The netbook I bought is an Acer Aspire One 756, with a 64-bit Intel processor, 4GB RAM, and Windows 8 as the default OS. I have now encountered several obstacles that actually prevent me from installing Ubuntu 12.10. Here are all the things I have tried so far:
    - Used a live CD, in combination with a USB DVD drive. (I should point out that the Aspire One does not have an optical drive.) The computer does not boot into Ubuntu; the drive keeps spinning, but nothing happens, even though I changed the boot order in the BIOS.
    - Used a USB drive created via the tool available on pendrivelinux.com. Again, I made changes to the BIOS to make sure the computer tries to boot from USB before using the built-in HDD. The results vary in this case: sometimes the computer keeps rebooting like crazy until I remove the USB drive, at which point it boots into Windows 8, as expected. If I use a different USB drive, I get an error message saying that the USB drive has been blocked due to "the current security policy".
    - Tried to install Ubuntu via Wubi. The program appears to install something, but at some point during the installation process I get an unspecified error message, and nothing else happens.
    I am not sure if these are known issues; in any case, searching the forum has not yielded any results, so I thought I should simply describe my problem here in the hope that this question has not been answered before. I would greatly appreciate any help with this annoying problem. Of course, if anything is unclear, do not hesitate to ask for further details.

    Read the article

  • GNOME changes to KDE on Maverick

    - by Pit
    Hi, I recently made a clean/fresh install of Ubuntu 10.10 and experienced a problem where the theme changes from GNOME to KDE and back, randomly and partially. Sometimes after starting my computer I will have a KDE theme. If I then open Appearance Preferences (System - Preferences - Appearance), some applications and the main menu, as well as the menus of applications (only the upper two centimetres of an application window), will change back to GNOME merely by my opening it, without my changing or saving anything. I did not choose KDE at any time, nor did I make any changes to the appearance prior to the first occurrence of this bug. On a second computer I updated from 10.04 to 10.10 and experienced the same bug. On that computer, however, I did change the layout of the minimize/maximise/close buttons by following "How do I move the Window buttons from left to right?". But I don't see how this could provoke the bug, especially as it also occurs on the second computer. While installing updates I saw that quite a lot of KDE packages were being downloaded. Do I even need those if I don't want to use KDE?

    Read the article

  • Using EC2 instance as main development platform

    - by David
    My problem: I am working as a consultant for various companies. Each company provides me with a laptop with their software on it, and I also have my own, where I have my development environment. I tend to buy a new laptop every second year and find myself spending lots of time configuring it and installing software. I also spend a lot of time waiting for my laptop to process things. To solve all these issues, I am now considering using EC2 (running Windows instances) as my main development platform and just accessing it from whatever PC I happen to be at. I calculated that running the Large instance (the cheapest 64-bit one) for 8 hours a day costs me $960 per year, which is acceptable. I imagine that as I approach the workplace each day, I will make a single tap on my phone to fire up the instance, so it is ready when I get to work. I would have different icons on my phone to fire up the various instance types. The same software would of course automatically be loaded on the various hardware (sometimes I would even need the instance with 68.4 GB of memory). Another advantage is that if I am having a specific problem with my instance, I could fire up another instance and have someone look into the problem and update the image. My question: does anyone have experience with such a setup on EC2? What kinds of problems do you foresee?
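
    For what it's worth, the "single tap" part is easy to script. Here is a sketch using the AWS SDK for .NET (an assumption on my part; the instance ID is a placeholder, and credentials/region come from the standard SDK configuration):

    using System;
    using System.Collections.Generic;
    using Amazon.EC2;
    using Amazon.EC2.Model;

    class StartDevBox
    {
        static void Main()
        {
            var ec2 = new AmazonEC2Client();

            // "i-0123456789abcdef0" stands in for the dev instance's real ID.
            var response = ec2.StartInstances(new StartInstancesRequest
            {
                InstanceIds = new List<string> { "i-0123456789abcdef0" }
            });

            foreach (var change in response.StartingInstances)
                Console.WriteLine("{0}: {1} -> {2}", change.InstanceId,
                    change.PreviousState.Name, change.CurrentState.Name);
        }
    }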

    Read the article

  • Intermittent ethernet connectivity

    - by Amey Jah
    I am facing a weird problem. I connect to my DSL modem via ethernet. After booting, Ubuntu 10.10 sometimes does not detect the eth0 card properly or does not connect to the modem. I have made the following observations:
    - My modem is fully on. All indicators are on except the ethernet one, which blinks continuously.
    - My network indicator applet tells me that I am connected to the modem, but I am not able to browse, as no connection with the modem has actually been established.
    - I need to power the modem off and on (restart it) several times, or disconnect/connect (via the network applet) several times, before the system can establish a connection with the modem. Once it establishes the connection, the ethernet LED on the modem stops blinking and glows continuously.
    However, I do not see this problem every time I reboot or start my computer; 3 times out of 5 it connects to the modem in a single attempt. Friends, I am not able to understand the root cause of this problem, and I do not know what kind of logs I should attach. If you want any logs, please give me the command to run and I will paste the result for you. Note: I ran system testing when the above problem happened. Follow this link to download the result.

    Read the article

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe, but it makes me very... let's say specific about certain things, programming being one of them) or it may be the fact that I graduated college and still feel "meh" at programming. Reading this made me think "Oh, that's me!", but that's not really my main problem. My big problem is: any time I'm using a high-level language/API/etc., I always think to myself that I'm not really "programming". I know, I know... it sounds stupid. But I feel like, if I can't figure out how to do it at the lowest level, then I'm not really "understanding" it. I do this for just about every new technology I learn: I look at the lowest level and try to understand it. Sometimes I do... most of the time I don't. I mean, I've only really been programming for 4 years (at college, if you even call it programming... our university's program was "meh"). For instance, I do a little bit of embedded programming (with the Atmel AVR 8-bit/Arduino stuff), and I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly... it's stupid, I know. Does anyone else feel like this? I think it's just my OCD that makes me feel this way... but has anyone else ever felt like they need to go down to the lowest level of a language to even be satisfied with using it? I apologize for the very, very odd question, but I think this really hinders me in getting deeply invested in a programming language and making a real application of my own. (It's silly, I know.)

    Read the article

  • Is there a secure way to add a database troubleshooting page to an application?

    - by Josh Yeager
    My team makes a product (business management software) that our customers install on their own servers. The product uses a SQL database for data storage and app configuration. There have been quite a few cases where something strange happened in a customer's database (caused by bugs in our app, and sometimes by admins who mess with the database directly). To figure out what is wrong with the data, we have to send SQL scripts to the customer and tell them how to run the scripts on their database server. Then, once we know how to fix it, we have to send another script to repair the data. Is there a secure way to add a page to our application that allows an application admin to enter SQL scripts that read and write directly to the database? Our support team could use that to help customers run these scripts without needing direct access to the SQL server. My big concern is that someone might abuse this power to read data they shouldn't have, or to erase or modify data they shouldn't be able to modify. I'm not worried about system admins, because they could find another way to do the same thing. But what if someone else got access to the form? Is there any way to do this kind of thing securely?

    Read the article

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant, as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems that we sometimes get. As per Brian's post, I am installing Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one, it will leave the other product broken.
    Figure: Hopefully this will be more uneventful
    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.
    Figure: There are a lot of components to update
    With this update also comes an update to .NET, as well as many other components.
    Figure: I downloaded the full 1.5GB, but you could do a web install
    How long the download takes depends on how good your internet connection is, but as I am now in the US, I decided not to trust the internet connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow.
    Figure: I did not need to download, but that would increase the install time
    So on my main computer, again, this was fast, but on my netbook it took a little while.
    Figure: The actual install took around 30-40 minutes (2 hours on the netbook)
    I was pretty impressed with the speed of the install, and as Team Explorer now ships in the box with Visual Studio 2010, I didn't hit the old problem of the SP being installed before Team Explorer and ending up with a disjointed experience.
    Figure: As I suspected, no problems with the install
    Figure: Checking in Visual Studio shows that all the servicing points were successful
    This was an easy experience, even if the SP was over 1.5GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not running into holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

    Read the article

  • dpkg error when using apt-get install

    - by V-T
    I upgraded to Ubuntu 14.04 from 12.04, and every time I use apt-get install for any package it ends with a bunch of errors about processing some of my LaTeX packages. A snippet is included below:
    Sometimes, not accepting conffile updates in /etc/texmf/updmap.d
    causes updmap-sys to fail. Please check for files with extension
    .dpkg-dist or .ucf-dist in this directory
    dpkg: error processing package tex-common (--configure):
     subprocess installed post-installation script returned error exit status 1
    dpkg: dependency problems prevent configuration of lmodern:
     lmodern depends on tex-common (>= 3); however:
      Package tex-common is not configured yet.
    The errors can be reproduced by running sudo dpkg --configure -a; the full list of packages with this error is:
    Errors were encountered while processing: tex-common texlive-publishers tex-gyre texlive-latex-extra-doc texlive-fonts-extra-doc texlive-lang-english texlive-luatex texlive-generic-recommended texlive-pstricks-doc texlive-fonts-recommended latex2html latex-xcolor texlive-pictures texlive-fonts-extra texlive-pictures-doc asymptote texlive-bibtex-extra texlive-latex-recommended-doc texlive-latex-recommended doxygen-latex texlive-pstricks tipa texlive-latex-base texlive-fonts-recommended-doc latex-beamer texlive-font-utils texlive-latex-base-doc texlive-latex-extra texlive-extra-utils texlive texlive-publishers-doc lmodern
    Any ideas on how to fix this?

    Read the article

  • From Oracle PL/SQL Developer to Java programmer - Is it a good decision? [on hold]

    - by user3554231
    I will explain my question in simple words. I have a little over 1 year of experience with Oracle. My dream is to be called a "Developer", be it a database developer if not a software developer. But right now I don't develop anything, nor am I in good touch with PL/SQL and other Oracle utilities like SQL*LOADER, shell scripting, and the like, as I am only a system analyst who analyzes and configures a database using SQL queries. To be honest, I know very basic PL/SQL and have good knowledge of SQL, but that won't ever give me a chance to be a developer, as I am lagging way behind the knowledge of "real" developers. Now I feel I should learn Java as well, so that I can cope with the competition. But I am too scared to learn new things, as it will take much more time, which will indirectly increase my useless work experience (just analyzing), which is worth nothing in today's market. Moreover, I am too lazy to work hard, i.e. to study rather than work during office hours. To sum it up, I am lazy and confused and scared, but I want to learn things as well, and I don't know if I am intelligent enough to learn the whole of PL/SQL or to master any other language. Is there any other way I can gain confidence? Actually, I sometimes even feel that if I still don't achieve my goal after 2-3 years, I won't ever be able to reach my destination. I just want to live my dream of being a developer. Give me some tips and hopes, but not false hopes.

    Read the article

  • How do you structure your shared code so that it is "re-findable" for new developers?

    - by awmckinley
    I started working at my current job about 8 months ago, and it's been one of the best experiences I've had as a young programmer. It's a small company, and both my co-developers are brilliant guys. One of the practices that they both encourage is lots of code reuse. Our code base is mainly C#, and we're using a centralized revision control system. The way the repository is currently structured, there is a single folder in which all shared class libraries are placed (along with unit tests for each library), and our revision control system allows for sharing or linking those libraries out to other projects. What I'm trying to understand at this point is how the current structure of the folder can be made more conducive to finding those libraries again. I've talked to the other developers about this, and they agree that it's gotten a little messy. I find that I am sometimes "reinventing the wheel" because I didn't realize that there was an existing piece of code that solved a particular problem. The issue is complicated further by the fact that we're sharing some code between ASP.NET MVC2, WinForms, and Windows CE projects, and sharing code between applications built against multiple versions of .NET. How do other people approach this? Is the answer in naming the libraries in a certain way, or is it preferable to invest in some code-search software? Is the answer in doc comments? Should we be sharing libraries at all, or should we simply branch the class libraries for reuse? Thanks for any and all help!

    Read the article
