Search Results

Search found 4109 results on 165 pages for 'plan'.

Page 89/165 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • The premier support for Sun Cluster 3.1 ended

    - by JuergenS
    In October 2011 the premier support for Sun Cluster 3.1 ended. See the Oracle Lifetime Support Policy for Oracle and Sun System Software document for details. There is no 'Extended Support', and the 'Sustaining Support Ends' date is indefinite. Regarding indefinite 'Sustaining Support', I would like to point out the following from the mentioned document (version Sept. 2011), page 5. Sustaining Support does NOT include:
    * New program updates, fixes, security alerts, general maintenance releases, selected functionality releases and documentation updates or upgrade tools
    * Certification with most new third-party products/versions and most new Oracle products
    * 24 hour commitment and response guidelines for Severity 1 service requests
    * Previously released fixes or updates that Oracle no longer supports
    This means Solaris 10 9/10 Update 9 is the last qualified release for Sun Cluster 3.1, so Sun Cluster 3.1 is not qualified on Solaris 10 8/11 Update 10. Furthermore, there is a known issue with SVM patch 145899-06 or higher. This SVM patch is part of Solaris 10 8/11 Update 10. 145899-06 is the first released revision of this patch number, so support for Sun Cluster 3.1 ends with the previous SVM patches 144622-01 and 139967-02. For details about the known problem with SVM patch 145899-06, please refer to doc 1378828.1. All this means you should freeze your Sun Cluster 3.1 configuration (no patching, no upgrades) no later than Solaris 10 9/10 Update 9. Or, even better, plan an upgrade to Solaris Cluster 3.3 now to get back to full support.

    Read the article

  • Which is the best image hosting site for hosting images for the website? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements? I currently have a website and blog on a limited web hosting plan. When I upload images to my hosting server they consume a lot of bandwidth and space, so I was thinking of hosting images on some other image hosting site and hotlinking them from my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, all of them have certain restrictions. The features I am looking for are as follows:
    1. At least 10 GB of space
    2. At least 500 GB of bandwidth (because I have very high traffic)
    3. Very high speed, even under heavy load, e.g. 1000 visitors accessing every hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy control
    6. Must never delete images if inactive
    7. Ability to create and manage albums
    8. A company that will stay in business for at least the next 10 years
    9. Free of cost
    10. Hotlinking/direct linking of images

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" to be the single most important decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly; I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases: what .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux; they are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications?

    Read the article

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made a game and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops who are unable to replace them: laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience worth giving the chance to play casual games, since they cannot play any blockbusters. A very "noticeable" example I've seen is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here) and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which locks out a lot of people. So, the actual question is: what is a good tradeoff for this issue? Would it be better to just sacrifice texture resolution for everyone, but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing performance a little bit? What else? Any ideas about this topic are welcome for discussion.
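    On the "slice the textures" option, the offline step can be as simple as cutting each oversized atlas into tiles no larger than the weakest target's limit. A minimal sketch, assuming the Pillow imaging library and a hypothetical atlas.png (neither is from the original post):

        from PIL import Image  # Pillow; assumed available

        TILE = 1024  # the 1024x1024 limit mentioned above

        def slice_texture(path):
            # Cut one large atlas into TILE x TILE pieces the weakest GPU can accept.
            atlas = Image.open(path)
            w, h = atlas.size
            for y in range(0, h, TILE):
                for x in range(0, w, TILE):
                    tile = atlas.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
                    tile.save(f"{path}.{x // TILE}_{y // TILE}.png")

        slice_texture("atlas.png")  # hypothetical input file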

    Read the article

  • Announcing Spacewalk Support for Oracle Linux Basic and Premier Customers

    - by Michele Casey
    Over the years, customers migrating to Oracle Linux have asked for options to provide a transitional solution for their existing system management tools (such as Red Hat Satellite Server) while evaluating and planning migrations to Oracle's Enterprise Manager, which is offered at no additional charge with Oracle Linux Support Subscriptions. Based on this request, we are pleased to announce support for the open source community project Spacewalk, which is the basis for both Red Hat Satellite Server and SUSE Manager. Effective today, customers with Oracle Linux Basic and Premier Support subscriptions have access to a fully supported Spacewalk build which can be set up to easily manage Oracle Linux systems. Spacewalk support for Oracle Linux requires Oracle Linux 6, x86_64 for the server, and provides support for Oracle Linux 5 and Oracle Linux 6 (x86, x86_64) clients. This solution requires Oracle Database 11g Release 2 as the supported database repository for Spacewalk with Oracle Linux. Within the next several weeks, a limited-use license for the Oracle Database will be included with this offer. Until this is complete, customers may use an existing Oracle Database license or begin by downloading a 30-day trial license from eDelivery. Customers with Oracle Linux Basic and Premier subscriptions will automatically have access to the channel hosting the supported build. Please review the release notes for further instructions. Oracle Enterprise Manager is still the recommended enterprise solution for managing Oracle Linux systems, and we want to provide the easiest transition path for our customers. We are excited to offer this solution to our Oracle Linux customers while they plan and implement their migration to Oracle Enterprise Manager.

    Read the article

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's an alternative to the more traditional waterfall model, in which a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express Edition as an IDE. If we work on a sprint and add in a few new features, but do not plan to release until a later sprint is complete, then regression testing is not an issue as such: we just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal; it took 3 or 4 days, and the devs simply fixed up any bugs found in the regression phase. But now, as the app gets larger and larger and incorporates more and more features, the regression is stretching out for weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either; how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance. Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?
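    On the "how complicated can the tests be?" point: a typical unit test is a few lines that pin down one behavior of one small unit of logic, and the shape is the same in NUnit as in any xUnit-style framework. A minimal sketch, written with Python's unittest purely for illustration (the function under test is hypothetical):

        import unittest

        def apply_discount(price, percent):
            # Hypothetical unit under test: one small, pure piece of app logic.
            if not 0 <= percent <= 100:
                raise ValueError("percent out of range")
            return round(price * (100 - percent) / 100, 2)

        class ApplyDiscountTests(unittest.TestCase):
            def test_typical_discount(self):
                self.assertEqual(apply_discount(200.0, 25), 150.0)

            def test_out_of_range_percent_rejected(self):
                with self.assertRaises(ValueError):
                    apply_discount(200.0, 150)

        if __name__ == "__main__":
            unittest.main()

    Tests at this granularity stay fast, which is what lets them shrink a weeks-long manual regression; UI-driving tests (like CodedUI) sit a level above and are usually slower and more brittle.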

    Read the article

  • Antenna Aligner Part 6: Little Robots

    - by Chris George
    A week ago I took temporary ownership of an HTC Desire S so that I could start testing my app under Android. Support for Android was not in my original plan, but when Nomad added support for it recently, I started thinking, why not! So with some trepidation, I clicked the Build for Android button on the Nomad toolbar... nothing. Hmm... that's not right, I was expecting something to build. After a bit of faffing around I finally realised that I hadn't read the text on the Android setup page properly (yes that's right, RTFM!), and I needed a two-part application identifier, separated by a dot. I did this (not sure what the two-part thing is all about; that one is on my list to investigate!). After making the change, the Android build worked and created the apk file. I uploaded this to the device and nervously ran it... it worked!!! Well, more or less! There was no splash screen, but this was no surprise because I only have the iOS icons and splash screen in my project at the moment. More concerning, the compass updates didn't seem to be working. I suspect this is a result of using an iOS-specific option in the PhoneGap compass watcher. Another thing to investigate. I've also just noticed that the CSS gradient background hasn't worked either... These issues aside, it was actually more successful than I was expecting, so happy days! Right, let's get Googling... Next time: preparing for submission to the App Store! :-)

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating the size with the formula gl_PointSize = in_point_size * some_factor / distance_to_camera, to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate. I would put all that in an interleaved vertex array. But as you can see, there is much redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/glDrawElements work, I see no way to optimise this. Do you know of a better approach to organising per-particle data? Should I use buffers or vertex arrays, or is there no difference, given that each frame I have to update all the particles' data? About the particle simulation: where should it run, on the CPU or in a vertex processor? Something tells me a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (Animation of the camera passing through the particles should look realistic.)
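    To make the quad-per-particle layout concrete, here is a minimal CPU-side sketch in Python (illustrative only: the particle fields, the half-size value and the screen-aligned corners are assumptions, the camera-facing rotation is omitted for brevity, and it uses indexed GL_TRIANGLES rather than a strip for simplicity). It builds one interleaved array (position, color, uv) plus an index array with two triangles per particle, which also shows the redundancy described above:

        # Four screen-aligned corners per particle: (sign_x, sign_y, u, v).
        CORNERS = [(-1, -1, 0.0, 0.0), (1, -1, 1.0, 0.0),
                   (1, 1, 1.0, 1.0), (-1, 1, 0.0, 1.0)]

        def build_buffers(particles, half):
            vertices, indices = [], []
            for i, p in enumerate(particles):
                for sx, sy, u, v in CORNERS:
                    vertices += [p.x + sx * half, p.y + sy * half, p.z]  # position
                    vertices += list(p.color)  # rgba, duplicated four times per particle
                    vertices += [u, v]         # uv, identical for every particle
                base = i * 4
                # Two triangles per quad, six indices per particle.
                indices += [base, base + 1, base + 2, base, base + 2, base + 3]
            return vertices, indices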

    Read the article

  • How to view/mount other partitions on your hard drive

    - by Preston Zacharias
    Recently I installed Ubuntu 12.04 Beta 2 from a USB flash drive onto an old external HDD, which I had taken out of its casing and successfully mounted in my desktop computer. There is no other operating system besides the newly installed Ubuntu; however, there is about 500 GB of data on the drive. This is why I used partitioning software on my Windows 7 netbook to partition the hard drive, setting aside 1 TB for files, 350 GB for Linux and the remaining 650 GB for Vista, which I plan on installing soon. But this is where the problem set in: when installing Ubuntu, the installer did not recognize that the drive was partitioned at all; it appeared as just one big open block of space. So I used the installer's built-in partitioning feature to set aside 300 GB for the main Ubuntu install and 50 GB for swap space. I set both of these partitions to be created at the "end" so that it wouldn't delete or overwrite my data. And this is where I am really lost: when booting into Ubuntu I can use it perfectly fine, get on the internet, etc., but I have NO CLUE how to view the files that were previously on the drive (all of the data I had prior to the install). How can I mount/view the other partition so that I have access to my data? Thank you ahead of time! I REALLY appreciate any help or advice! ~Preston

    Read the article

  • JavaScript loaded external content SEO

    - by user005569871
    I wonder what the best way is to have JavaScript-loaded content indexed by search engines. I know that search engines don't execute JavaScript, but I am thinking more of progressive enhancement. I am creating a responsive website, and on the home page I will have some sections for most-visited products and recommended products that I plan to load depending on the device detected. These products will be in sliders, with thumbnail images and the names of the products. If a mobile device is detected, the slider content will not load, and a link to the external page will be shown instead. I know that the external content will be indexed via the links to those resources. Where will users be directed from search in this case: to the external page or to the home page? Will it be bad for SEO if I show only the product names on the front page, so they can be indexed, and hide them with CSS? What is the best way to get that content indexed and, if possible, direct users from search to the home page? Also, I've seen the AJAX crawling scheme, but I would rather not use it if there is a better way.

    Read the article

  • Compiling GCC or Clang for thumb drive on OSX

    - by user105524
    I have a MacBook that I don't have admin rights on, on which I would like to be able to use either GCC or Clang. Since I lack admin rights, I can't install binutils or a compiler to the /usr directory. My plan is to install both of these (using an old MacBook that I do have admin rights on) to a flash drive and then run the compiler off of there. How would one go about building GCC or Clang so that it could run just off a thumb drive? I've tried both but haven't had any success. I've tried defining as many of the directories as possible through configure, but haven't been able to build successfully. My current configure invocation for gcc-4.8.1 is (where USB20FD is the thumb drive):

        ../gcc-4.8.1/configure --prefix=/Volumes/USB20FD/usr \
            --with-local-prefix=/Volumes/USB20FD/usr/local \
            --with-native-system-header-dir=/Volumes/USB20FD/usr/include \
            --with-as=/Volumes/USB20FD/usr/bin/as \
            --enable-languages=c,c++,fortran \
            --with-ld=/Volumes/USB20FD/usr/bin/ld \
            --with-build-time-tools=/Volumes/USB20FD/usr/bin \
            AR=/Volumes/USB20FD/usr/bin/ar \
            AS=/Volumes/USB20FD/usr/bin/as \
            RANLIB=/Volumes/USB20FD/usr/bin/ranlib \
            LD=/Volumes/USB20FD/usr/bin/ld \
            NM=/Volumes/USB20FD/usr/bin/nm \
            LIPO=/Volumes/USB20FD/usr/bin/lipo \
            AR_FOR_TARGET=/Volumes/USB20FD/usr/bin/ar \
            AS_FOR_TARGET=/Volumes/USB20FD/usr/bin/as \
            RANLIB_FOR_TARGET=/Volumes/USB20FD/usr/bin/ranlib \
            LD_FOR_TARGET=/Volumes/USB20FD/usr/bin/ld \
            NM_FOR_TARGET=/Volumes/USB20FD/usr/bin/nm \
            LIPO_FOR_TARGET=/Volumes/USB20FD/usr/bin/lipo \
            CFLAGS=" -nodefaultlibs -nostdlib -B/Volumes/USB20FD/bin -isystem/Volumes/USB20FD/usr/include -static-libgcc -v -L/Volumes/USB20FD/usr/lib " \
            LDFLAGS=" -Z -lc -nodefaultlibs -nostdlib -L/Volumes/USB20FD/usr/lib -lgcc -syslibroot /Volumes/USB20FD/usr/lib/crt1.10.6.o "

    Any obvious ideas about which of these options need to be turned on to install the appropriate files on the thumb drive during installation? What other magic occurs during Xcode installation which isn't occurring here? Thanks for any suggestions.

    Read the article

  • Inheritance vs composition in this example

    - by Gerenuk
    I'm wondering about the differences between inheritance and composition, examined with concrete, code-relevant arguments. In particular, my example was inheritance:

        class Do:
            def do(self):
                self.doA()
                self.doB()
            def doA(self):
                pass
            def doB(self):
                pass

        class MyDo(Do):
            def doA(self):
                print("A")
            def doB(self):
                print("B")

        x = MyDo()

    vs composition:

        class Do:
            def __init__(self, a, b):
                self.a = a
                self.b = b
            def do(self):
                self.a.do()
                self.b.do()

        x = Do(DoA(), DoB())

    (Note that for composition I'm missing code, so it's not actually shorter.) Can you name particular advantages of one or the other? I'm thinking of:
    * composition is useful if you plan to reuse DoA() in another context
    * inheritance seems easier; no additional references/variables/initialization
    * method doA can access internal variables (be it a good or bad thing :) )
    * inheritance groups logic A and B together, even though you could equally introduce a grouped delegate object
    * inheritance provides a preset class for the users; with composition you'd have to encapsulate the initialization in a factory so that the user doesn't have to assemble the logic and the skeleton
    * ...
    Basically I'd like to examine the implications of inheritance vs composition. I have often heard that composition is preferred, but I'd like to understand that by example. Of course I can always start with one and refactor to the other later.
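    For completeness, the collaborators the composition example assumes (and which the note above says are missing) could be as simple as this sketch:

        class DoA:
            def do(self):
                print("A")

        class DoB:
            def do(self):
                print("B")

        x = Do(DoA(), DoB())
        x.do()  # prints A, then B, matching the inheritance version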

    Read the article

  • CodePlex learns to talk to other services!

    CodePlex is now able to talk to other services! For example, if you want CodePlex to tell Trello to update cards on your Trello board, it can do it. Or if you want CodePlex to notify your Campfire chat room when updates are pushed, it can do that too. To start off, we are going to be adding support for the following services:
    * Campfire – Notify a Campfire chat room when commits occur
    * HipChat – Notify a HipChat chat room when commits occur
    * Trello – Add commit summaries to Trello cards by referencing those cards in commit messages
    * Twitter – Notify your Twitter followers when updates are pushed to your project
    In addition, we will continue to support our existing integrations:
    * Windows Azure – Continuously deploy to Windows Azure on pushes (for Git and Hg projects)
    * AppHarbor – Continuously deploy to AppHarbor on pushes
    To set up these integrations for your project, navigate to the project settings page as a project coordinator, and click on the services section. While we are starting with these six services, the infrastructure is now in place to allow us to quickly roll out new integrations. We would love to hear which services and integrations you would like to see most on our suggestions page. We realize that there are some services and URLs that only make sense for your project to send notifications to. To support this scenario, we plan to add generic web hooks in the near future. Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always, you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @Rick_Marron.

    Read the article

  • Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reached a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots into GIT. The end result should be that the latest code is the same as the last snapshot, and the other editions are accessible and numbered as such. Some other requirements:
    * the latest edition should not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but don't appear in later ones (e.g. due to refactoring etc.) should not appear in the latest edition of the code.
    * meanwhile, there should be continuity between files that do persist between snapshots. I would like GIT to know that there are previous editions of these files and not treat them as brand-new files within each edition.
    Some background about my aim: I need to formally revision control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version control would be highly recommended. I am evaluating some of the currently popular version control systems (Subversion and GIT), BUT I definitely need a working solution in GIT as well as Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering. (I have posted an answer separately for each tool so separate camps of folks who have expertise in GIT and Subversion will be able to give focused answers on one or the other.) The same but separate question for Subversion: Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?
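    A minimal sketch of the kind of import script being asked for, in Python (the paths and the one-folder-per-snapshot layout are assumptions). Emptying the work tree before copying each snapshot in is what keeps the history non-cumulative (deletions get committed too), while git's content tracking gives continuity for files that persist between snapshots:

        import os, shutil, subprocess

        SNAPSHOTS = "/path/to/snapshots"  # hypothetical: contains 20100523/, 20100614/, ...
        REPO = "/path/to/repo"            # hypothetical: an empty directory after 'git init'

        def git(*args):
            subprocess.check_call(("git",) + args, cwd=REPO)

        for snap in sorted(os.listdir(SNAPSHOTS)):
            # Clear everything except .git so files dropped between snapshots
            # do not accumulate in later commits.
            for entry in os.listdir(REPO):
                if entry == ".git":
                    continue
                path = os.path.join(REPO, entry)
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)
            shutil.copytree(os.path.join(SNAPSHOTS, snap), REPO, dirs_exist_ok=True)
            git("add", "-A")  # stages additions, modifications and deletions
            git("commit", "-m", f"Snapshot {snap}")

    (dirs_exist_ok needs Python 3.8+; on older versions, copy the tree manually.)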

    Read the article

  • Most effective way to do daily standup meeting when a few people are remote

    - by Burhan Ali
    I am a software developer in a small team of seven. We are not an Agile (with a big 'A') team, but we are experimenting with some aspects of agile. One of these is the daily "standup" meeting. The difficulty here is that for two days of the week we have at least one person working from home, so the full team isn't available in the same room. What is the best way to carry out a daily standup in this situation? Some facts that may be relevant:
    * We all work in a single open-plan room.
    * We use Skype in our company.
    * We don't have any video conferencing capability.
    * We all work the same hours, so there are no timezone complexities involved.
    * The development manager is one of the people who works from home one day a week.
    Things we have tried:
    * Conference call using Skype: this is tricky for those in the office, because you can hear people speak in the room and then a split second later through the headset. This can be very distracting.
    * Conference phone: awful experience. Hard to get them to work, and poor quality audio.
    * Text-based updates using Skype: this is not as engaging, and is no different from just firing off a status email in the morning.
    I have seen other questions about remote collaboration, but they are mainly about completely remote teams and/or teams that span multiple time zones. We are not affected by either of these problems. What can we do to make our standup meetings better in these circumstances?

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In What would you choose for your project between .NET and Java at this point in time? I say that I would consider "Will you always deploy to Windows?" to be the single most important (EDIT: technical) decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly; I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases: what .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux; they are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications? Edit: Mark H summarized it as: "If the claim is that 'I have a Windows application written in .NET, it should run on Mono', then no, it's not a valid claim - but Mono has made efforts to make porting such applications simpler."

    Read the article

  • How to read default key value with dconf or gsettings?

    - by Zta
    I would like to know the default value of a dconf/gsettings key. My question is a follow-up to the question below: Where can I get a list of SCHEMA / PATH / KEY to use with gsettings? What I'm trying to do is create a script that reads all my personal preferences, so I can back them up and restore them. I plan to iterate through all keys, like the script above, see which keys have been changed from their default value, and make a note of these so they can be restored later. I see that dconf-editor displays the keys' default values, but I'd very much like to script this. Also, I don't see how parsing the schemas in /usr/share/glib-2.0/schemas/ can be automated. Maybe someone can help? gsettings get-default|list-defaults would be nice =) (Geesh, it was much easier in the old days when you just kept your ~/.somethingrc in subversion ... =\ ) Based on the answer given below, I've updated the script to print the schema, key, key's data type, default value, and actual value:

        #!/bin/bash
        for schema in $(gsettings list-schemas | sort); do
            for key in $(gsettings list-keys $schema | sort); do
                type="$(gsettings range $schema $key | tr "\n" " ")"
                default="$(XDG_CONFIG_HOME=/tmp/ gsettings get $schema $key | tr "\n" " ")"
                value="$(gsettings get $schema $key | tr "\n" " ")"
                echo "$schema :: $key :: $type :: $default :: $value"
            done
        done

    This workaround basically covers what I need. I'll continue working on the backup script from here.

    Read the article

  • Is using the student version of 3DS Max and Unity3d legal?

    - by SubZeron
    I am developing an indie game together with my friend using the Unity3D engine. I bought Silo 3D for modeling two months ago, and for texturing I use 3D-Coat. We plan to sell our game in the future. For the animations I work with 3DS Max (only the animation part). My question is: can I work with a student's license? The license for the commercial version is too expensive for me; I am still at university and I cannot buy the 3DS Max license, which costs 4000 €. As an alternative I have the choice between Blender (I can't work with this software and don't have time to invest in learning a new program) and trueSpace (it can't export FBX animations, especially with bones), so for me 3DS Max is the best choice to be effective and quick. Is it possible to tell, when I export my FBX characters from 3DS Max to Unity3D? I mean, can they find out that I used the student license of 3DS Max for the animations after the release of the game? Maybe with the help of DRM? Can I solve that problem by exporting the FBX from 3DS Max to Blender and after that exporting the same FBX to Unity3D?

    Read the article

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some useful data for quickly monitoring SSIS package executions in SQL Server 2012 and later: get-ssis-execution-status and get-ssis-data-pumped-rows. I've started to use Gist, since it comes in very handy for these "quick'n'dirty" scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time; I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine). Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with:
    * the latest successful and running executions (so that one can get an idea of the expected run time)
    * error messages
    * warning messages related to duplicate rows found in lookups
    The second one ("get-ssis-data-pumped-rows") returns information on Data Flow status. Here there's something interesting, IMHO. Nothing exceptional, let that be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the Data Flow component. This helps to quickly understand how many rows have been sent and where, without having to increase the logging level. Enjoy! PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course, any feedback on this is welcome.

    Read the article

  • Consolidating multiple domain names

    - by Mike
    I have a client that has three separately hosted copies of their website, each on a separate domain name. The websites are all essentially the same, bar a few discrepancies caused by badly managed updates in the past. I will soon be launching a completely new website for them, at which point all three domain names are to resolve to the same web server. One domain name will become the default domain name that they refer to in all their literature, and the other two will simply be used as catch-alls for old links, bookmarks, and so on. I would like to know what people consider the best route to achieve this. My plan so far is:
    1. Get the new site up and running on the new web server.
    2. Change the relevant A record of the default domain name to point to the new web server.
    3. a) Keep the existing hosting accounts in operation. Create a list of 301 redirects from old page names on the old sites to new page names on the new site; or
       b) Configure CNAME records for the non-default domain names, each pointing to the new web server. Create a list of 301 redirects on the new site that redirect from old page names to new page names.
    If my understanding is correct, 3a will help to maintain whatever search engine rankings the sites already have (I know it's not going to be perfect), while at the same time informing search engines that the old domain names are no longer in use. What's a good approach to take here?

    Read the article

  • Implementing Circle Physics in Java

    - by Shijima
    I am working on a simple physics-based game where two balls bounce off each other. I am following a tutorial, 2-Dimensional Elastic Collisions Without Trigonometry, for the collision reactions. I am using Vector2 from the LIBGDX library to handle vectors. I am a bit confused about how to implement step 6 from the tutorial in Java. Below is my current code; please note that the code strictly follows the tutorial and there are redundant pieces which I plan to refactor later. Note: references to this refer to ball 1, and ball refers to ball 2.

        /*
         * Step 1
         *
         * Find the normal, unit normal and unit tangential vectors
         */
        Vector2 n = new Vector2(this.position[0] - ball.position[0],
                                this.position[1] - ball.position[1]);
        Vector2 un = n.normalize();
        Vector2 ut = new Vector2(-un.y, un.x);

        /*
         * Step 2
         *
         * Create the initial (before collision) velocity vectors
         */
        Vector2 v1 = this.velocity;
        Vector2 v2 = ball.velocity;

        /*
         * Step 3
         *
         * Resolve the velocity vectors into normal and tangential components
         */
        float v1n = un.dot(v1);
        float v1t = ut.dot(v1);
        float v2n = un.dot(v2);
        float v2t = ut.dot(v2);

        /*
         * Step 4
         *
         * Find the new tangential velocities after the collision
         */
        float v1tPrime = v1t;
        float v2tPrime = v2t;

        /*
         * Step 5
         *
         * Find the new normal velocities. Note the outer parentheses: the
         * tutorial's formula is (v1n*(m1 - m2) + 2*m2*v2n) / (m1 + m2), so
         * the whole numerator must be divided by the mass sum.
         */
        float v1nPrime = (v1n * (this.mass - ball.mass) + 2 * ball.mass * v2n)
                         / (this.mass + ball.mass);
        float v2nPrime = (v2n * (ball.mass - this.mass) + 2 * this.mass * v1n)
                         / (this.mass + ball.mass);

        /*
         * Step 6
         *
         * Convert the scalar normal and tangential velocities into vectors???
         */
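        /*
         * A possible completion of step 6 (an illustrative sketch, not part
         * of the original question): the tutorial's step 6 scales the unit
         * normal by the new scalar normal velocity and the unit tangent by
         * the new scalar tangential velocity, turning the scalars back into
         * vectors; step 7 then sums them into each ball's final velocity.
         */
        Vector2 v1Prime = new Vector2(un.x * v1nPrime + ut.x * v1tPrime,
                                      un.y * v1nPrime + ut.y * v1tPrime);
        Vector2 v2Prime = new Vector2(un.x * v2nPrime + ut.x * v2tPrime,
                                      un.y * v2nPrime + ut.y * v2tPrime);
        this.velocity = v1Prime;  // ball 1's post-collision velocity
        ball.velocity = v2Prime;  // ball 2's post-collision velocity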

    Read the article

  • SQL Server DBA - How to get a good one!

    - by ETFairfax
    I'm a lone developer. I am currently developing an application which is seeing me get way, way out of my depth when it comes to SQL Server DBA work, and I have come to realise that I should hire a DBA to help me (which has full support from the company). The problem is: who? This SO thread sees someone hire a DBA only to realise that they will probably cause more harm than good! Also, I have just had a bad experience with an ASP.NET/C# contractor who let us down. So, can anyone out there on SO either...
    a) Offer their services.
    b) Forward me on to someone who could help.
    c) Give some tips on vetting a DBA.
    I know this isn't a recruitment site, so maybe some good answers for c) would be a benefit for other readers! BTW: the database is SQL Server 2008. I'm running into performance issues (mainly timeouts) which I think would be sorted out by some proper indexing. I would also need the DBA to provide some sort of maintenance plan, and to review how our database will deal with what we intend to throw at it in the future!

    Read the article

  • m2e-wtp proposal to move into an Eclipse project

    - by gstachni
    I am attending EclipseCon 2012 in Reston VA this week and had a chance to attend Jason van Zyl's session on M2E. One announcement during the presentation was the plan to move the m2e-wtp plugin into the m2e project at Eclipse. The project proposal was posted yesterday afternoon for those who are interested - http://www.eclipse.org/proposals/technology.m2e.m2e-wtp/. Jason said that m2e-wtp will not make it into the m2e Juno release but they expect the code to be transitioned over to Eclipse over the next six months. Getting the maven and wtp project models to play nicely together has been an issue for quite some time for our users. Hopefully this is a good step toward resolving those issues.

    Read the article

  • Fresh install on SSD with Ubuntu and Windows Vista, using whole disk encryption for Ubuntu

    - by nategator
    I would like to do a fresh install on an OCZ Vertex Plus R2 60GB SSD I purchased on the cheap. Since AES encryption looks like it may not work optimally on this drive, I would like to set up a dual boot of Windows Vista (the only Windows copy I have for clean-install purposes) and Ubuntu 12.04, with the best encryption scheme possible. My plan is to have Windows around just in case I need to use a program that won't work with Wine, and Ubuntu as my daily OS with all of my information secured in case the laptop is ever stolen or sold. Although this setup will not provide a lot of space, I think I can squeeze in both OSes and have enough left for second-computer office tasks. So, my questions are:
    1. Which OS should I install first, Ubuntu or Vista?
    2. Any special considerations when partitioning the drive?
    3. How should I install Ubuntu to ensure full-disk encryption for the Linux partition(s) and/or my daily computing?
    4. Is there a significant performance upgrade from a solo install of Ubuntu instead of a dual-boot setup? Will TRIM, for example, work correctly?
    5. Are there any significant security concerns with going the dual-boot route, other than the fact that any activity on Windows may be fully recoverable if the drive is stolen or sold?
    Thanks in advance!

    Read the article

  • Website Hosting/Registration [closed]

    - by Ricko M
    Possible Duplicate: How to find web hosting that meets my requirements? I am planning to launch a website soon. I wanted to know what solutions are available for hosting and registration. Starting with domain registration: is there any site you have used/preferred? I am considering either GoDaddy or 123-reg. Does it even make any difference which you choose? Is there any fine print I need to worry about? I am based in the UK; not sure if that helps in resolving any issues if encountered. Does my hosting need to be done at the site where I purchased my registration? If not, will there be any transfer fees if I change my hosting? Can I just register the name now and worry about hosting later? At the moment, I plan to have the site up and running using either some sort of a tool or a template, and perhaps add the bells and whistles down the line. I understand 123-reg has its own builder tool available. There are a few solutions suggested, like WordPress, Drupal & Joomla... I am a C++ developer, not a web programmer, but I do feel the need to open the hood and make changes if I see fit. So I guess I am looking for a solution where I can easily drag and drop the widgets I need and, when the time comes, customize it. Which CMS would you recommend? Extras: what extras do you need to get? I was advised to get hold of WHOIS privacy to keep the spambots away; anything else you guys would recommend I keep my eyes open for before I sign on the dotted line?

    Read the article
