Search Results

Search found 3749 results on 150 pages for 'updating'.


  • Logic / Render phases with a single thread

    - by DevilWithin
    The question I have may generate different opinions from different developers, but I'd still like an answer. It's all about the updating and rendering steps of the game loop, and their use in multi- and single-threaded environments. Currently there is one thread running, which takes care of sequentially executing events, logic and rendering. Sometimes the logic part may wish to change the game state to something else, and in between do some loading of files. The result is that the game hangs completely while loading, and then proceeds to normal rendering of the new state. To work around this, I could make another thread, do the loading there while the main thread renders a smooth loading animation, and then proceed normally. The real question is about what happens if I don't create another thread. I could refresh the screen from the logic step and provide some basic loading screen, which would be updated not so smoothly while the files load. In fact, this approach is not loved by a lot of developers, as it mixes render code into the logic step, which may cause problems of different sorts. Hope it's clear!
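
    For the threaded variant described above, a minimal sketch of the idea (in Java, with hypothetical loadAssetsFromDisk/drawLoadingAnimation/switchToGameState methods standing in for the real game code) could hand the file loading to a worker thread while the single main loop keeps drawing the loading screen:

        import java.util.concurrent.atomic.AtomicBoolean;

        public class LoadingState {
            private final AtomicBoolean loadingDone = new AtomicBoolean(false);

            // Called once when the game switches into this state.
            public void enter() {
                Thread loader = new Thread(() -> {
                    loadAssetsFromDisk();      // the slow file I/O, off the main thread
                    loadingDone.set(true);     // flag polled by the main loop
                }, "asset-loader");
                loader.setDaemon(true);
                loader.start();
            }

            // Called every frame by the single main (logic + render) thread.
            public void updateAndRender(float delta) {
                if (loadingDone.get()) {
                    switchToGameState();          // swap in the fully loaded state
                } else {
                    drawLoadingAnimation(delta);  // stays smooth, no file I/O here
                }
            }

            private void loadAssetsFromDisk()              { /* ... */ }
            private void switchToGameState()               { /* ... */ }
            private void drawLoadingAnimation(float delta) { /* ... */ }
        }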

    Read the article

  • loading splash screen takes priority over terminal or windows manager while running elsa

    - by schonjones
    I recently installed e17 and was trying to set up defaults to use elsa and ecomorph over the standard compiz, as compiz constantly crashes since updating to 12.04. If elsa is installed, the loading screen hangs and never reaches the login; however, I can get to a terminal or the e17 login instead of the standard gdm that usually shows up. Within a second the screen goes back to the loading screen. I can still type and log in, as well as run commands in the terminal, but all I see is the loading screen. Switching between terminals, I can confirm my commands before it switches back to the loading screen. If I remove elsa, the loading screen still hangs, but I can get to a terminal login and run lightdm to start my session with no problems. I have multiple DEs installed and am unsure which loading screen is coming up; I think it's the KDE screen, and grub comes up with a Debian background, if that helps. I'm not sure if I can switch the loading screen and resolve this issue, or if I'm just going to have to scrap using elsa and get lightdm to load on boot again. Elsa would be my preference. I don't have the space to back up my files for a complete reinstall. Please help!

    Read the article

  • Stylecop 4.7.37.0 has been released

    - by TATWORTH
    StyleCop 4.7.37.0 has been released at http://stylecop.codeplex.com/releases/view/79972. The release notes follow:
    - Add docs for new SA1650 spelling rule.
    - Fix for 7395. Don't remove parentheses around await expressions.
    - Insert a returns element into docs within a see element.
    - Update our tools folder StyleCop dlls.
    - Fix for 7392. Insert generic type docs for return types correctly.
    - Fix for 7393. Allow documentation elements with attributes to end the string and still be valid.
    - Make sure the MSBuild Task logs the warning id and type of exception. Unless the description field holds all this info, VS cannot show the text in the Error List.
    - Load custom dictionaries for multiple cultures. For a culture like en-GB, we load CustomDictionary.xml, then look for CustomDictionary.en-GB.xml and then CustomDictionary.en.xml.
    - Update standard shipping dictionaries.
    - Element documentation spelling fixes.
    - Reduce the standard dictionary.
    - Update our own devbuild StyleCop checks.
    - Don't check spelling of xml documentation attributes or anything inside <c> or <code> elements.
    - Update styling; styling update.
    - Add timestamps for all the dependent files into the StyleCopResults.cache.
    - Add a FileSystemWatcher to all custom dictionary files.
    - Write out the full violation into the StyleCopResults.cache.
    - Change a rule's description text.
    - Styling fixes.
    - NEW RULE: Check Spelling Of Element Documentation. Fix over 2000 spelling errors in our source code. Update the VS addin to show the rule violation in more detail. Add spelling checker to the deployment.
    - Set our own Culture to en-US.
    - Documentation spelling fixes.
    - First draft of the documentation spelling checker.
    - Fix for 7325. Don't throw 1126 in goto statements.
    - Fix for 7090. Add TargetsDir to registry during install.
    - Fix for 7060. Sort usings after moving them inside namespace.
    - Fix FxCop issues.
    - Fix for 7389. Detect CpuCount on Unix/Mac.
    - Fix for 6788. Allow opening curly brackets for scope. Added new tests.
    - Updating constants.
    - Fix for 7167. Show version number of StyleCop in VS Help window.
    - Only output StyleCop excluded files if there are any.

    Read the article

  • the correct way to deal with gtk_events_pending and gtk_main_iteration

    - by abd alsalam
    I have a program that sends files and I want to make a progress bar for it, but the progress bar is only updated after the transfer completes. So I put gtk_events_pending() and gtk_main_iteration() calls in the sending loop to go back to the GTK main loop and update the progress bar, but that also seems not to work. Here is a snippet from my code (EDIT: the send function runs in a separate thread):

        float Percent = 0.0;
        float Interval = 0.0;

        /* the sending function */
        gint SendTheFile()
        {
            char FileBlockBuffer[512];
            bzero(FileBlockBuffer, 512);
            int FileBlockSize;
            FILE *FilePointer;
            int filesize = 0;
            FilePointer = fopen(LocalFileName, "r");
            struct stat st;
            stat(LocalFileName, &st);
            filesize = st.st_size;
            Interval = (512 / (float)filesize);
            while ((FileBlockSize = fread(FileBlockBuffer, sizeof(char), 512, FilePointer)) > 0)
            {
                send(SocketDiscriptor, FileBlockBuffer, FileBlockSize, 0);
                bzero(FileBlockBuffer, 512);
                Percent = Percent + Interval;
                if (Percent > 1.0) Percent = 0.0;
                while (gtk_events_pending())
                {
                    gtk_main_iteration();
                }
            }
        }

        /* update progress bar function */
        gint UpdateProgressBar(gpointer data)
        {
            gtk_progress_bar_set_fraction(GTK_PROGRESS_BAR(data), Percent);
        }

        /* updating the progress bar in the main function */
        g_timeout_add(50, (GSourceFunc)UpdateProgressBar, SendFileProgressBar);

    Read the article

  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM: Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager? Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator – they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode which live migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of Oracle VM Documentation so I won’t spend time detailing the steps. However, configuring the Server Update Manager is exceedingly simple. Simply navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager and ensure the following values are added to the text boxes as shown in the illustration below: YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64 YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris 

    Read the article

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications going against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration, because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but I would like to stay away from pessimistic locking at the database level, mostly for performance reasons. We use Hibernate for ORM. Does it make sense to stand up a single (perhaps synchronous) message queue application into which this method could be moved to ensure synchronized access, as opposed to each cluster node using its own, which is a clear race condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.
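
    One middle ground that is often suggested in this situation, since Hibernate's optimistic locking is already in place, is to keep the version checks and simply retry the conflicting unit of work rather than treating StaleObjectStateException as fatal. A minimal sketch (in Java, with a hypothetical versioned Parent entity and applyChanges helper that are not from the original post) might look like:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.StaleObjectStateException;
        import org.hibernate.Transaction;

        public class RetryingUpdater {

            // Hypothetical entity standing in for the real mapped class (assumed to carry a version column).
            public static class Parent { /* mapped fields, including the version property */ }

            private static final int MAX_ATTEMPTS = 3;
            private final SessionFactory sessionFactory;

            public RetryingUpdater(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            // Re-runs the whole unit of work when the version check fails,
            // instead of surfacing StaleObjectStateException to the caller.
            public void updateParentAndChildren(long parentId) {
                for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                    Session session = sessionFactory.openSession();
                    Transaction tx = session.beginTransaction();
                    try {
                        Parent parent = (Parent) session.get(Parent.class, parentId); // re-read fresh state
                        applyChanges(parent);          // hypothetical: the business logic on parent + children
                        tx.commit();
                        return;                        // success
                    } catch (StaleObjectStateException e) {
                        tx.rollback();                 // another node won the race; try again on fresh data
                    } finally {
                        session.close();
                    }
                }
                throw new IllegalStateException("Gave up after " + MAX_ATTEMPTS + " optimistic-lock conflicts");
            }

            private void applyChanges(Parent parent) { /* ... */ }
        }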

    Read the article

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design and development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating; that was a year ago, after all. But I opened it today, and really, there's not much I'd change. Sure, I could add some nuance regarding types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much OK as is. Click on the image below to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And what those options are changes as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still OK a year after I last updated it. That said, if you find something missing or out of place, let me know :)

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. This allows you to automate Life Cycle Management, code comparison, repetitive code generation and change of your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating for heterogeneous data without replicating and moving data. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • Docker vs ESXi for Startup Projects - Deploying Code for Dev Testing

    - by JasonG
    Why hello there little programmer dude! I have a question for you and all of your experience and knowledge. I have an ESXi whitebox that I built, which is an 8 dude that sits in the corner. I made a mistake recently and took the key that had ESXi, formatted it, and used it for something else. No big deal, because the last project I worked on had stalled out. I'm about to pick up another project and now I need to spin up a whole bunch of stuff for CI, QA + DB, ticket tracker, wikis, etc. I've been hearing a lot about Docker recently, and as this is just a consumer-grade machine, I'm wondering if it may make more sense for me to use Docker on CoreOS and then put everything there: Bamboo or Hudson, JIRA, Confluence, Postgres for the tools to use, then a QA env. I can't really seem to find any documents that directly compare traditional VM infrastructure vs Docker solutions, and I'm wondering if it is fair to compare. Is there any reason why CoreOS with containers would be a strictly worse solution? Or do you have any insight into why I may want to stick with ESXi? I've looked on multiple occasions and can't find a good reason not to. I'm not going to run a production env on the server, so I don't need HA when updating security or the OS, for example, where ESXi would allow me to restart one VM at a time. I can just shut the thing down and bring it back up if I need a reboot, no problem. So what's up with this container stuff? Is it a fair replacement for ESXi? I'm guessing the Atlassian products would run much better and my RAM would go a lot farther using Docker. Probably the CPU would run much cooler too, and my expensive HDD space would be better utilized.

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background
    I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program to be more friendly to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product is interfaced through a proprietary DLL that handles its own serial communication.
    The hard part
    The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product needs to go from position 0 to position 10,000 to the tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose Producer Consumer model where the Producer thread loops through reading the sensors and sets the outputs. The Consumer thread executes functions containing timed loops that poll the Producer for current data and execute movement commands as required. The UI thread polls both threads for updating some gauges indicating current test progress.
    Unsure where to start
    Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
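
    As a rough illustration of the producer/consumer split described above (sketched in Java rather than C#, with hypothetical instrument-read and load-ramp calls), a blocking queue lets the polling loop and the timed test-sequence logic run at their own pace:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class TestBenchSketch {
            // One immutable snapshot of everything the producer polled.
            record Sample(long timestampMillis, double positionCounts, double currentAmps) {}

            private final BlockingQueue<Sample> samples = new ArrayBlockingQueue<>(1024);
            private volatile boolean running = true;

            // Producer: polls the instruments as fast as they allow and publishes snapshots.
            void producerLoop() throws InterruptedException {
                while (running) {
                    Sample s = new Sample(System.currentTimeMillis(),
                                          readScalePosition(),   // hypothetical serial-scale read
                                          readDaqCurrent());     // hypothetical NI DAQ analog read
                    samples.put(s);                              // blocks if the consumer falls behind
                }
            }

            // Consumer: runs the timed test sequence against the latest samples.
            void consumerLoop() throws InterruptedException {
                while (running) {
                    Sample s = samples.take();
                    if (s.positionCounts() >= 6000 && s.positionCounts() < 8000) {
                        rampLoadUp();                            // hypothetical DAQ analog output
                    } else if (s.positionCounts() >= 8000) {
                        rampLoadDown();
                    }
                }
            }

            private double readScalePosition() { return 0; }
            private double readDaqCurrent()    { return 0; }
            private void rampLoadUp()          { }
            private void rampLoadDown()        { }
        }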

    Read the article

  • Are you ready to take a walk in the clouds?

    - by Steve Loethen
    Cloud computing is here, whether we want it or not. When I say "a walk in the clouds" I am not talking about a pleasant romantic comedy, but a real alternative to hosting applications on-premise. For years we have had the power to host our web sites on remote systems. Sure, challenges existed, and it was mostly just web sites. I could, with a few clicks, create an account at any of a myriad of web host sites, put my site in the hands of a remote hosting company, and boom, I was a site on the internet. But choices, power, and management were limited. Now we have a set of services that give us the power and control we love, but with the scalability of the data center. My personal web site is hosted on a laptop running Hyper-V in my basement. I have to manage the machine, patch it, make sure it is powered up. This is fine for the "hello, this is my dog Skippy" site that I maintain. If the football pool I run has an issue, one of the 10 users I have calls or emails me and I go check it out. All is well. But this falls well below the needs of even the simplest of enterprises. A business needs a stronger datacenter, a better pipe to the world. Do I really want to base my business on dynamic DNS and a DSL line from the local phone company? Cloud computing gives us most of what I value (control, a DB of my own, updating my site from Visual Studio). Come learn how this technology can transform your business. If you are a Microsoft shop, or are interested in Microsoft in the cloud, a free 2-day Azure training class is being conducted in Kansas City on April 8 and 9. http://www.azurebootcamp.com/city/kansascity Hope to see you there. If you come, make sure you look me up.

    Read the article

  • ALPS touchpad stops working after reboot

    - by user58289
    I recently upgraded to 12.04 LTS. My Compaq Presario CQ-40 324la touchpad worked after the first restart following installation, but after restarting a second time the touchpad is completely disabled, without my having changed anything in the system. I've tried the usual solutions but haven't had good results. The applications I've installed (from those "solutions") are: Pointing Devices, Synaptiks, and Dconf-Tools. I've also tried updating GRUB, adding this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i8042.nomux". I also tried these commands in a terminal; neither has worked: sudo modprobe -r psmouse, then sudo modprobe psmouse proto=imps.

    Read the article

  • Secunia Personal Software Inspector (PSI) 2.0

    - by TATWORTH
    Secunia Personal Software Inspector is now available in an updated version that is free for personal use. The home page says: "The Secunia PSI is a FREE security tool designed to detect vulnerable and out-dated programs and plug-ins which expose your PC to attacks. Attacks exploiting vulnerable programs and plug-ins are rarely blocked by traditional anti-virus and are therefore increasingly "popular" among criminals. The only solution to block these kinds of attacks is to apply security updates, commonly referred to as patches. Patches are offered free-of-charge by most software vendors; however, finding all these patches is a tedious and time consuming task. Secunia PSI automates this and alerts you when your programs and plug-ins require updating to stay secure. Download the Secunia PSI now and secure your PC today - free-of-charge." I have used this for some time on my home PC and have found it very useful in identifying required updates. I use Google Chrome, but I found that whenever a new version is issued, the old version is not de-installed. Secunia PSI helps me locate the old versions and get rid of them.

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:
    Separate editions for each library version
    I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests have definitely executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.
    One supported version, but others tested
    I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try and make this testing automatic by having some non-deployed Maven projects that import the software library, the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds. I welcome comments on which approach is better or a suggestion for a different approach entirely. I'm leaning towards the second option.

    Read the article

  • A Fresh Start

    - by Laila
    As you may already be aware, I'm no longer responsible for the .NET Reflector newsletter. That publication is now in the very capable hands of the Reflector team. But fear not; starting in early April, I'll be launching a brand new .NET Newsletter, and I invite you to enjoy the very first edition by subscribing to our new mailing list, or by updating your Simple-Talk subscriptions and joining the .NET Newsletter mailing list. With a fresh and snappy design (it might even be described as idiosyncratic, but I can say no more at this stage), we'll be making a brand new start. Each month, a member of my team (that's the Red Gate .NET team) will host the .NET Newsletter, bringing you the choicest cuts of breaking news and the very best .NET content from Simple-Talk, alongside details of hot upcoming events. To top it off, not only will you be among the first to get access to free resources (including free wall-charts, training videos and eBooks), but you'll also get exclusive access to betas, early access programs, and special offers. We can't wait to share the new design and exciting new content with you! If you have any questions about the changes to the newsletter, please feel free to send an email to [email protected] or post a comment on my blog. If I don't hear from you before next month, then I'll simply say that I hope you enjoy the new look. Cheers, Laila

    Read the article

  • eSTEP TechCast - December 2012

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday 6th of December and would be happy if you could join. Please see below the details for the next TechCast.
    Date and time: Thursday, 06 December 2012, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)
    Title: Innovations with Oracle Solaris Cluster 4
    Abstract: Oracle Solaris Cluster 4.0 is the version of Solaris Cluster that runs with Oracle Solaris 11. In this webcast we will focus on the integration of the cluster software with the IPS packaging system of Solaris 11, which makes installing and updating the software much easier and much more reliable, especially with virtualization technologies involved. Our webcast will also reflect new versions of Oracle Solaris Cluster if they are announced in the meantime.
    Target audience: Tech Presales
    Speaker: Hartmut Streppel
    Call info:
    Call-in toll-free number: 08006948154 (United Kingdom)
    Call-in toll-free number: +44-2081181001 (United Kingdom)
    Conference Code: 803 594 3
    Security Passcode: 9876
    Webex Info (Oracle Web Conference):
    Meeting Number: 255 760 510
    Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from already delivered eSTEP TechCasts. Use your email address and PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • VMware Kernel Module Updater hangs on Ubuntu 13.04

    VMware Player has a nice auto-detection of kernel changes, and requests the user to compile the required modules in order to load them. This happens from time to time after a regular update of your system. Usually, the VMware Kernel Module Updater dialog pops up, asks for root access authentication, and completes the compilation; VMware Player or Workstation checks whether modules for the active kernel are available. In theory this is supposed to work flawlessly, but in reality there are occasional pitfalls. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn't start as expected. When you launch VMware Player as super user (root), the dialog stalls like so: VMware Kernel Module Updater stalls while stopping the services. Prior to version 5.x of VMware Player or version 7.x of VMware Workstation you would run a command like:
        $ sudo vmware-config.pl
    to resolve the module version conflict, but this doesn't work here anyway.
    Solution
    Instead, you have to execute the following line in a terminal or console window:
        $ sudo vmware-modconfig --console --install-all
    Those switches are (as of writing this article) not documented in the output of the --help switch. But VMware already documented this procedure in their knowledge base: VMware Workstation stops functioning after updating the kernel on a Linux host (1002411).
    Update
    As of today I had the first kernel upgrade, to version 3.8.0-22, in Ubuntu 13.04. Don't even try it without vmware-modconfig...

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with Snapshot Isolation Mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5 minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands:
        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON
    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to extremely strongly recommend that you use solid state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. As part of this process, our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" and has a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the meta data is to improve our page rankings and try to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranging heading tags, a new menu on all pages, new content in the footer, extra pieces of dynamic content relating to other pages). In this new site process I plan to use 301 redirects for all the old URLs pointing to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings still have an effect?

    Read the article

  • Ubuntu 12.04 and Nvidia GTX 550 Ti

    - by Jim
    OK, I'm currently trying to install Ubuntu x64 Server 12.04 onto the following machine: Intel Core i5-2320 3.0GHz LGA1155 6MB, 8 GB DDR3 RAM, Gigabyte Z68P-DS3 S1155 Intel Z68 DDR3 ATX M/B, OCZ 60 GB SSD, 3x Samsung 2TB drives in a RAID 5 array (via M/B). Now what I think is causing the issue is the following: EVGA GeForce GTX 550 Ti 951MHz 1GB PCI-Express HDMI FPB. As the server CD works in text mode, I haven't had a problem with actually installing Ubuntu. Partitioned with: 1GB /boot SSD, 59GB / SSD, 10GB swap RAID5, ~4TB /home RAID5. On a straight boot, you briefly see the GRUB menu, followed by a blank screen. The keyboard and mouse blink as they are initialised, but no sign of life from the screen. Followed by a bit of research (otherwise known as Google)... Booted with quiet splash nomodeset. Now I have a fully working Linux distro at the command prompt. I then proceeded to try and update the Nvidia drivers with apt-get (after updating repositories etc.) and rebooting. Still the same problem. I also tried reinstalling from the CD and installing said drivers in the install process before GRUB was installed; still the same symptoms. Does anybody have any solutions? I'm at my wits' end here; I bought this machine to be a Linux server / tinkering machine and have just spent 4-5 hours trying to just get a basic install working.

    Read the article

  • Bitmap Font Displays in Center Always Without Coding it Manually (Fix Coordinate Problem on Text)

    - by David Dimalanta
    Is there a way to keep the text centered without manually coding it each time, especially when making an update? I'm making a display for the highest score. Let's say the score is 9. However, if the score is 9,999,999, the text still displays at the same fixed X and Y coordinates. Is there really a way to keep the text centered when a player beats the world record? Here's my code inside the SpriteBatch block:

        font.setScale(1.5f);
        font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10);
        font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8);
        batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purposes only.
        // Where 9999999 is a new record score, for example.

    Here's the image shown as an example. I added a red grid so that I could check whether the score display, when updated, will always be centered no matter how many digits it has. However, the position is fixed, so I have to figure out how to center it automatically regardless of the number of digits when updating for the new high score. I have used the LibGDX Preferences well enough to save and load records for the high score.
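
    One way to achieve this, depending on the LibGDX version in use, is to measure the rendered string first and derive the X coordinate from its width. A minimal sketch using GlyphLayout (newer LibGDX releases; older ones expose a similar BitmapFont.getBounds()) might look like this, where the 900-pixel virtual width is taken from the code above:

        import com.badlogic.gdx.graphics.g2d.BitmapFont;
        import com.badlogic.gdx.graphics.g2d.GlyphLayout;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        public class ScoreRenderer {
            private static final float VIRTUAL_WIDTH = 900f;   // assumed virtual screen width from the post

            private final GlyphLayout layout = new GlyphLayout();

            // Draws the score horizontally centered, however many digits it has.
            public void drawCenteredScore(SpriteBatch batch, BitmapFont font, long score, float y) {
                String text = String.valueOf(score);
                layout.setText(font, text);                    // measures the string with this font and scale
                float x = (VIRTUAL_WIDTH - layout.width) / 2f; // center horizontally
                font.draw(batch, layout, x, y);
            }
        }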

    Read the article

  • Provide an OnChange event for an internal property which is controlled externally?

    - by NGLN
    For fun and by request I am updating this ImageGrid component, a kind of listbox for images that has a FileNames property of type TStrings. For ease of writing, I have been misusing its FileNames.Objects property for bitmap storage. But since the TStrings type suggests that users of the component could or would want to use the Objects property for custom data, e.g. like TListBox.Items, I am rewriting the component to store the bitmaps elsewhere and leave FileNames.Objects untouched for unknown future usage. Now I am wondering whether to provide an OnChange event, and if so, whether to fire it when one or more FileNames.Objects changes. Trying to answer this myself, I dove into Delphi's own VCL and stumbled on:
    - TMemo: has an OnChange event, but ignores Lines.Objects
    - TListBox: has no OnChange event, but is capable of storing Items.Objects
    - TStringGrid: has no OnChange event, but is capable of storing Objects, Rows.Objects, Cols.Objects
    So now I am somewhat puzzled, because I cannot imagine Borland's developers left out events for several Objects properties merely for convenience. Sure, when a user changes a FileNames.Object in my component, he knows he does and could implement appropriate interaction himself. But wouldn't it be convenient if the component did so automatically? What would you expect from this component in this regard?

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations (one-to-one, one-to-many, many-to-one or many-to-many), NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. In particular, there are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:
    - None: Ignore; this is the default.
    - Save-Update: If the entity is being saved or updated, also save any related entities that are either not saved or have been modified, and associate these related entities with the root entity. Generally safe.
    - Delete: If the entity is being deleted, also delete the related entities. This is only useful for parent-child relations.
    - Delete-Orphan: Identical to Delete, with the addition that if one related entity is removed from the association (orphaned), it is also deleted. Also only for parent-child.
    - All: Combination of Save-Update and Delete; usually that's what we want (for parent-child relations, of course).
    - All-Delete-Orphan: Same as All, plus delete any related entities that lose their relationship.
    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity and not their related entities would result in a constraint violation on the database.

    Read the article

  • Kubuntu 11.10 very slow during file I/O

    - by dko
    After updating to Kubuntu 11.10, my file I/O performance has slowly gotten worse and worse. It is to the point where I'm getting 1 MB/s read/write speeds to the drive. If I download something, the whole machine becomes unresponsive, at times for up to 30 seconds. This usually causes a timeout in the download, and the download then stops. Even when extracting archive files, the computer is unusable on top of the terrible read/write speeds. It isn't the drive, as I have Windows installed as well and when I boot into it I have no issues with the drive. I did not have this issue using Kubuntu 11.04 and am thinking of downgrading. However, I'd much rather help out the Ubuntu community by working through these issues. I'm starting to think the new Linux kernel is just not handling file I/O well. During file I/O my system usage does pick up, but it is not 100% CPU usage. My system is as follows: Samsung 2 TB hard disk drive, AMD Phenom II x6 1055, 4 GB RAM (only one in use according to system monitor), ATI 5850 HD.

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach for reducing maintenance cost and the likelihood of needing refactoring, given the task of interpreting system state and settings for a lay user?
    1. Delegate the responsibility of interpreting the results of inspecting the system to the modules which perform these tasks, or
    2. Separate the concerns of interpretation and inspection into two modules?
    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling in which the interpretation module essentially has to know what it expects from the inspection routines and will have to adapt to OS changes just as much as the inspection will. I would normally choose the second option for the separation of concerns, foreseeing the possibility that inspection routines could be reused, but a developer updating the product to deal with a new OS feature would have to write not only an inspection routine but also an interpretation routine and link the two correctly; and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting or, worse yet, has to fix an inspection routine that broke after an OS patch. I wonder, is it better to have to patch one package a lot, or two packages, each somewhat less so?
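
    For what it's worth, a minimal sketch of the second option (in Java, with hypothetical SystemState, Inspector and Interpreter types that are not from the original post) shows how thin the surface between the two modules can be kept when the inspection result is a plain data object:

        import java.util.Map;

        public class SeparationSketch {
            // Plain data object produced by inspection; the only thing interpretation depends on.
            record SystemState(Map<String, String> settings) {}

            // Gathers raw facts from the OS; the only code that must track OS changes.
            interface Inspector {
                SystemState inspect();
            }

            // Turns raw facts into lay-user wording; depends only on SystemState keys.
            interface Interpreter {
                String describeForUser(SystemState state);
            }

            static String runCheck(Inspector inspector, Interpreter interpreter) {
                return interpreter.describeForUser(inspector.inspect());
            }
        }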

    Read the article
