Search Results

Search found 15129 results on 606 pages for 'orientation changes'.


  • What is a good way to store tilemap data?

    - by Stephen Tierney
    I'm developing a 2D platformer with some uni friends. We've based it upon the XNA Platformer Starter Kit, which uses .txt files to store the tile map. While this is simple, it does not give us enough control and flexibility with level design. Some examples: multiple layers of content require multiple files; each object is fixed onto the grid; it doesn't allow for rotation of objects; there is a limited number of characters; etc. So I'm doing some research into how to store the level data and map file. This concerns only the file-system storage of the tile maps, not the data structure to be used by the game while it is running. The tile map is loaded into a 2D array, so this question is about which source to fill the array from.
    Reasoning for DB: From my perspective I see less redundancy of data using a database to store the tile data. Tiles in the same x,y position with the same characteristics can be reused from level to level. It seems like it would be simple enough to write a method to retrieve all the tiles used in a particular level from the database.
    Reasoning for JSON/XML: Visually editable files, and changes can be tracked via SVN a lot more easily. But there is repeated content.
    Does either have any drawbacks (load times, access times, memory, etc.) compared to the other? And what is commonly used in the industry?
    Currently the file looks like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    1 - Player start point, X - Level Exit, . - Empty space, # - Platform, G - Gem
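    To make the JSON/XML option concrete, here is a minimal C# sketch of the kind of layered, rotation-aware schema the examples above are asking for. It is an illustration only, not an XNA or industry-standard format; the type and member names (TileMapFile, MapLayer, TilePlacement) are hypothetical:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        // Illustrative schema: one file holds several layers, and each tile
        // placement carries its own position and rotation instead of being
        // locked to a character grid.
        public class TilePlacement
        {
            public string TileId;   // tile definitions can be shared across levels
            public float X, Y;      // free placement, not grid-bound
            public float Rotation;  // degrees; impossible to express in the .txt format
        }

        public class MapLayer
        {
            public string Name;     // e.g. "background", "platforms", "gems"
            public List<TilePlacement> Tiles = new List<TilePlacement>();
        }

        public class TileMapFile
        {
            public string LevelName;
            public List<MapLayer> Layers = new List<MapLayer>();

            // XmlSerializer keeps the file as plain, diffable text,
            // which preserves the SVN-tracking advantage noted above.
            public void Save(string path)
            {
                var serializer = new XmlSerializer(typeof(TileMapFile));
                using (var stream = File.Create(path))
                {
                    serializer.Serialize(stream, this);
                }
            }

            public static TileMapFile Load(string path)
            {
                var serializer = new XmlSerializer(typeof(TileMapFile));
                using (var stream = File.OpenRead(path))
                {
                    return (TileMapFile)serializer.Deserialize(stream);
                }
            }
        }

    Factoring shared tile definitions into a separate file that levels reference by TileId would also cut down the repeated content that motivates the DB option.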

    Read the article

  • Can't adjust backlight on an Nvidia 335m GT

    - by Vladimir
    I have a laptop mySN QMG6 / Chiligreen Mobilitas NW, which is a Quanta TW9 barebone with an Intel i3 and an Nvidia 335M GT onboard. On the Ubuntu distros 10.04, 10.10, 11.04 and 11.10 I had problems changing the screen backlight with both the nouveau and nvidia drivers. The FN+F4/F5 buttons did not change my brightness. I tried to edit xorg.conf, adding:

        Option "RegistryDwords" "EnableBrightnessControl=1"

    I also tried to add some options to grub:

        acpi_osi="Linux" acpi_backlight=vendor

    Neither worked for me. Today I installed Ubuntu 12.04 beta2 and... with the nouveau driver my FN key works and changes the brightness (whether it is the new 3.0.22 Linux kernel or a patched nouveau driver, I don't know). This is a big step forward. But when installing the proprietary nvidia driver (295.33) the FN button stops working and I can't change brightness. I also tried the workaround with xorg and grub with no result. I tried installing acpi from apt: no result. Is there anything left to try? I really need the nvidia driver working with the FN keys, as I would like to have working 3D acceleration.
    P.S. Does the nouveau driver have 3D acceleration like the nvidia drivers?
    P.P.S. I had the same problems with other Linux distros (Mint, Fedora and others).
    P.P.P.S. The other FN buttons work with both drivers (Mute, VOL UP/DOWN, WiFi on/off, Bluetooth, Sleep, Start/Pause, Stop, Next/Prev song).
    Some new thoughts... could CONFIG_BACKLIGHT_GENERIC=m be an issue? I found this via:

        grep BACKLIGHT /boot/config-3.2.0-22-generic-pae

    The full grep output can be viewed here: http://pastebin.com/sMRd2Z4k

    Read the article

  • How will technological singularity affect programmers?

    - by Amir Rezaei
    I'm one of the believers who think we will hit the technological singularity sooner or later. The question, then, is whether any profession will be unaffected by the changes that will come. In the end it will be we programmers who implement the first self-aware AI. How will the technological singularity affect us programmers? What is your professional opinion regarding the technological singularity?
    EDIT: By self-aware I refer to an entity that questions and seeks answers, and is able to analyze and solve problems. Artificial neural networks are a branch of mathematics/statistics with many widely used algorithms. The algorithms are applied where recognition of data is needed. For example, hidden Markov models are used for voice recognition. Another well-known area is business intelligence and data mining. Today's algorithms are self-learning. That is a bit of AI that many never think of.
    "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
    Link to Ref.

    Read the article

  • How can I have Windows 8 go to the desktop by default?

    - by Schnapple
    I've played around a smidge with the Windows 8 Consumer Preview in a VirtualBox VM and I think the improvements under the hood may be worth tolerating the Metro UI crap. I don't like that the entire screen changes to something I don't care about when I hit the Windows key and start typing but I can deal with it. The one thing I cannot stand, though, is that it starts in the Metro interface by default. I have to hit the "Desktop" tile to get to a normal interface, and while I can hit "escape" to go back to the desktop and dismiss Metro, it doesn't work when you first log in. When you first log in, you have to hit that "Desktop" tile. I know that the Enterprise versions of Windows 8 will go to the desktop by default. I don't know but I would assume that there's probably some registry key that would handle this. Has anyone figured that out yet?

    Read the article

  • Revision / backup software for folder?

    - by Gabriel
    I'm looking for a simple backup/revision tool that will monitor a folder for changes and make a backup of newly modified files. I have a folder with some text files and a few Word/Excel files, and I'd like to keep backups of them when they're modified (it's no more than 50 MB). I'd like what Dropbox does, but just locally (not stored in the cloud). Thank you. Edit: I'm on Windows XP, and I'm looking for a freeware app, if possible.

    Read the article

  • SQLAuthority News – Presenting at Virtual Tech Days TechEd Pre-Con – February 9, 2011

    - by pinaldave
    I will be presenting on the following subject at Virtual Tech Days TechEd Pre-Con, February 9, 2011.
    Auditing Made Easy: Change Tracking and Change Data Capture
    Date and Time: February 9, 2011, 11:45am-12:45pm
    Location: Online
    In this fast-paced, demo-oriented session we will go over a few concepts related to real-life problems at customer sites. We often see developers and DBAs looking for details such as who dropped a table, who last modified an object, and what exactly was modified. SQL Server 2008 has all the answers. It has various new methods for auditing where you can not only learn what was changed but also who changed it. On top of that, we can capture far more detail by configuring Auditing, and we can even prevent changes if proper Policy Management is configured. If you have attended my session on this subject before, this is going to be an absolutely new session and very much demo-oriented. There is going to be a quiz at the end of the session, and I promise that if you attend, you will get all the answers correct.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • How to make HOME or END keys work in mc running on OS X (ssh)

    - by Sorin Sbarnea
    I installed MacPorts on OS X 10.5 and I found out that when I connect to the computer using SSH and use mc (Midnight Commander), the HOME and END keys do not work. I should mention that I'm using PuTTY, and I am able to use the keyboard very well on Linux machines like Fedora, Ubuntu, etc. Here is the PuTTY keyboard configuration (a configuration I have found to be optimal over time):
    Backspace key: 127
    Home/End keys: Standard
    Function keys: Xterm R6
    Cursor keys: Normal
    Numpad: normal
    Terminal type string: xterm-color
    I'm looking for a command-line solution/script that makes these changes; that would make it much easier to create an OS-preparation script for configuring a new OS.

    Read the article

  • Silverlight MEF – Download On Demand

    - by PeterTweed
    Take the Slalom Challenge at www.slalomchallenge.com!
    A common challenge when building complex applications in Silverlight is the initial download size of the xap file. MEF enables us to build composable, complex composite applications. Wouldn't it be great if we had a mechanism to split components into separate Silverlight applications in separate xap files and download a xap file only when it is needed? MEF gives us the ability to do this. This post covers the basics needed to build such a composite application split between different Silverlight applications, downloading the referenced Silverlight application only when needed.
    Steps:
    1. Create a Silverlight 4 application.
    2. Add references to the following assemblies: System.ComponentModel.Composition.dll and System.ComponentModel.Composition.Initialization.dll.
    3. Add a new Silverlight 4 application called ExternalSilverlightApplication to the solution that was created in step 1. Ensure the new application is hosted in the web application for the solution and choose not to create a test page for the new application.
    4. Delete the App.xaml and MainPage.xaml files – they aren't needed.
    5. Add references to the following assemblies in the ExternalSilverlightApplication project: System.ComponentModel.Composition.dll and System.ComponentModel.Composition.Initialization.dll.
    6. Ensure the two references above have their Copy Local values set to false. As we will have these two assemblies in the original Silverlight application, there is no need to include them in the built ExternalSilverlightApplication.
    7. Add a new user control called LeftControl to the ExternalSilverlightApplication project.
    8. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Beige" Margin="40">
            <Button Content="Left Content" Margin="30"></Button>
        </Grid>

    9. Add the following statement to the top of the LeftControl.xaml.cs file:

        using System.ComponentModel.Composition;

    10. Add the following attribute to the LeftControl class:

        [Export(typeof(LeftControl))]

    This attribute tells MEF that the type LeftControl will be exported, i.e. made available for other applications to import and compose into the application.
    11. Add a new user control called RightControl to the ExternalSilverlightApplication project.
    12. Replace the LayoutRoot Grid with the following xaml:

        <Grid x:Name="LayoutRoot" Background="Green" Margin="40">
            <TextBlock Margin="40" Foreground="White" Text="Right Control" FontSize="16" VerticalAlignment="Center" HorizontalAlignment="Center"></TextBlock>
        </Grid>

    13. Add the following statement to the top of the RightControl.xaml.cs file:

        using System.ComponentModel.Composition;

    14. Add the following attribute to the RightControl class:

        [Export(typeof(RightControl))]

    15. In your original Silverlight project, add a reference to the ExternalSilverlightApplication project.
    16. Change the reference to the ExternalSilverlightApplication project to have its Copy Local value = false. This ensures the referenced ExternalSilverlightApplication application is not included in the original Silverlight application package when it is built. The ExternalSilverlightApplication xap therefore has to be downloaded on demand by the original Silverlight application for its controls to be used.
    17. In your original Silverlight project, add the following xaml to the LayoutRoot Grid in MainPage.xaml:

        <Grid.RowDefinitions>
            <RowDefinition Height="65*" />
            <RowDefinition Height="235*" />
        </Grid.RowDefinitions>
        <Button Name="LoaderButton" Content="Download External Controls" Click="Button_Click"></Button>
        <StackPanel Grid.Row="1" Orientation="Horizontal" HorizontalAlignment="Center">
            <Border Name="LeftContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
            <Border Name="RightContent" Background="Red" BorderBrush="Gray" CornerRadius="20"></Border>
        </StackPanel>

    The borders will hold the controls that will be downloaded, imported and composed via MEF when the button is clicked.
    18. Add the following statement to the top of the MainPage.xaml.cs file (you may also need using System.ComponentModel.Composition.Hosting; for the DeploymentCatalog and CompositionHost types used below):

        using System.ComponentModel.Composition;

    19. Add the following properties to the MainPage class:

        [Import(typeof(LeftControl))]
        public LeftControl LeftUserControl { get; set; }

        [Import(typeof(RightControl))]
        public RightControl RightUserControl { get; set; }

    This defines properties accepting the LeftControl and RightControl types. The attributes tell MEF the discovered type that should be applied to each property when composition occurs.
    20. Add the following event handler for the button click to the MainPage.xaml.cs file:

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            DeploymentCatalog deploymentCatalog =
                new DeploymentCatalog("ExternalSilverlightApplication.xap");

            CompositionHost.Initialize(deploymentCatalog);

            deploymentCatalog.DownloadCompleted += (s, i) =>
            {
                if (i.Error == null)
                {
                    CompositionInitializer.SatisfyImports(this);

                    LeftContent.Child = LeftUserControl;
                    RightContent.Child = RightUserControl;
                    LoaderButton.IsEnabled = false;
                }
            };

            deploymentCatalog.DownloadAsync();
        }

    This is where the magic happens! The deploymentCatalog object is pointed at the ExternalSilverlightApplication.xap file and is then associated with the CompositionHost initialization. As the download will be asynchronous, an event handler is attached to the DownloadCompleted event, and the deploymentCatalog object is then told to start the asynchronous download. The handler that executes when the download completes uses the CompositionInitializer.SatisfyImports() function to tell MEF to satisfy the imports for the current class. It is at this point that the LeftUserControl and RightUserControl properties are initialized with composed objects from the downloaded ExternalSilverlightApplication.xap package.
    21. Run the application, click the Download External Controls button, and see the controls defined in the ExternalSilverlightApplication application loaded into the original Silverlight application.
    Congratulations! You have implemented download-on-demand capabilities for composite applications using the MEF DeploymentCatalog class. You are now able to segment your applications into separate xap files for deployment.
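    One caveat worth adding here, as an assumption based on how Silverlight's CompositionHost behaves rather than anything stated in the post above: CompositionHost.Initialize may only be called once per application, so clicking the button a second time before it is disabled would throw an InvalidOperationException. A minimal guard around the step 20 handler could look like this (the _catalogInitialized field is a hypothetical addition):

        private bool _catalogInitialized; // hypothetical guard field, not part of the original sample

        private void Button_Click(object sender, RoutedEventArgs e)
        {
            // CompositionHost.Initialize throws if it is ever called twice,
            // so bail out once the catalog has been wired up.
            if (_catalogInitialized)
                return;

            DeploymentCatalog deploymentCatalog =
                new DeploymentCatalog("ExternalSilverlightApplication.xap");

            CompositionHost.Initialize(deploymentCatalog);
            _catalogInitialized = true;

            deploymentCatalog.DownloadCompleted += (s, i) =>
            {
                if (i.Error == null)
                {
                    CompositionInitializer.SatisfyImports(this);
                    LeftContent.Child = LeftUserControl;
                    RightContent.Child = RightUserControl;
                    LoaderButton.IsEnabled = false;
                }
            };

            deploymentCatalog.DownloadAsync();
        }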

    Read the article

  • Apple XRaid questions

    - by luckytaxi
    I inherited an environment with a couple of Apple XRaid SANs.
    1 - I have a 14-drive setup that's split into 5 LUNs on EACH side. The SAN goes into a fibre switch along with the servers that are attached to it. LUN masking is enabled on the SAN and, as far as I know, there isn't any zoning on the fibre switch. Question: I have a server that's assigned two LUNs, one from each side of the controller. For some reason it only sees one LUN (from the upper controller) and doesn't see the one from the lower controller. The controller seems to be working fine, as I have other servers attached to LUNs on the lower controller.
    2 - I see a little "disclaimer" saying that any changes to the XRaid will result in a reboot. So, if I add/remove hosts, this thing is going to reboot?!?!?!

    Read the article

  • TechEd 2012: Day 3 – Build Me A Solution

    - by Tim Murphy
    While digesting my lunch it was time to digest some TFS Build information. While much of my time is spent wearing my developer's hat, I am still a jack of all trades, and automated builds are an important aspect of any project. Because of this I was looking forward to finding out what new features are available in the latest release of Team Foundation Server. The first feature that caught my attention is the TFS Admin Client. After being used to dealing with NAnt in the past, it is nice to see a build configuration GUI that is so flexible and well thought out. The bonus is that the tools incorporated in Visual Studio 2012 are just as feature-rich. Life is good. Since automated builds are the hub of your development process in a continuous integration shop, I was really interested in the process-related options. The biggest value-add that I noticed was merge gated check-ins. Merge or batch gated check-ins are an interesting concept: if the build breaks with all the changes combined, then TFS will run separate builds for each of the check-ins. This ability to identify the actual offending check-in can save a lot of time and gray hair. The safari of TFS Build that was this session was packed with attractions. How do you set up builds, what are the different flavors of builds, how does the system report how the build went? I would suggest anyone who is responsible for build automation spend some serious time with TFS 2012 and VS2012. del.icio.us Tags: Team Foundation Server 2012,TFS,Build,TechEd,TechEd 2012,Visual Studio 2012

    Read the article

  • difference between compiled and installed via rpm (zypper)

    - by cherouvim
    On openSUSE 11.1 I download, compile and install ImageMagick via:

        wget ftp://.../pub/graphics/ImageMagick/ImageMagick-6.7.7-0.zip
        unzip ImageMagick-6.7.7-0.zip
        cd ImageMagick-6.7.7-0
        ./configure --prefix=/usr/local/ImageMagick
        make
        make install

    Everything works nicely until I discover that JPG is not supported:

        identify -list format | grep -i jpg
        [nothing related to JPG returned]

    So I reconfigure and recompile using:

        ./configure --prefix=/usr/local/ImageMagick --with-jpeg=yes --with-jp2=yes
        make
        make install

    But that changes nothing. I end up uninstalling:

        make uninstall

    and installing via zypper:

        zypper install ImageMagick

    This installed version 6.4.3, and now it does support JPG:

        identify -list format | grep -i jpg
        JPG* JPEG rw- Joint Photographic Experts Group JFIF format

    Any idea what is going on here? What is a possible reason that this capability of ImageMagick was not there when compiled from source but was there when installed from rpm? Note that I don't necessarily care a lot about ImageMagick itself (since it now works), but generally about this kind of behaviour, because in one way or another I've seen this happen on other occasions as well.

    Read the article

  • Ubuntu: The Movie

    - by CYREX
    Since Ubuntu is the most popular distribution and has made a lot of changes in many places around the globe and in different industries, up to the point where even people who do not know what Linux is know what Ubuntu is (go figure!), there might be a movie coming someday (like The Social Network for Facebook or Revolution OS for Linux/Red Hat), and I wanted to know how it all came to be from the actual players in the show. UBUNTU: The Movie. Since I have seen several of the primary characters of the movie here, this might be a good place to start on how it all came to be. Not in the traditional Wikipedia way or the Ubuntu help section, but in terms of what the actual developers have in mind on how it all went down to the point of having a huge number of users and an incredible level of sophistication in the forums, help sections, installers, etc. This is just to have the KNOW-HOW before the actual movie makes it out some day in the future. As a fan of Ubuntu this is a MUST KNOW! ;) Hope I made some people happy and some others shy, hehe.

    Read the article

  • nginx + php-fpm “504 Gateway Time-out” after compiling with curl support

    - by Brian
    We recently switched to managing php with php-fpm. It was working great, but is now giving me issues. The most recent change was to install libcurl-devel and re-compile php (5.3.3) using --with-curl. Now I'm getting 504 timeouts with nginx and the pages won't load. HTML pages load fine, phpinfo() loads as well. Tried backing out the changes and re-compiling without curl support, but still not having any luck. Also tried adjusting request_terminate_timeout per some of the other posts here on SF without change. This is on a test machine that has no other clients hitting it. I also tried switching to unix socket instead of tcp--same result. What am I missing here? Am I barking up the wrong tree with curl?

    Read the article

  • Weirdness After Reinstall The Windows Operating System

    - by Eka Anggraini
    I want to ask for some advice. I reinstalled my OS successfully, and turning it off and restarting a few times was fine. Then, a few hours later, when I turned it on, there was suddenly an error:

        Windows could not start because the following file is missing or corrupt:
        \WINDOWS\SYSTEM32\CONFIG\SYSTEM
        You can attempt to repair this file by starting Windows Setup using the original Setup CD-ROM.
        Select 'R' at the first screen to start repair.

    So I intended to repair and reinstall everything again. I booted from the Windows CD and pressed a key, and got the following text on a blue background:

        Setup is loading files (windows executive)
        Setup is loading files (hardware abstraction layer)

    Then I waited half an hour with no change; repeating the process several times also achieved nothing. Please advise: where does the problem lie? The hardware, or the Windows CD?

    Read the article

  • How to start a task before networking?

    - by user1252434
    I've written an upstart task that modifies /etc/network/interfaces (actually a file sourced into it). Which "start on" condition do I need to declare to let my task run before any networking jobs? I've tried "start on starting networking", but that's apparently too late. When I log in after booting I can see that the changes were written, but obviously they are not used: the new config states a static IP, but the boot process waits for a non-existent DHCP server (from the old config) to time out. I've also tried "start on starting network-interface INTERFACE=eth0", which didn't work either; IIRC there was an error in the log that the change couldn't be written. Background: I need a VM template that can be cloned and the clones configured through a script. Among other settings, I need to give them a static IP address to access them from the host. I use guestfish to write a config file to one of the virtual disks and let a script apply these settings to the system. I don't want that disk to contain an actual system settings file. I can't modify /etc directly, because that disk is shared (copy-on-write/diff) among the clones and guestfish apparently doesn't support that type of image. I could also let them use DHCP and set up a server that assigns IPs by MAC, but I'm afraid of the complexity. I could also add just another virtual disk for configuration files, but if possible I'd prefer to store settings directly on the system disk image. Software used: Ubuntu Server 12.04, VirtualBox. The configuration modifier is a self-written Ruby script.

    Read the article

  • Windows Server 2003 R2 Standard: Locks MS Office files, but not Adobe .AI and .PSD files?

    - by Bruce Garlock
    We have some shares set up on a Windows 2003 R2 server, and the MS Office files people save behave properly: the first person to open the file gets read/write, and the second person to open the file while the first person still has it open gets a read-only version. This is not true for the graphics files, like Adobe Illustrator .AI files and Photoshop .PSD files. Anyone who goes to open these files has full read/write, even if someone else is already working on the file! This has led to numerous file corruption issues, as well as other lost work, since it always saves the last changes to the file. How do we get Windows to properly lock these files so that when someone is working on a file and someone else wants to open one, they get read-only access? Many thanks, Bruce

    Read the article

  • Where is the bottleneck?

    - by jsymon
    There is a limit on connections somewhere along the line here... On a Windows Server 2008 machine, each request to a URL running on localhost takes ~3 seconds to complete. This is fine and normal for the URL. However, if I open the same localhost URL in about 10 tabs and set them to reload all at the same time, they finish sequentially, 3 seconds after each other. That means the last tab has taken 30 seconds to load (3s x 10). What is especially odd is that Firebug reports each page as taking 3 seconds to load. Another point to add is that the status bar just sits at 'Done' for the last tab until 3 seconds before completing, where it then changes to 'Waiting for localhost'. I am praying there is some connection limit somewhere, otherwise this would be a disaster if more than one user ever visited the site at a time! Maybe a limit somewhere such that one PC can't make more than 2 simultaneous requests to a URL at a given time?

    Read the article

  • Awstats messaging non existant user causing exim4 to go nuts

    - by Chris
    I've taken over managing a server set up by someone else who is now uncontactable. While I've managed to work out most of the faults and changes needed, this one is stumping me. Awstats is running on the machine and sending messages via exim4 to a user every time it runs an update. The user account has been deleted, so the exim4 main log files are filling up with message delivery errors, which firstly hinders meaningful log analysis for anything else and secondly uses up quite a lot of space (it grew to 22GB unattended, panic!). I've been through all the conf files in /etc/awstats and can't seem to find any mention of this user account. Google just turns up results about how to use awstats to parse exim4 log files. So the question is: where is this setting (on Debian) likely to be? Cheers in advance.

    Read the article

  • Web workflow solution - how should I approach the design?

    - by Tom Pickles
    We've been tasked with creating a web-based workflow tool to track change management. It has a single workflow with multiple synchronous tasks for the most part, but it branches out at a point into tasks running in parallel which meet up later on. There will be all sorts of people using the application, and all of them will need to see their outstanding tasks for each change, but only theirs, not others'. There will also be a high-level group of people who oversee all changes, so they need to see everything: tasks which have not been done in the specified time, who's responsible, etc. The data will be persisted to a SQL database, and it'll all be put together using .NET. I've been trying to learn and implement OOP in my designs of late, but I'm wondering if this is moot in this instance, as it may be better to have the business logic in stored procedures in the DB. I could use POCOs, a front-end layer and a data-access layer for the web application and just use it as a mechanism for CRUD actions on the DB, then use SPs fired in the DB to apply the business rules. On the other hand, I could use an object-oriented design within the web app; but as the data in the app is stateless, is this a bad idea? I could try to model the whole application as a class structure, implementing interfaces, base classes and all that good stuff. So I would create a Change class, which contained a list of task classes/types defining each task, and implement an ITask interface, etc. Put end-user types into the tasks to identify who should be doing what task. Then apply all the business logic in the respective class methods. What approach do you guys think I should be using for this solution?
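    For the object-oriented route, here is a minimal C# sketch of the class structure described above. All names (ITask, Change, TaskStatus) are hypothetical placeholders, and the rules shown are assumptions about the requirements, not a prescribed design:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public enum TaskStatus { Pending, InProgress, Complete }

        // Hypothetical contract for a single workflow step.
        public interface ITask
        {
            string Name { get; }
            string AssignedRole { get; set; }   // the end-user type responsible for the task
            DateTime DueBy { get; set; }
            TaskStatus Status { get; }
            void Complete(string completedBy);  // business rule lives on the task itself
        }

        // A change owns its tasks; the parallel branch could be modelled as a
        // composite ITask whose children must all complete before it does.
        public class Change
        {
            private readonly List<ITask> _tasks = new List<ITask>();

            public int Id { get; set; }
            public string Title { get; set; }

            public void AddTask(ITask task) { _tasks.Add(task); }

            // Each user sees only their own outstanding tasks.
            public IEnumerable<ITask> OutstandingFor(string role)
            {
                return _tasks.Where(t => t.AssignedRole == role
                                      && t.Status != TaskStatus.Complete);
            }

            // The high-level oversight group sees everything that is late.
            public IEnumerable<ITask> Overdue(DateTime now)
            {
                return _tasks.Where(t => t.Status != TaskStatus.Complete
                                      && t.DueBy < now);
            }
        }

    One argument for keeping the rules here rather than in stored procedures is that OutstandingFor and Overdue can be unit-tested without a database; the data-access layer then only hydrates and persists these objects.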

    Read the article

  • Bootstrap a debian build environment and build source packages with no root privileges

    - by Erwan Queffélec
    On Debian squeeze, I am trying to do the following:
    fetch source packages from the wheezy source repository
    bootstrap a squeeze chroot for several architectures
    build the packages for several architectures (i386, amd64 + all and any)
    I want the fetching, bootstrapping and build operations to be scriptable, repeatable, and run as a normal user. For the environment setup, I want to make as little use of the root account as possible (install the necessary dependencies, and maybe some visudo stuff). If possible I would like to avoid using a VM (pbuilder with user-mode Linux). So far I have tried several things with pbuilder (requires root) and debootstrap (requires root) with little success. Here is an example script of what I want to do (it does not work):

        #!/bin/bash
        set -e
        set -x

        this=`readlink -f $0`
        this_dir=`dirname $this`
        archs='i386 amd64 any all'

        pushd $this_dir/src
        # I actually want the following line to work with a repo that
        # is not in /etc/apt/sources.list but that is another question
        apt-get source cyrus-imapd-2.4
        popd

        for arch in $archs
        do
            build_dir=$this_dir/build/$arch/
            pbuilder --create --configfile $build_dir/pbuilderrc --buildresult $build_dir/
            pbuilder --build --configfile $build_dir/pbuilderrc --buildresult \
                $build_dir/ $this_dir/src/*.dsc
            # of course I want to use the .dput.cf in /home/myuser/
            # and not in /root/
            dput $build_dir/$arch/*.changes
        done

    Read the article

  • Grid Infrastructure Management Repository (GIMR) database now mandatory in Oracle GI 12.1.0.2

    - by Mike Dietrich
    During the installation of Oracle Grid Infrastructure 12.1.0.1 you had the option to choose YES/NO to install the Grid Infrastructure Management Repository (GIMR) database MGMTDB [screenshot omitted]. With Oracle Grid Infrastructure 12.1.0.2 this choice has become obsolete and that screen does not appear anymore. The GIMR database has become mandatory.
    What gets stored in the GIMR? See the documentation here. See the changes in Oracle Clusterware 12.1.0.2 here:
    Automatic Installation of Grid Infrastructure Management Repository: The Grid Infrastructure Management Repository is automatically installed with Oracle Grid Infrastructure 12c release 1 (12.1.0.2). The Grid Infrastructure Management Repository enables such features as Cluster Health Monitor, Oracle Database QoS Management, and Rapid Home Provisioning, and provides a historical metric repository that simplifies viewing of past performance and diagnosis of issues. This capability is fully integrated into Oracle Enterprise Manager Cloud Control for seamless management.
    Furthermore, what the doc doesn't say explicitly:
    The -MGMTDB has now become a single-tenant deployment: a CDB with one PDB.
    This will allow the use of a Utility Cluster that can hold the CDB for a collection of GIMR PDBs.
    If you already had an Oracle 12.1.0.1 GIMR, this database will be destroyed and recreated.
    Preserving the CHM/OS data can be achieved with oclumon, dumping it out into node view.
    The data files associated with it will be created within the same disk group as OCR and VOTING.
    In a future release there may be an option offered to put it into a separate disk group.
    Some important MOS Notes:
    MOS Note 1568402.1 (FAQ: 12c Grid Infrastructure Management Repository) states there's no supported procedure to enable the Management Database once the GI stack is configured.
    MOS Note 1589394.1 (How to Move GI Management Repository to Different Shared Storage) shows how to delete and recreate the MGMTDB.
    MOS Note 1631336.1: Cannot delete Management Database (MGMTDB) in 12.1.
    -Mike

    Read the article

  • git svn on multiple machines

    - by stgtscc
    My repo is SVN and I'm using git-svn to interface with it, which has been working out well. I'm working on the code base from a few different machines and would appreciate some insight as to what the best setup might be for me going forward. I'd like to use git primarily, but I need to commit to svn (via git svn dcommit) and pull from svn (git svn rebase) periodically, from potentially any of the machines. Is it possible to have git-svn set up on all of them but somehow push and pull changes between the instances? Or should I set up a bare repo and use that as the central git repo? How would that tie in to git svn? Any insight is appreciated.

    Read the article

  • Bash: Reset and Clear Commands

    - by sixtyfootersdude
    I have been using the command reset to clear my terminal, although I am pretty sure this is not what I should be doing. Reset, as the name suggests, resets your entire terminal (changes lots of stuff). Here is what I want: I basically want to use the command clear. However, if you clear and then scroll up, you still get tons of stuff from before. In general this is not a problem, but I am looking at gross logs that are long and I want to make sure that I am just viewing the most recent one. I know that I could use more or something like that, but I prefer this approach.

    Read the article

  • Help, broken Gsettings

    - by Rene
    I was trying to disable the global menu as per http://ubuntuhandbook.org/index.php/2013/07/disable-global-menu-on-ubuntu-13-10-saucy/#comment-8612, but while it didn't change anything, after running the autoremove command unity-tweak-tool broke. Obviously my first reaction was to re-install the removed package, but it remains broken. TBH I don't know if it is even related or just a coincidence. When I start it from the launcher it just blinks and disappears. When I start it from the terminal I get this error:

        $ gnome-tweak-tool
        WARNING : Shell not installed or running
        WARNING : Error detecting shell
        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 199, in __init__
            raise Exception("Shell not running or DBus service not available")
        Exception: Shell not running or DBus service not available
        INFO : GSettings missing key org.gnome.nautilus.desktop (key computer-icon-visible)
        WARNING : Shell not running None
        INFO : GSettings missing key org.gnome.mutter (key workspaces-only-on-primary)
        Segmentation fault (core dumped)

    I had a look with dconf-editor to see if I could just add the missing key, but apparently keys aren't meant to be added "by hand". So how can I fix this? I'd prefer not having to reinstall everything. Which package is broken, and can I just reinstall that?
    EDIT: I found that as root gnome-tweak-tool no longer crashed, so possibly a permissions issue somewhere. I don't know that I changed any permissions. Another related problem, actually the reason I noticed the problem at all, is that unity-tweak-tool no longer seems to want to save edited values. I normally just have the Unity launcher on the primary display but wanted to check what it was like having it on both. I didn't like it, so I went into unity-tweak-tool to set it back, but regardless of how many times I tick "only primary display" it never changes anything. What does unity-tweak-tool actually change, and can I do this directly somehow?

    Read the article

  • Change Data Capture

    - by Ricardo Peres
    There's a hidden gem in SQL Server 2008: Change Data Capture (CDC). Using CDC we get full audit capabilities with absolutely no implementation code: we can see all changes made to a specific table, including the old and new values! You can only use CDC in SQL Server 2008 Standard or Enterprise; the Express edition is not supported. Here are the steps you need to take; just remember SQL Agent must be running:

        use SomeDatabase;

        -- first create a table
        CREATE TABLE Author
        (
            ID INT NOT NULL PRIMARY KEY IDENTITY(1, 1),
            Name NVARCHAR(20) NOT NULL,
            EMail NVARCHAR(50) NOT NULL,
            Birthday DATE NOT NULL
        )

        -- enable CDC at the DB level
        EXEC sys.sp_cdc_enable_db

        -- check CDC is enabled for the current DB
        SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'SomeDatabase'

        -- enable CDC for table Author, all columns
        exec sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'Author', @role_name = null

        -- insert values into table Author
        insert into Author (Name, EMail, Birthday) values ('Bla', 'bla@bla', '1990-10-10')

        -- check CDC data for table Author
        -- __$operation: 1 = DELETE, 2 = INSERT, 3 = BEFORE UPDATE, 4 = AFTER UPDATE
        -- __$start_lsn: operation timestamp
        select * from cdc.dbo_author_CT

        -- update table Author
        update Author set EMail = '[email protected]' where Name = 'Bla'

        -- check CDC data for table Author
        select * from cdc.dbo_author_CT

        -- delete from table Author
        delete from Author

        -- check CDC data for table Author
        select * from cdc.dbo_author_CT

        -- disable CDC for table Author
        -- this removes all CDC data, so be careful
        exec sys.sp_cdc_disable_table @source_schema = 'dbo', @source_name = 'Author', @capture_instance = 'dbo_Author'

        -- disable CDC for the entire DB
        -- this removes all CDC data, so be careful
        exec sys.sp_cdc_disable_db
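    As a side note not covered above, the captured changes can also be consumed from application code. The sketch below is a hedged illustration: it queries cdc.fn_cdc_get_all_changes_dbo_Author, the table-valued function that sys.sp_cdc_enable_table generates for the dbo_Author capture instance, and the connection string is an assumption for your environment:

        using System;
        using System.Data.SqlClient;

        class CdcReader
        {
            static void Main()
            {
                // Assumed connection string; adjust for your server.
                const string connectionString =
                    "Data Source=.;Initial Catalog=SomeDatabase;Integrated Security=True";

                // The LSN helper functions bound the range of changes read;
                // the 'all' filter returns one row per change.
                const string query = @"
                    DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('dbo_Author');
                    DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();
                    SELECT __$operation, Name, EMail
                    FROM cdc.fn_cdc_get_all_changes_dbo_Author(@from_lsn, @to_lsn, 'all');";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // __$operation: 1 = delete, 2 = insert, 4 = value after update
                            Console.WriteLine("{0}: {1} <{2}>",
                                reader.GetInt32(0), reader.GetString(1), reader.GetString(2));
                        }
                    }
                }
            }
        }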

    Read the article
