Search Results

Search found 2402 results on 97 pages for 'metadata discovery'.


  • Custom extensible file format for 2d tiled maps

    - by Christian Ivicevic
    I have implemented much of my game logic by now, but still create my maps with nasty for-loops on the fly just to have something to work with. Now I want to move on and do some research on how to (un)serialize this data. (I am not looking for a map editor - I am speaking of the map file itself.) For now I am looking for suggestions and resources on how to implement a custom file format for my maps, which should provide the following functionality (based on the MoSCoW method):

    Must have:
    - Extensibility and backward compatibility
    - Handling of different layers
    - Metadata on whether a tile is solid or can be passed through
    - Special serialization of entities/triggers with associated properties/metadata

    Could have:
    - Some kind of inclusion of the tileset, to prevent having scattered files/tilesets

    I am developing with C++ (using SDL) and targeting only Windows. Any useful help, tips, suggestions, ... would be appreciated!
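
    For what it's worth, a minimal sketch of the usual answer to the extensibility requirement: a chunk-based layout (tag, length, payload), as used by formats like PNG or IFF, so old readers can skip chunks they don't recognise. Sketched in Python for brevity - the chunk tags are made up, and the same layout is straightforward to mirror with C++ streams:

        import struct

        MAGIC, FORMAT_VERSION = b"TMAP", 1

        def write_chunk(f, tag, payload):
            # Self-describing chunk: 4-byte tag + 4-byte length + payload.
            # Readers skip unknown tags, which buys extensibility and
            # backward compatibility.
            f.write(struct.pack("<4sI", tag, len(payload)))
            f.write(payload)

        def write_map(f, layers, solidity):
            f.write(struct.pack("<4sI", MAGIC, FORMAT_VERSION))
            for index, tiles in enumerate(layers):     # one chunk per layer
                write_chunk(f, b"LAYR", struct.pack("<I", index) + bytes(tiles))
            write_chunk(f, b"SOLD", bytes(solidity))   # per-tile passability flags

        with open("level1.map", "wb") as f:
            write_map(f, layers=[[0, 1, 2, 3]], solidity=[0, 0, 1, 1])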

    Read the article

  • VLC 2.0: can't enable UPnP support

    - by Craig
    I'm attempting to use VLC 2.0.1 (Intel 64-bit) to connect to a UPnP server (Serviio 0.6.2 on OS X Snow Leopard). I've tried to enable the feature by right-clicking the "Universal Plug 'n Play" node in the LOCAL NETWORK section of the main window. While I am able to select 'Enable', the setting doesn't seem to stick. Moreover, UPnP isn't listed in the Playlist Service discovery section of the preferences. The 'Service discovery modules' setting is currently upnp_intel:sap. I am able to connect to the UPnP server with my Samsung TV, so I know that the service works.

    Read the article

  • BI&EPM Partner Training and Specialisation Update

    - by Mike.Hallett(at)Oracle-BI&EPM
    1. Just a reminder for you to take the new-version OBI 11g exams to update your OPN Specialisation: the OPN Exam for OBI Suite 11g is now LIVE.
    2. Check for places on the free/subsidised partner-specific boot camps being run in several countries (and you can always fly there... it is still lower cost than the alternatives!):
       a. Exalytics OBI11g Partner Training 3-day hands-on Workshops
       b. EPM Planning (Hyperion) V11.1.2 Implementation Hands-On Boot-camp
       c. Endeca Information Discovery 3-Day Hands-on Training Boot-Camp
    3. Other partner events:
       a. Frankfurt, Dreieich, November 15: Oracle Endeca Information Discovery
       b. Utrecht, November 14: Oracle BI Test Drives
       c. Vilvoorde, November 16: Oracle BI Test Drives
       d. London, November 20: Delivering Insight Across Your Business - Oracle Business Intelligence Workshop
       e. Milano, November 13: Oracle Drive Better Business Outcomes with Big Data and Analytics

    You can also filter your search for courses via the Partner Events Calendar @ http://events.oracle.com/search/search?group=Events&keyword=OPN+Only. Otherwise, it is worth checking the Oracle Partner Enablement blog for any BI/EPM news, especially the country sub-blogs on the right. There are also many self-paced tutorials for BI&EPM partners available on demand at any time, and a monthly Partner Enablement Update (PDF) covering the latest partner training on Oracle's new products and releases.

    Read the article

  • Microsoft Sync. Framework with Azure on iOS

    - by Richard Jones
    A bit of a revelation this evening. I discovered something obvious, but missing from my understanding of the brilliant iOS example that ships with the Sync Framework 4.0 CTP. It seems that on the server side, if a record is edited, only the fields that were modified get sent down to your device (in my case an iPad) - correctly so. I had previously just assumed blindly that I'd get all fields down. I modified my Xcode population code (based on the iOS sample) as follows:

        + (void)populateQCItems:(id)dict withMetadata:(id)metadata withContext:(NSManagedObjectContext *)context
        {
            QCItems *item = (QCItems *)[Utils populateOfflineEntity:dict withMetadata:metadata withContext:context];
            if (item != nil) // modify new or existing live item
            {
                if ([dict valueForKey:@"Identifier"]) // new bit
                    item.Identifier = [dict valueForKey:@"Identifier"];
                if ([dict valueForKey:@"InspectionTypeID"]) // new bit
                    item.InspectionTypeID = [dict valueForKey:@"InspectionTypeID"];
                [item logEntity];
            }
        }

    I hope this helps someone else, as I learnt it the hard way. Technorati Tags: Xcode, iOS, Azure, Sync Framework, Cloud

    Read the article

  • Can't Remote into Windows Server

    - by Brian
    Hello, I have a Dell server wired into the router. I was able to connect to it with my laptop (the laptop is wireless) before my router died. My Verizon router went kaput, and I got everything else back up and running on the wireless network except the remote desktop feature, even though I can access the server through Windows Explorer just fine. Any ideas why? What do I need to check? UPDATE: Interesting scenario: Network Discovery is off; I turn it on and save, but for some reason, even after that, Network Discovery turns itself off... no idea why that is happening? Thanks.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against corruption. The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations, and other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum.

    A number of devices offer block-level dedup, either as an option or as part of their inner workings. However, when you store three identical blocks on such a device and it does block-level dedup internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you have essentially three corrupted copies. Three hit with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication as it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. To its inner mechanism, a metadata block is no different from a normal data block; there is no way to tell it that this one is important and that those redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader. It is relevant whenever you are using a device with block-level deduplication. The point is that most implementations require you to activate it explicitly by command, whereas certain devices do it by default or by design, and you don't know about it. I'm not perfectly sure about that, though - given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to protect important data by storing multiple copies of it in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. But when your device does dedup internally, it may remove your redundancy before it ever hits the non-volatile storage. You've won nothing - you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy because of the good dedup ratio. You can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one. Yet another reason to spend some extra thought when putting your zpool on a single LUN, especially when that LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself. In the specifically mentioned case of SSDs, that isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used as the pool and the SSDs serve as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt - you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, which in HSP implementations is the already-mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool; but as mentioned before, on those devices it's a user-made decision to enable it, so it's less probable that you are deduplicating your redundancies. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    In the end, Robin is correct: it's yet another reason why protecting your data by creating redundancy and dispersing it across several disks (by mirror or parity RAIDs) is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
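
    To make the failure mode concrete, here is a toy sketch (my own illustration, not from the article) of a content-addressed block store of the kind described above: three logical copies of the same rootblock collapse to one physical block, so one corruption kills all three.

        import hashlib

        class DedupBlockStore:
            # Toy model of naive block-level dedup: one physical slot per
            # unique checksum, no matter how often the block is written.
            def __init__(self):
                self.blocks = {}

            def write(self, data):
                key = hashlib.sha256(data).hexdigest()
                self.blocks[key] = data   # identical content -> same slot
                return key

        store = DedupBlockStore()
        rootblock = b"uberblock copy, same content at all three locations"
        refs = [store.write(rootblock) for _ in range(3)]  # three "copies"
        print(len(store.blocks))  # 1 physical block backs all three refs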

    Read the article

  • Automate Win8 network and sharing settings

    - by RafaelBarriola
    I am trying to create an unattended DVD installation of Windows 8, but I can't seem to find a solution for this. What I'm looking for is a way to preset the Network and Sharing Center options - network discovery turned on for both the private and guest/public profiles in the Control Panel. For both private and guest/public, I need to:

    - Turn on network discovery
    - Turn on printer and file sharing
    - Turn off public folder sharing
    - Turn on password protected sharing
    - Use user accounts and passwords to connect to other computers

    I've been searching for days and have not found a solution yet.

    Read the article

  • Do you sign each of your source files with your name? [duplicate]

    - by regularfry
    Possible Duplicate: How do you keep track of the authors of code? One of my colleagues is in the habit of putting his name and email address in the head of each source file he works on, as author metadata. I am not; I prefer to rely on source control to tell me who I should be speaking to about a given set of functionality. Should I also be signing files I work on for any other reasons? Do you? If so, why? To be clear, this is in addition to whatever metadata for copyright and licensing information is included, and applies to both open sourced and proprietary code.

    Read the article

  • Bluetooth RFCOMM / SDP connection to a RS232 adapter in android

    - by ThePosey
    Hello all, I am trying to use the Bluetooth Chat sample API app that Google provides to connect to a Bluetooth RS232 adapter hooked up to another device. Here is the app for reference: http://developer.android.com/resources/samples/BluetoothChat/index.html And here is the spec sheet for the RS232 connector, just for reference: http://serialio.com/download/Docs/BlueSnap-guide-4.77_Commands.pdf The problem is that when I go to connect to the device with mmSocket.connect(); (BluetoothSocket::connect()), I always get an IOException thrown by the connect() method. When I do a toString on the exception I get "Service discovery failed". My question is mostly: what are the cases that would cause an IOException to be thrown in the connect method? I know those are in the source somewhere, but I don't know exactly how the Java layer that you write apps in interfaces with the C/C++ layer that contains the actual stack. I know that it uses the BlueZ Bluetooth stack, which is written in C/C++, but I'm not sure how that ties into the Java layer, which is what I would think is throwing the exception. Any help pointing me to where I can try to dissect this issue would be incredible. Also, just to note: I am able to pair with the RS232 adapter just fine, but I am never able to actually connect. Here is the logcat output for more reference:

        I/ActivityManager( 1018): Displayed activity com.example.android.BluetoothChat/.DeviceListActivity: 326 ms (total 326 ms)
        E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
        D/BluetoothChat( 1729): onActivityResult -1
        D/BluetoothChatService( 1729): connect to: 00:06:66:03:0C:51
        D/BluetoothChatService( 1729): setState() STATE_LISTEN -> STATE_CONNECTING
        E/BluetoothChat( 1729): + ON RESUME +
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_CONNECTING
        I/BluetoothChatService( 1729): BEGIN mConnectThread
        E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
        E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
        I/BluetoothChatService( 1729): CONNECTION FAIL TOSTRING: java.io.IOException: Service discovery failed
        D/BluetoothChatService( 1729): setState() STATE_CONNECTING -> STATE_LISTEN
        D/BluetoothChatService( 1729): start
        D/BluetoothChatService( 1729): setState() STATE_LISTEN -> STATE_LISTEN
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
        V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID
        I/NotificationService( 1018): enqueueToast pkg=com.example.android.BluetoothChat callback=android.app.ITransientNotification$Stub$Proxy@446327c8 duration=0
        I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
        E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
        V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID

    The device I'm trying to connect to is 00:06:66:03:0C:51, which I can scan for and apparently pair with just fine.

    The below is merged from a similar question, which was successfully resolved by the selected answer here: How can one connect to an rfcomm device other than another phone in Android? The Android API provides examples of using listenUsingRfcommWithServiceRecord() to set up a socket and createRfcommSocketToServiceRecord() to connect to that socket. I'm trying to connect to an embedded device with a BlueSMiRF Gold chip. My working Python code (using the PyBluez library), which I'd like to port to Android, is as follows:

        sock = bluetooth.BluetoothSocket(proto=bluetooth.RFCOMM)
        sock.connect((device_addr, 1))
        return sock.makefile()

    ...so the service to connect to is simply defined as channel 1, without any SDP lookup. As the only documented mechanism I see in the Android API does SDP lookup of a UUID, I'm slightly at a loss. Using "sdptool browse" from my Linux host comes up empty, so I surmise that the chip in question simply lacks SDP support.

    Read the article

  • How to get the child elements' values when the parent element contains a different number of child elements

    - by Subhen
    Hi, I have the following XML structure:

        <DIDL-Lite xmlns="urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/"
                   xmlns:upnp="urn:schemas-upnp-org:metadata-1-0/upnp/"
                   xmlns:dlna="urn:schemas-dlna-org:metadata-1-0/">
          <item id="1268" parentID="20" restricted="1">
            <upnp:class>object.item.audioItem.musicTrack</upnp:class>
            <dc:title>Hey</dc:title>
            <dc:date>2010-03-17T12:12:26</dc:date>
            <upnp:albumArtURI dlna:profileID="PNG_TN" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0">URL/albumart/22.png</upnp:albumArtURI>
            <upnp:icon>URL/albumart/22.png</upnp:icon>
            <dc:creator>land</dc:creator>
            <upnp:artist>sland</upnp:artist>
            <upnp:album>Change</upnp:album>
            <upnp:genre>Rock</upnp:genre>
            <res protocolInfo="http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="9527987" duration="0:03:58">URL/1268.mp3</res>
            <res protocolInfo="http-get:*:image/png:DLNA.ORG_PN=PNG_TN;DLNA.ORG_CI=01;DLNA.ORG_FLAGS=00f00000000000000000000000000000" colorDepth="24" resolution="160x160">URL/albumart/22.png</res>
          </item>
          <item id="1269" parentID="20" restricted="1">
            <upnp:class>object.item.audioItem.musicTrack</upnp:class>
            <dc:title>Indian</dc:title>
            <dc:date>2010-03-17T12:06:32</dc:date>
            <upnp:albumArtURI dlna:profileID="PNG_TN" xmlns:dlna="urn:schemas-dlna-org:metadata-1-0">URL/albumart/13.png</upnp:albumArtURI>
            <upnp:icon>URL/albumart/13.png</upnp:icon>
            <dc:creator>MC</dc:creator>
            <upnp:artist>MC</upnp:artist>
            <upnp:album>manimal</upnp:album>
            <upnp:genre>Rap</upnp:genre>
            <res protocolInfo="http-get:*:audio/mpeg:DLNA.ORG_PN=MP3;DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="8166707" duration="0:03:24">URL/1269.mp3</res>
            <res protocolInfo="http-get:*:image/png:DLNA.ORG_PN=PNG_TN;DLNA.ORG_CI=01;DLNA.ORG_FLAGS=00f00000000000000000000000000000" colorDepth="24" resolution="160x160">URL/albumart/13.png</res>
          </item>
          <item id="1277" parentID="20" restricted="1">
            <upnp:class>object.item.videoItem.movie</upnp:class>
            <dc:title>IronMan_TeaserTrailer-full_NEW.mpg</dc:title>
            <dc:date>2010-03-17T12:50:24</dc:date>
            <upnp:genre>Unknown</upnp:genre>
            <res protocolInfo="http-get:*:video/mpeg:DLNA.ORG_PN=(NULL);DLNA.ORG_OP=01;DLNA.ORG_CI=0;DLNA.ORG_FLAGS=01700000000000000000000000000000" size="98926592" resolution="1920x1080" duration="0:02:30">URL/1277.mpg</res>
          </item>
        </DIDL-Lite>

    Here, in the last item element, a few elements such as icon and albumArtURI are missing. Now, when I try to access the values with the following LINQ to XML query:

        var vAudioData = from xAudioinfo in xResponse.Descendants(ns + "DIDL-Lite").Elements(ns + "item").Where(x => !x.Elements(upnp + "album").l
                         orderby ((string)xAudioinfo.Element(upnp + "artist")).Trim()
                         select new RMSMedia
                         {
                             strAudioTitle = ((string)xAudioinfo.Element(dc + "title")).Trim(), //((string)xAudioinfo.Attribute("audioAlbumcoverImage")).Trim()=="" ? ((string)xAudioinfo.Element("Song")).Trim():""
                         };

    it throws "Object reference not set to an instance of an object." I do understand this is because of the missing elements; is there any way I can pre-check whether the elements exist, or check the number of elements inside item, or any other way? Any help is deeply appreciated. Thanks, Subhen
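
    The fix amounts to guarding each optional element before use: in LINQ to XML, Element() returns null for a missing child (and casting a null XElement to string yields null), so it is the subsequent .Trim() on that null that blows up - test for null before trimming. The same guard, sketched in Python's ElementTree just to illustrate the pattern (the file name is assumed):

        import xml.etree.ElementTree as ET

        ns = "{urn:schemas-upnp-org:metadata-1-0/DIDL-Lite/}"
        upnp = "{urn:schemas-upnp-org:metadata-1-0/upnp/}"
        dc = "{http://purl.org/dc/elements/1.1/}"

        for item in ET.parse("didl.xml").iter(ns + "item"):
            # findtext() returns the default instead of failing when the
            # child element is absent - the pre-check the question asks for.
            title = item.findtext(dc + "title", default="")
            album = item.findtext(upnp + "album", default="")
            print(title, album or "(no album)")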

    Read the article

  • Convert mp4 video to a format xbox 360 can play

    - by Björn Lindqvist
    Here is a video file my Xbox 360 refuses to play:

        $ MP4Box -info video.mp4
        * Movie Info *
            Timescale 90000 - Duration 02:18:33.365
            Fragmented File no - 2 track(s)
            File Brand mp42 - version 0
            Created: GMT Sat Jul 21 07:08:55 2012
            File has root IOD (9 bytes)
            Scene PL 0xff - Graphics PL 0xff - OD PL 0xff
            Visual PL: ISO Reserved Profile (0x7f)
            Audio PL: High Quality Audio Profile @ Level 2 (0x0f)
            No streams included in root OD
        iTunes Info:
            Encoder Software: HandBrake 0.9.6 2012022800
        Track # 1 Info - TrackID 1 - TimeScale 90000 - Duration 02:18:33.235
            Media Info: Language "Undetermined" - Type "vide:avc1" - 199318 samples
            Visual Track layout: x=0 y=0 width=1280 height=688
            MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
            AVC/H264 Video - Visual Size 1280 x 688
            AVC Info: 1 SPS - 1 PPS - Profile High @ Level 4.1
            NAL Unit length bits: 32
            Self-synchronized
        Track # 2 Info - TrackID 2 - TimeScale 48000 - Duration 02:18:33.365
            Media Info: Language "English" - Type "soun:mp4a" - 389689 samples
            MPEG-4 Config: Audio Stream - ObjectTypeIndication 0x40
            MPEG-4 Audio AAC LC - 6 Channel(s) - SampleRate 48000
            Synchronized on stream 1

        $ avconv -i video.mp4
        avconv version 0.8.4-4:0.8.4-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
          built on Nov 6 2012 16:51:33 with gcc 4.6.3
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: mp42isomavc1
            creation_time   : 2012-07-21 07:08:55
            encoder         : HandBrake 0.9.6 2012022800
          Duration: 02:18:33.36, start: 0.000000, bitrate: 2299 kb/s
            Stream #0.0(und): Video: h264 (High), yuv420p, 1280x688, 1973 kb/s, 23.98 fps, 90k tbr, 90k tbn, 180k tbc
            Metadata:
              creation_time   : 2012-07-21 07:08:55
            Stream #0.1(eng): Audio: aac, 48000 Hz, 5.1, s16, 319 kb/s
            Metadata:
              creation_time   : 2012-07-21 07:08:55
        At least one output file must be specified

    What tool, such as ffmpeg or mencoder, and what magic command-line incantation should I use to transcode this file into a format the Xbox 360 can play? I want the transcode process to retain as good video quality as possible.
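
    A hedged guess at the culprit plus a sketch of a fix: the video track (H.264 High@4.1) is within what the 360 plays from MP4, but the 6-channel AAC audio is not - the 360's MP4 profile expects 2-channel AAC LC. If that assumption holds, the video can be stream-copied untouched (no quality loss) and only the audio re-encoded, e.g. driven from Python (the flags are ffmpeg's; the native aac encoder of that era needs -strict experimental):

        import subprocess

        cmd = [
            "ffmpeg", "-i", "video.mp4",
            "-c:v", "copy",                # keep the H.264 stream as-is
            "-c:a", "aac", "-ac", "2",     # downmix 5.1 AAC to stereo
            "-b:a", "192k",
            "-strict", "experimental",     # required by the built-in aac encoder back then
            "video-xbox.mp4",
        ]
        subprocess.check_call(cmd)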

    Read the article

  • freebsd-update failure

    - by ctuffli
    Some time ago, I upgraded my FreeBSD box to 7.1-RC2, but now I'd like to move to 7.2-RELEASE. I tried running:

        # uname -mrsi
        FreeBSD 7.1-RC2 i386 GENERIC
        # freebsd-update upgrade -r 7.2-RELEASE
        Looking up update.FreeBSD.org mirrors... 3 mirrors found.
        Fetching metadata signature for 7.1-RC2 from update4.FreeBSD.org... failed.
        Fetching metadata signature for 7.1-RC2 from update5.FreeBSD.org... failed.
        Fetching metadata signature for 7.1-RC2 from update2.FreeBSD.org... failed.
        No mirrors remaining, giving up.

    Substituting 7.1 for 7.2 gives the same error. Adding the --debug option shows the failure as being:

        fetch: http://update4.FreeBSD.org/7.1-RC2/i386/latest.ssl: Not Found

    Is there any way to still do a binary upgrade of this system, given that the 7.1-RC* directories don't exist on http://update.freebsd.org anymore? Upgrading from source is an option, but I wanted to see if there was some way to salvage this installation.

    Read the article

  • How to delete removed devices from a mdadm RAID1?

    - by Kabuto
    I had to replace two hard drives in my RAID1. After adding the two new partitions, the old ones still show up as removed, while the new ones are only added as spares. I've had no luck removing the partitions marked as removed. Here's the RAID in question; note the two devices (0 and 1) with state removed.

        $ mdadm --detail /dev/md1
        mdadm: metadata format 00.90 unknown, ignored.
        mdadm: metadata format 00.90 unknown, ignored.
        /dev/md1:
                Version : 00.90
          Creation Time : Thu May 20 12:32:25 2010
             Raid Level : raid1
             Array Size : 1454645504 (1387.26 GiB 1489.56 GB)
          Used Dev Size : 1454645504 (1387.26 GiB 1489.56 GB)
           Raid Devices : 3
          Total Devices : 3
        Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 12 21:30:39 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 3
         Failed Devices : 0
          Spare Devices : 2
                   UUID : 10d7d9be:a8a50b8e:788182fa:2238f1e4
                 Events : 0.8717546

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               2       8       34        2      active sync   /dev/sdc2
               3       8       18        -      spare   /dev/sdb2
               4       8        2        -      spare   /dev/sda2

    How do I get rid of these devices and add the new partitions as active RAID devices?

    Update: I seem to have gotten rid of them. My RAID is resyncing, but the two drives are still marked as spares and are numbers 3 and 4, which looks wrong. I'll have to wait for the resync to finish. All I did was fix the metadata error by editing my mdadm.conf and rebooting. I had tried rebooting before, but this time it worked, for whatever reason.

            Number   Major   Minor   RaidDevice State
               3       8        2        0      spare rebuilding   /dev/sda2
               4       8       18        1      spare rebuilding   /dev/sdb2
               2       8       34        2      active sync   /dev/sdc2

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note - Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, Oracle/non-Oracle platforms; it is not a mandatory requirement to use Exadata as the base platform.

    Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT cost and complexity, and the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven, best-in-class consolidation platform, we are now seeing more and more customers evolve from an Exadata-based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution for transforming a typical silo'ed environment into an enterprise-class database cloud with self service, rapid elasticity and pay-per-use capabilities.

    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are chalked down from a recent DBaaS implementation in a real customer engagement:

    - Project Planning - The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes all together, so ensure you engage your actual end users and stakeholders early in the project, right from the scoping and planning stage.
    - Set up your EM 12c Cloud Control site - Once project plan approval and sign-off from stakeholders is achieved, refer to the EM 12c install guide. Some important tips for the site setup phase:
      - Review the new EM 12c sizing paper before you get started with the install
      - Select the Cloud, Chargeback and Trending, and Exadata plug-ins for deployment during install
      - Refer to the EM 12c Administrator's guide for high availability, security, and network/firewall best practices and options
      - Your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up
    - Set up roles and users - Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR) and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu -> Security option. For self-service (SSA) users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.
    - Configure the Software Library - The Cloud Administrator logs in and configures the software library via the Enterprise menu -> Provisioning and Patching option; the storage location is the OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.
    - Set up Self Update - Self Update is one of the most innovative new features in the EM 12c framework. Self Update can be accessed via Setup -> Extensibility by the Super Administrator and is the unified delivery mechanism for all new and updated entities (agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.
    - Deploy agents on all compute nodes, and discover Exadata targets - Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.
    - Configure privilege delegation settings - This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.
    - Provision Grid Infrastructure with RAC Database on compute nodes - Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud.
    - Customize the Create Database deployment procedure - The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure is used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables marked as locked 'Y' in the reference table.
    - Set up the Self Service Portal - This step involves setting up zones, user quotas, service templates and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator's guide.
    - Final checks - Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model, and make sure SSA administrators are familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.
    - GO LIVE - Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata-based database cloud. Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional useful topics around database cloud:
    - DBaaS implementation tips and tricks - right from setup to self service to managing the cloud lifecycle
    - 'How to' enable copies of real production databases in DBaaS with rapid provisioning in the database cloud
    - A case study of a customer who recently achieved success in their transformational journey from a traditional silo'ed environment to an Exadata-based database cloud using Enterprise Manager 12c

    More information:
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Exadata Discovery Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • 'NoneType' object has no attribute 'get' error using SQLAlchemy

    - by Az
    I've been trying to map an object to a database using SQLAlchemy but have run into a snag. Version info if handy: [OS: Mac OS X 10.5.8 | Python: 2.6.4 | SQLAlchemy: 0.5.8]

    The class I'm going to map:

        class Student(object):
            def __init__(self, name, id):
                self.id = id
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0
            def __repr__(self):
                return str(self)
            def __str__(self):
                return "%s %s" % (self.id, self.name)

    Background: I've got a function that reads the necessary information from a text database into these objects. The function more or less works, and I can easily access the information from the objects. Before the SQLAlchemy code runs, the function reads in the necessary info and stores it in the class. There is a dictionary called students which stores this as such:

        students = {}
        students[id] = Student(<all the info from the various "reader" functions>)

    Afterwards, there is an "allocation" algorithm that allocates projects to students. It does that well enough. allocated_project remains None if a student is unsuccessful in getting a project.

    SQLAlchemy bit: After all this happens, I'd like to map my object to a database table. Using the documentation, I've used the following code to map only certain bits. I also begin to create a session:

        from sqlalchemy import *
        from sqlalchemy.orm import *

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String)
        )
        metadata.create_all(engine)
        mapper(Student, students_table)
        Session = sessionmaker(bind=engine)
        sesh = Session()

    Now after that, I was curious to see if I could print out all the students from my students dictionary:

        for student in students.itervalues():
            print student

    What do I get but an error:

        Traceback (most recent call last):
          File "~/FYP_Tests/FYP_Tests.py", line 140, in <module>
            print student
          File "/~FYP_Tests/Parties.py", line 30, in __str__
            return "%s %s" % (self.id, self.name)
          File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/attributes.py", line 158, in __get__
            return self.impl.get(instance_state(instance), instance_dict(instance))
        AttributeError: 'NoneType' object has no attribute 'get'

    I'm at a loss as to how to resolve this issue, if it is an issue. If more information is required, please ask and I will provide it.
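
    One plausible cause, offered as a guess from the symptoms: mapper() instruments the class attributes, so Student instances constructed before mapper() ran have no instrumentation state behind them, and plain attribute access then dies inside the ORM exactly like this. If that is what's happening, mapping the class before the reader functions build the students dictionary should clear it up. A minimal sketch of the working order, using the same 0.5-era classical mapping:

        import collections
        from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
        from sqlalchemy.orm import mapper, sessionmaker

        class Student(object):
            def __init__(self, name, id):
                self.id = id
                self.name = name
                self.preferences = collections.defaultdict(set)

        engine = create_engine('sqlite:///:memory:')
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String))
        metadata.create_all(engine)

        mapper(Student, students_table)           # map FIRST...

        students = {1: Student("Muffett,M.", 1)}  # ...then build instances
        session = sessionmaker(bind=engine)()
        session.add_all(students.values())
        session.commit()
        print(students[1].name)                   # instrumented access now works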

    Read the article

  • OperationalError "unable to open database file" processing query results with SQLAlchemy and SQLite3

    - by Peter
    I'm running into this little problem that I hope is just a dumb user error. It looks like some sort of size limit with a query to a SQLite database. I managed to reproduce the issue with an in-memory DB and the simple script shown below. I can make it work by either reducing the number of records in the DB, or by reducing the size of each record, or by dropping the order_by() call. I am using Python 2.5.5 and SQLAlchemy 0.6.0 in a Cygwin environment. Thanks!

        #!/usr/bin/python
        from sqlalchemy.orm import sessionmaker
        import sqlalchemy
        import sqlalchemy.orm

        class Person(object):
            def __init__(self, name):
                self.name = name

        engine = sqlalchemy.create_engine('sqlite:///:memory:')
        Session = sessionmaker(bind=engine)
        metadata = sqlalchemy.schema.MetaData(bind=engine)
        person_table = sqlalchemy.Table('person', metadata,
            sqlalchemy.Column('id', sqlalchemy.types.Integer, primary_key=True),
            sqlalchemy.Column('name', sqlalchemy.types.String))
        metadata.create_all(engine)
        sqlalchemy.orm.mapper(Person, person_table)

        session = Session()
        session.add_all([Person("012345678901234567890123456789012") for i in range(5000)])
        session.commit()

        persons = session.query(Person).order_by(Person.name).all()
        print "count =", len(persons)
        session.close()

    The all() call on the query result fails with the OperationalError exception:

        Traceback (most recent call last):
          File "./stress.py", line 27, in <module>
            persons = session.query(Person).order_by(Person.name).all()
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1343, in all
            return list(self)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__
            return self._execute_and_instances(context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances
            mapper=self._mapper_zero_or_none())
          File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py", line 737, in execute
            clause, params or {})
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1109, in execute
            return Connection.executors[c](self, object, multiparams, params)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement
            return self.__execute_context(context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context
            context.parameters[0], context=context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1284, in _cursor_execute
            self._handle_dbapi_exception(e, statement, parameters, cursor, context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1282, in _cursor_execute
            self.dialect.do_execute(cursor, statement, parameters, context=context)
          File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
            cursor.execute(statement, parameters)
        sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file
        u'SELECT person.id AS person_id, person.name AS person_name \nFROM person ORDER BY person.name' ()
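
    A hedged hypothesis, given that shrinking the result set or dropping order_by() makes it go away: the sort is too big for SQLite to do in memory, it spills to a temporary file, and it is that temp file - not the database - that cannot be opened (temp paths under Cygwin are a known trouble spot). Two things worth trying, sketched against the same 0.6-era API:

        import os
        import sqlalchemy

        # Option 1: point SQLite at a temp directory that definitely exists
        # and is writable from the Cygwin environment.
        os.environ["SQLITE_TMPDIR"] = "/tmp"

        engine = sqlalchemy.create_engine('sqlite:///:memory:')
        conn = engine.connect()

        # Option 2: keep sort/temp structures in memory instead of on disk.
        conn.execute("PRAGMA temp_store = MEMORY")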

    Read the article

  • Reflection, get DataAnnotation attributes from buddy class.

    - by Feryt
    Hi. I need to check whether a property has a specific attribute defined in its buddy class:

        [MetadataType(typeof(Metadata))]
        public sealed partial class Address
        {
            private sealed class Metadata
            {
                [Required]
                public string Address1 { get; set; }

                [Required]
                public string Zip { get; set; }
            }
        }

    How do I check which properties have the Required attribute defined? Thank you.

    Read the article

  • Semantic search in P2P networks

    - by Sneha
    Hi, we are doing a project that involves semantic search in P2P networks. Basically, we want to build a file searching/sharing mechanism that semantically relates files based on their data. We are using RDF to represent the files' metadata. We are stuck on the database part: every peer has a local repository that it uses to store metadata. How do we implement this store? Please help.
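
    One way to sketch the per-peer local store, assuming rdflib (a Python RDF library) and a plain file-backed graph - the file name and predicate URIs here are all made up for illustration:

        import rdflib

        g = rdflib.Graph()
        g.parse("peer_metadata.ttl", format="turtle")   # this peer's metadata file

        # A SPARQL lookup stands in for the semantic-search step: find files
        # whose topic matches a query term, however matching is ultimately done.
        results = g.query("""
            PREFIX ex: <http://example.org/files#>
            SELECT ?file WHERE {
                ?file ex:topic ?topic .
                FILTER regex(str(?topic), "music", "i")
            }
        """)
        for row in results:
            print(row.file)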

    Read the article

  • The usage of Cassandra's internal keyspace "system"

    - by knorv
    The default Cassandra system keyspace, system, is present in all Cassandra installations. Judging from the output of the describe keyspace command, the keyspace is used partly for "persistent metadata for the local node" (LocationInfo) and partly for "hinted handoff data".

    - What persistent metadata for the local node is stored in system/LocationInfo?
    - What is the definition of hinted handoff in Cassandra terminology?
    - What hinted handoff data is stored in the system keyspace?

    Read the article

  • How to use EntityFramework connection string for Elmah?

    - by Andrii Kovalchuk
    In ELMAH, for logging errors to the database you can write:

        <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="EducoparkEntities"/>

    However, if I use Entity Framework, this doesn't work, because the connection string for EF contains metadata as well:

        <add name="EducoparkEntities"
             connectionString="metadata=res://*/EducoparkData.csdl|res://*/EducoparkData.ssdl|res://*/EducoparkData.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=(Local);Initial Catalog=...;User Id=...;Password=...;MultipleActiveResultSets=True&quot;"
             providerName="System.Data.EntityClient"/>

    So, how can I use the Entity Framework connection string in Elmah?
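
    A common workaround, sketched here (the connection string name is illustrative): give Elmah its own plain ADO.NET connection string - the same credentials as the EF one, minus the metadata wrapper - since Elmah.SqlErrorLog expects a System.Data.SqlClient string, not an EntityClient one:

        <add name="ElmahConnection"
             connectionString="Data Source=(Local);Initial Catalog=...;User Id=...;Password=...;MultipleActiveResultSets=True"
             providerName="System.Data.SqlClient"/>

        <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="ElmahConnection"/>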

    Read the article

  • eclipse+bzr (Or: DVCS + IDE)

    - by Adam Matan
    Hi, I have some projects in bzr repositories shared with colleagues. The problem is, I really want to switch to Eclipse for some projects, but I don't want to pollute the repository with the unnecessary metadata Eclipse creates in its workspaces. Any idea how to keep Eclipse's metadata outside my bzr repo?
    Adam
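
    A common approach, for what it's worth: Eclipse's workspace-wide metadata (the .metadata directory) lives wherever the workspace is created, so put the workspace outside the checkout and import the projects from their bzr locations; then tell bzr to ignore the per-project files Eclipse drops next to the sources. bzr ignore records the patterns in a versioned .bzrignore file at the repo root, e.g.:

        .project
        .classpath
        .settings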

    Read the article

  • How do I do the SQL equivalent of "DISTINCT" in CouchDB?

    - by Blaine LaFreniere
    I have a bunch of MP3 metadata in CouchDB. I want to return every album that appears in the MP3 metadata, but with no duplicates. A typical document looks like this:

        {
            "_id": "005e16a055ba78589695c583fbcdf7e26064df98",
            "_rev": "2-87aa12c52ee0a406084b09eca6116804",
            "name": "Fifty-Fifty Clown",
            "number": 15,
            "artist": "Cocteau Twins",
            "bitrate": 320,
            "album": "Stars and Topsoil: A Collection (1982-1990)",
            "path": "Cocteau Twins/Stars and Topsoil: A Collection (1982-1990)/15 - Fifty-Fifty Clown.mp3",
            "year": 0,
            "genre": "Shoegaze"
        }
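
    A sketch of the usual map/reduce answer, driven here through the couchdb-python package (database and view names are made up): emit one row per album and let a _count reduce with group=True collapse the duplicates, which is CouchDB's equivalent of DISTINCT.

        import couchdb

        db = couchdb.Server("http://localhost:5984/")["mp3s"]

        db["_design/albums"] = {
            "views": {
                "distinct": {
                    # one row per document that has an album
                    "map": "function(doc) { if (doc.album) emit(doc.album, null); }",
                    "reduce": "_count",
                }
            }
        }

        # group=True reduces per distinct key, so each album appears once
        for row in db.view("albums/distinct", group=True):
            print(row.key)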

    Read the article

  • SQLAlchemy unsupported type error - and table design issues?

    - by Az
    Hi there, back again with some more SQLAlchemy shenanigans. Let me step through this. My table is now set up like so:

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            Column('preferences', Integer),
            Column('allocated_rank', Integer),
            Column('allocated_project', Integer)
        )
        metadata.create_all(engine)
        mapper(Student, students_table)

    Fairly simple, and for the most part I've been enjoying the ability to query almost any bit of information I want, provided I avoid the error cases below. The class it is mapped from is:

        class Student(object):
            def __init__(self, sid, name):
                self.sid = sid
                self.name = name
                self.preferences = collections.defaultdict(set)
                self.allocated_project = None
                self.allocated_rank = 0
            def __repr__(self):
                return str(self)
            def __str__(self):
                return "%s %s" % (self.sid, self.name)

    Explanation: preferences is basically a set of all the projects the student would prefer to be assigned. When the allocation algorithm kicks in, a student's allocated_project emerges from this preference set. Now, if I try to do this:

        for student in students.itervalues():
            session.add(student)
        session.commit()

    it throws two errors, one for the allocated_project column (seen below) and a similar error for the preferences column:

        sqlalchemy.exc.InterfaceError: (InterfaceError) Error binding parameter 4 - probably unsupported type.
        u'INSERT INTO studs (sid, name, allocated_rank, allocated_project) VALUES (?, ?, ?, ?, ?, ?, ?)'
        [1101, 'Muffett,M.', 1, 888 Human-spider relationships (Supervisor id: 123)]

    If I go back into my code I find that, when I'm copying the preferences from the given text files, it actually refers to the Project class, which is mapped to a dictionary using the unique project ids (pid) as keys. Thus, as I iterate through each student via their rank to the preferences set, it adds not a project id but a reference to the project from the projects dictionary:

        students[sid].preferences[int(rank)].add(projects[int(pid)])

    Now this is very useful to me, since I can find out all I want about a student's preferred projects without having to run another check to pull up information about the project id. The form you see in the error comes from the object's print information:

        return "%s %s (Supervisor id: %s)" % (self.proj_id, self.proj_name, self.proj_sup)

    My questions are:

    1. I'm trying to store an object in a database field, aren't I? Would the correct way then be copying the project information (project id, name, etc.) into its own table, referenced by the unique project id? That way I could make the project field on the student table just an integer id and, when I need more information, join the tables - and so on and so forth for other tables?
    2. If the above makes sense, how does one maintain the relationship with a column of information in one table that is a key index in another table? Does this boil down to a database design problem?
    3. Are there any other elegant ways of accomplishing this?

    Apologies if this is a very long-winded question. It's rather crucial for me to solve this, so I've tried to explain as much as I can, whilst attempting to show that I'm trying (key word here, sadly) to understand what could be going wrong.
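
    A sketch of the normalized design the first question gestures at, assuming the same 0.5-era classical mapping (table, class and attribute names are illustrative): projects get their own table, the student row stores only the integer project id as a foreign key, and a relation() hands back the full Project object on demand.

        from sqlalchemy import (create_engine, MetaData, Table, Column,
                                Integer, String, ForeignKey)
        from sqlalchemy.orm import mapper, relation, sessionmaker

        engine = create_engine('sqlite:///:memory:')
        metadata = MetaData()

        projects_table = Table('projects', metadata,
            Column('pid', Integer, primary_key=True),
            Column('proj_name', String))

        students_table = Table('studs', metadata,
            Column('sid', Integer, primary_key=True),
            Column('name', String),
            # store only the key; join to projects when details are needed
            Column('allocated_project', Integer, ForeignKey('projects.pid')))

        metadata.create_all(engine)

        class Project(object): pass
        class Student(object): pass

        mapper(Project, projects_table)
        mapper(Student, students_table, properties={
            'project': relation(Project),   # the ORM resolves the FK for you
        })

        session = sessionmaker(bind=engine)()
        p = Project(); p.pid = 888; p.proj_name = "Human-spider relationships"
        s = Student(); s.sid = 1101; s.name = "Muffett,M."; s.project = p
        session.add(s)
        session.commit()
        print(session.query(Student).first().project.proj_name)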

    Read the article
