Search Results

Search found 65101 results on 2605 pages for 'big data'.


  • SQL Server Column Level Encryption - Rotating Keys

    - by BarDev
    We are thinking about using SQL Server column (cell) level encryption for sensitive data. There should be no problem when we initially encrypt the column, but we have a requirement that the encryption key be changed every year, and it seems that this requirement may be a problem. Assumption: the table that includes the sensitive column will have 500 million records. Below are the steps we have thought about implementing; during the encryption/decryption process, is the data online, and how long would this process take?
    1. Initially encrypt the column.
    2. At the new year, decrypt the column.
    3. Encrypt the column with the new key.
    Questions: When the column is being decrypted/encrypted, is the data online (available to be queried)? Does SQL Server provide a feature that allows for key changes while the data is online? BarDev
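    For what it's worth, cell-level keys can often be rotated in place without taking the data offline: DecryptByKey and EncryptByKey can run inside a single batched UPDATE, so only the rows in the current batch are locked while the rest of the table stays queryable. Below is a minimal C# sketch of that pattern. All object names (dbo.Sensitive, Ssn, OldKey, NewKey, RotationCert) are hypothetical, and the predicate that finds rows still on the old key assumes the ciphertext's first 16 bytes hold the encrypting key's GUID, which you should verify on your SQL Server version before relying on it.

    using System;
    using System.Data.SqlClient;

    class KeyRotationSketch
    {
        // One batch: re-encrypt up to 10,000 rows that still use the old key.
        // Table, column, key and certificate names are all hypothetical.
        const string BatchSql = @"
    OPEN SYMMETRIC KEY OldKey DECRYPTION BY CERTIFICATE RotationCert;
    OPEN SYMMETRIC KEY NewKey DECRYPTION BY CERTIFICATE RotationCert;

    UPDATE TOP (10000) dbo.Sensitive
    SET    Ssn = EncryptByKey(Key_GUID('NewKey'),
                              CAST(DecryptByKey(Ssn) AS varchar(11)))
    WHERE  CAST(SUBSTRING(Ssn, 1, 16) AS uniqueidentifier) = Key_GUID('OldKey');

    CLOSE SYMMETRIC KEY OldKey;
    CLOSE SYMMETRIC KEY NewKey;";

        static void Main()
        {
            using (var conn = new SqlConnection("Server=.;Database=AppDb;Integrated Security=true"))
            {
                conn.Open();
                int rows;
                do
                {
                    using (var cmd = new SqlCommand(BatchSql, conn))
                    {
                        cmd.CommandTimeout = 600;
                        rows = cmd.ExecuteNonQuery();   // rows re-encrypted in this batch
                    }
                    Console.WriteLine("Re-encrypted {0} rows", rows);
                } while (rows > 0);                     // done when no rows use the old key
            }
        }
    }

    Because DecryptByKey looks up the encrypting key by the GUID embedded in the ciphertext, rows still on the old key and rows already moved to the new key can both be read while the rotation is in flight, provided both keys are open for the session.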

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

    PE files

    .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable and execute native code to start the CLR, which then read and executed the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It is still part of the spec, though, and so is part of every .NET assembly.

    The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

    Structure of a PE file

    Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code, .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit.

    In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.

    PE headers

    As I said above, most of the header information isn't relevant to .NET assemblies. To help show what's going on, I've created a diagram identifying all the various parts of the first 512 bytes of a .NET executable assembly, with the bytes that I will refer to in this post highlighted. Bear in mind that all numbers are stored in the assembly in little-endian format; the hex number 0x0123 will appear as 23 01 in the diagram.

    The first 64 bytes of every file make up the DOS header. This starts with the magic number 'MZ' (0x4D, 0x5A in hex), identifying the file as an executable of some sort (an .exe or .dll). Most of the rest of this header is zeroed out. The important part of this header is at offset 0x3C - this contains the file offset of the PE signature (0x80).
    Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on.

    The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable. It is followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll.

    Next up are the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of these fields are to do with the CLR loader stub, which I will be covering in a later post.

    After the PE standard fields come the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is relevant is the highlighted Subsystem field, which specifies whether this is a GUI or console app: 2 for a GUI app, 3 for a console app.

    Data directories & section headers

    After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address tables are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins.

    After the data directories come the section headers, one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections:

    - .text: base RVA 0x2000, file offset 0x200
    - .rsrc: base RVA 0x4000, file offset 0xa00
    - .reloc: base RVA 0x6000, file offset 0x1000

    The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page of the right-click file properties dialog, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub.

    What about the CLR?

    As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (e.g. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly.

    Conclusion

    To summarize, the important data in the first 512 bytes of a file is:

    - DOS header. This contains a pointer to the PE signature.
    - DOS stub, which we'll be looking at in a later post.
    - PE signature
    - PE file header (aka COFF header). This specifies whether the file is an exe or a dll.
    - PE standard fields. These specify whether the file is AnyCPU/32-bit or 64-bit.
    - PE NT-specific fields. These specify what type of app this is, if it is an app.
    - Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section.
    - Section headers. These are used to map between RVA and file offset.
The important one is .text, which is where all the CLR data is stored. In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
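    To make those offsets concrete, here is a minimal C# sketch (my own illustration, not code from this post) that reads the fields discussed above straight out of an assembly on disk: the e_lfanew pointer at 0x3C, the 'PE\0\0' signature, the dll bit in the COFF characteristics, and the PE32/PE32+ magic. It also includes the RVA-to-file-offset arithmetic from the worked .rsrc example.

    using System;
    using System.IO;

    class PeHeaderPeek
    {
        static void Main(string[] args)
        {
            using (var br = new BinaryReader(File.OpenRead(args[0])))
            {
                // DOS header: magic 'MZ' (0x5A4D little-endian), e_lfanew at offset 0x3C.
                if (br.ReadUInt16() != 0x5A4D)
                    throw new BadImageFormatException("No MZ magic");
                br.BaseStream.Seek(0x3C, SeekOrigin.Begin);
                uint peOffset = br.ReadUInt32();            // 0x80 in the diagram

                // PE signature 'PE\0\0', then the COFF file header.
                br.BaseStream.Seek(peOffset, SeekOrigin.Begin);
                if (br.ReadUInt32() != 0x00004550)
                    throw new BadImageFormatException("No PE signature");
                br.ReadUInt16();                            // Machine
                ushort sections = br.ReadUInt16();          // NumberOfSections
                br.BaseStream.Seek(12, SeekOrigin.Current); // timestamp + symbol table fields
                br.ReadUInt16();                            // SizeOfOptionalHeader
                ushort characteristics = br.ReadUInt16();
                bool isDll = (characteristics & 0x2000) != 0;

                // PE standard fields: 0x10B = PE32 (x86/AnyCPU), 0x20B = PE32+ (x64).
                ushort magic = br.ReadUInt16();

                Console.WriteLine("Sections: {0}, dll: {1}, magic: 0x{2:X}", sections, isDll, magic);

                // RVA 0x401d in .rsrc (base RVA 0x4000, file offset 0xa00) -> file offset 0xa1d.
                Console.WriteLine("0x{0:X}", RvaToFileOffset(0x401d, 0x4000, 0xa00));
            }
        }

        // The mapping described above: offset = sectionFileOffset + (rva - sectionBaseRva).
        static uint RvaToFileOffset(uint rva, uint sectionBaseRva, uint sectionFileOffset)
        {
            return sectionFileOffset + (rva - sectionBaseRva);
        }
    }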

    Read the article

  • Sounds of the Future at Oracle OpenWorld 2013

    - by Alliances & Channels Redaktion
    "The future begins at Oracle OpenWorld", das Motto weckt große Erwartungen! Wie die Zukunft aussehen könnte, davon konnten sich 60.000 Besucherinnen und Besucher aus 145 Ländern vor Ort in San Francisco selbst überzeugen: In sage und schreibe 2.555 Sessions – verteilt über Downtown San Francisco – ging es dort um Zukunftstechnologien und neue Entwicklungen. Wie soll man zusammenfassen, was insgesamt 3.599 Speaker, fast die Hälfte übrigens Kunden und Partner, in vier Tagen an technologischen Visionen entwickelt und präsentiert haben? Nehmen wir ein konkretes Beispiel, das in diversen Sessions immer wieder auftauchte: Das „Internet of Things“, sprich „intelligente“ Alltagsgegenstände, deren eingebaute Minicomputer ohne den Umweg über einen PC miteinander kommunizieren und auf äußere Einflüsse reagieren. Für viele ist das heute noch Neuland, doch die Weiterentwicklung des Internet of Things eröffnet für Oracle, wie auch für die Partner, ein spannendes Arbeitsfeld und natürlich auch einen neuen Markt. Die omnipräsenten Fokus-Themen der viertägigen größten Hauskonferenz von Oracle hießen in diesem Jahr Customer Experience und Human Capital Management. Spannend für Partner waren auch die Strategien und die Roadmap von Oracle sowie die Neuigkeiten aus den Bereichen Engineered Systems, Cloud Computing, Business Analytics, Big Data und Customer Experience. Neue Rekorde stellte die Oracle OpenWorld auch im Netz auf: Mehr als 2,1 Millionen Menschen besuchten diese Veranstaltung online und nutzten dabei über 224 Social-Media Kanäle – fast doppelt so viele wie noch vor einem Jahr. Die gute Nachricht: Die Oracle OpenWorld bleibt online, denn es besteht nach wie vor die Möglichkeit, OnDemand-Videos der Keynote- und Session-Highlights anzusehen: Gehen Sie einfach auf Conference Video Highlights  und wählen Sie aus acht Bereichen entweder eine Zusammenfassung oder die vollständige Keynote beziehungsweise Session. Dort finden Sie auch Videos der eigenen Fach-Konferenzen, die im Umfeld der Oracle OpenWorld stattfanden: die JavaOne, die MySQL Connect und der Oracle PartnerNetwork Exchange. Beim Oracle PartnerNetwork Exchange wurden, ganz auf die Fragen und Bedürfnisse der Oracle Partner zugeschnitten, Themen wie Cloud für Partner, Applications, Engineered Systems und Hardware, Big Data, oder Industry Solutions behandelt, und es gab, ganz wichtig, viel Gelegenheit zu Austausch und Vernetzung. Konkret befassten sich dort beispielsweise Sessions mit Cloudanwendungen im Gesundheitsbereich, mit der Erstellung überzeugender Business Cases für Kundengespräche oder mit Mobile und Social Networking. Die aus Deutschland angereisten über 40 Partner trafen sich beim OPN Exchange zu einem anregenden gemeinsamen Abend mit den anderen Teilnehmern. Dass die Oracle OpenWorld auch noch zum sportlichen Highlight werden würde, kam denkbar unerwartet: Zeitgleich mit der Konferenz wurde nämlich in der Bucht von San Francisco die entscheidende 19. Etappe des Americas Cup ausgetragen. Im traditionsreichen Segelwettbewerb lag Team Oracle USA zunächst mit 1:8 zurück, schaffte es aber dennoch, den Sieg vor dem lange Zeit überlegenen Team Neuseeland zu holen und somit den Titel zu verteidigen. Selbstverständlich fand die Oracle OpenWorld auch ein großes Medienecho. Wir haben eine Auswahl für Sie zusammengestellt: - ChannelPartner- Computerwoche - Heise - Silicon über Big Data - Silicon über 12c

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends:

    Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test.
    Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test.

    The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb:

    smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
    Home page is http://smartmontools.sourceforge.net/

    === START OF INFORMATION SECTION ===
    Model Family:     Seagate Barracuda 7200.8 family
    Device Model:     ST3250823AS
    Serial Number:    3ND1GNBC
    Firmware Version: 3.03
    User Capacity:    250,059,350,016 bytes
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   7
    ATA Standard is:  Exact ATA specification draft version not indicated
    Local Time is:    Sun Apr 25 13:15:34 2010 EDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED

    General SMART Values:
    Offline data collection status:  (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
    Total time to complete Offline data collection: ( 430) seconds.
    Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
    Short self-test routine recommended polling time:    (  1) minutes.
    Extended self-test routine recommended polling time: ( 84) minutes.

    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x000f 047   039   006    Pre-fail Always  -           168450357
      3 Spin_Up_Time            0x0003 098   098   000    Pre-fail Always  -           0
      4 Start_Stop_Count        0x0032 100   100   020    Old_age  Always  -           33
      5 Reallocated_Sector_Ct   0x0033 100   100   036    Pre-fail Always  -           9
      7 Seek_Error_Rate         0x000f 087   060   030    Pre-fail Always  -           654745480
      9 Power_On_Hours          0x0032 055   055   000    Old_age  Always  -           40141
     10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
     12 Power_Cycle_Count       0x0032 100   100   020    Old_age  Always  -           51
    194 Temperature_Celsius     0x0022 037   062   000    Old_age  Always  -           37 (0 17 0 0)
    195 Hardware_ECC_Recovered  0x001a 047   039   000    Old_age  Always  -           168450357
    197 Current_Pending_Sector  0x0012 100   100   000    Old_age  Always  -           0
    198 Offline_Uncorrectable   0x0010 100   100   000    Old_age  Offline -           0
    199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           0
    200 Multi_Zone_Error_Rate   0x0000 100   253   000    Old_age  Offline -           0
    202 TA_Increase_Count       0x0032 100   253   000    Old_age  Always  -           0

    SMART Error Log Version: 1
    No Errors Logged

    SMART Self-test log structure revision number 1
    Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline     Completed without error  00%        40131            -
    # 2  Extended offline  Completed: read failure  30%        40129            379795511
    # 3  Short offline     Completed without error  00%        40084            -
    # 4  Short offline     Completed without error  00%        40060            -
    # 5  Short offline     Completed without error  00%        40036            -
    # 6  Short offline     Completed without error  00%        40013            -
    # 7  Short offline     Completed without error  00%        39990            -
    # 8  Extended offline  Completed without error  00%        39977            -
    # 9  Short offline     Completed without error  00%        39919            -
    #10  Short offline     Completed without error  00%        39895            -
    #11  Short offline     Completed without error  00%        39872            -
    #12  Short offline     Completed without error  00%        39848            -
    #13  Short offline     Completed without error  00%        39824            -
    #14  Short offline     Completed without error  00%        39801            -
    #15  Extended offline  Completed without error  00%        39789            -
    #16  Short offline     Completed without error  00%        39754            -
    #17  Short offline     Completed without error  00%        39732            -
    #18  Short offline     Completed without error  00%        39707            -
    #19  Short offline     Completed without error  00%        39683            -
    #20  Short offline     Completed without error  00%        39660            -
    #21  Short offline     Completed without error  00%        39636            -

    SMART Selective self-test log data structure revision number 1
    SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
       1       0       0 Not_testing
       2       0       0 Not_testing
       3       0       0 Not_testing
       4       0       0 Not_testing
       5       0       0 Not_testing
    Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.

    Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?

    Read the article

  • Veeam giveaway

    - by marc dekeyser
    Hey everybody! As you might have noticed, an extra banner has shown up on this site, from Veeam. Since they have decided to sponsor this blog (thank you very much, all! It would not have happened without you!) I'll periodically share some news from them with all of you. They are running a big giveaway if you register on their site, one of the prizes being a Surface tablet, which you all know is brilliant!   Veeam is now featuring monthly prize drawings with some very exciting prizes. Entering is easy: just one entry is all that's required for a chance to win every month until August 2013. Among the prizes are Microsoft Surface tablets, Apple iPads, and FREE passes to TechEd 2013 and VMworld 2013! Find out more about Veeam's big giveaway.

    Read the article

  • SQLAuthority News – #TechEDIn – TechEd India 2012 – Things to Do and Explore for SQL Enthusiast

    - by pinaldave
    TechEd India 2012 is just 48 hours away and I have been receiving lots of requests about how SQL enthusiasts can maximize the time they'll be spending at TechEd India 2012. Trust me: TechEd is the biggest tech event in India and it is much larger in magnitude than we can imagine. There are plenty of tracks there and lots of things to do. Honestly, we would need to clone ourselves multiple times to completely cover the event. However, I am going to talk about SQL enthusiasts only right now. In this post, I'll share a few things they can do at this big event. But before I start talking about specific things, there is one thing which is a must: the keynotes. There are amazing keynotes planned every single day at TechEd India 2012. One should not miss them at all.

    Social Media

    I am a big believer in social media. I am everywhere: Twitter, Facebook, LinkedIn and GPlus. I suggest you follow the tag #TechEdIn as well as contribute to the healthy conversation going on right now. You may want to follow a few of the SQL Server enthusiasts who are also attending events like TechEd India. This way, you will know where they are and you can contribute along with them. For a good start, you can follow all the speakers who are presenting at the event. I have linked all the speakers' names with their respective Twitter accounts.

    Networking

    Do not stop meeting new people. Introduce yourself. Catch the speakers after their sessions. Meet other SQL experts and discuss SQL as well as life outside SQL. The best way to start the communication is to talk about something new. Here are a few lines I usually use when I have to break the ice:
    - SQL Server 2012 has just been released and I have installed it.
    - How many SQL Server sessions are you going to attend? I am going to attend _________.
    - I am a big fan of SQL Server.

    Sessions

    Agenda Day 1:
    - T-SQL Rediscovered with SQL Server 2012 - Jacob Sebastian
    - Catapult your data with SQL Server 2012 integration services - Praveen Srivatsa
    - Processing Big Data with SQL Server 2012 and Hadoop - Stephan Forte
    - SQL Server Misconceptions and Resolution - A Practical Perspective - Pinal Dave and Vinod Kumar
    - Securing with ContainedDB in SQL Server 2012 - Pranab Majumdar

    Agenda Day 2:
    - Hands-on Lab - Exploring Power View with SQL Server 2012 - Ravi S. Maniam
    - Hands-on Lab - SQL Server 2012 - AlwaysOn Availability Groups - Amit Ganguli

    Agenda Day 3:
    - Peeling SQL Server like an Onion: Internals Debunked - Vinod Kumar
    - Speed Up! - Parallel Processes and Unparalleled Performance - Pinal Dave
    - Keeping Your Database Available - 'AlwaysOn' - Balmukund Lakhani
    - Lesser Known Facts of SQL Server Backup and Restore - Amit Banerjee
    - Top five reasons why you want SQL Server 2012 BI - Praveen Srivatsa

    Product Booth and Event Partners

    There will be a dedicated SQL Server booth at the event. I suggest you stop by there and talk with the SQL Server experts. Additionally, there will be booths for the various event partners. Stop by their booths and see if they have a product which can help your career. I know that Pluralsight has recently released my course on their online learning site, and if that interests you, you can talk about the subject with them.

    Bring Your Camera

    Make a list of the people you want to meet. Follow them on Twitter or send them an email to find out where they will be. Introduce yourself, meet them and have your conversation. Do not forget to take a photo with them and, later on, share the photo on social media. It would be nice to send everyone an email with the high-resolution images attached, if you have their email addresses.

    After-hours parties

    After-hours parties are not always about eating and meeting friends; sometimes they are very informative. Last time I ended up meeting a SQL expert, and we ended up talking for hours about various aspects of SQL Server. After 4 hours, we figured out that he lives in the same apartment complex as I do; we have since built an excellent friendship and he has become a family friend. So, my advice is that you seek out who is meeting where in the evening and see if you can get invited to the parties. Make new friends, but never lose mutual respect by doing something silly.

    Meet Me

    I will be at the event for three days straight. I will be around the SQL tracks. Please stop by and introduce yourself. I would like to meet you and talk to you. Meeting folks from the community is very important, as we all speak the same language at the end of the day: SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • CodePlex Daily Summary for Thursday, April 05, 2012

    CodePlex Daily Summary for Thursday, April 05, 2012Popular ReleasesCommunity TFS Build Extensions: April 2012: Release notes to follow...ClosedXML - The easy way to OpenXML: ClosedXML 0.65.2: Aside from many bug fixes we now have Conditional Formatting The conditional formatting was sponsored by http://www.bewing.nl (big thanks) New on v0.65.1 Fixed issue when loading conditional formatting with default values for icon sets New on v0.65.2 Fixed issue loading conditional formatting Improved inserts performanceLiberty: v3.2.0.0 Release 4th April 2012: Change Log-Added -Halo 3 support (invincibility, ammo editing) -Halo 3: ODST support (invincibility, ammo editing) -The file transfer page now shows its progress in the Windows 7 taskbar -"About this build" settings page -Reach Change what an object is carrying -Reach Change which node a carried object is attached to -Reach Object node viewer and exporter -Reach Change which weapons you are carrying from the object editor -Reach Edit the weapon controller of vehicles and turrets -An error dia...MSBuild Extension Pack: April 2012: Release Blog Post The MSBuild Extension Pack April 2012 release provides a collection of over 435 MSBuild tasks. A high level summary of what the tasks currently cover includes the following: System Items: Active Directory, Certificates, COM+, Console, Date and Time, Drives, Environment Variables, Event Logs, Files and Folders, FTP, GAC, Network, Performance Counters, Registry, Services, Sound Code: Assemblies, AsyncExec, CAB Files, Code Signing, DynamicExecute, File Detokenisation, GUID’...DotNetNuke® Community Edition CMS: 06.01.05: Major Highlights Fixed issue that stopped users from creating vocabularies when the portal ID was not zero Fixed issue that caused modules configured to be displayed on all pages to be added to the wrong container in new pages Fixed page quota restriction issue in the Ribbon Bar Removed restriction that would not allow users to use a dash in page names. Now users can create pages with names like "site-map" Fixed issue that was causing the wrong container to be loaded in modules wh...51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.3.1: One Click Install from NuGet Changes to Version 2.1.3.11. [assembly: AllowPartiallyTrustedCallers] has been added back into the AssemblyInfo.cs file to prevent failures with other assemblies in Medium trust environments. 2. The Lite data embedded into the assembly has been updated to include devices from December 2011. The 42 new RingMark properties will return Unknown if RingMark data is not available. Changes to Version 2.1.2.11Code Changes 1. The project is now licenced under the Mozilla...MVC Controls Toolkit: Mvc Controls Toolkit 2.0.0: Added Support for Mvc4 beta and WebApi The SafeqQuery and HttpSafeQuery IQueryable implementations that works as wrappers aroung any IQueryable to protect it from unwished queries. "Client Side" pager specialized in paging javascript data coming either from a remote data source, or from local data. LinQ like fluent javascript api to build queries either against remote data sources, or against local javascript data, with exactly the same interface. There are 3 different query objects exp...ExtAspNet: ExtAspNet v3.1.2: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? 
WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-04-04 v3.1.2 -??IE?????????????BUG(??"about:blank"?...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.50: Highlight features & improvements: • Significant performance optimization. • Allow store owners to create several shipments per order. Added a new shipping status: “Partially shipped”. • Pre-order support added. Enables your customers to place a Pre-Order and pay for the item in advance. Displays “Pre-order” button instead of “Buy Now” on the appropriate pages. Makes it possible for customer to buy available goods and Pre-Order items during one session. It can be managed on a product variant ...WiX Toolset: WiX v3.6 RC0: WiX v3.6 RC0 (3.6.2803.0) provides support for VS11 and a more stable Burn engine. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/4/3/WiX-v3.6-Release-Candidate-Zero-availableSageFrame: SageFrame 2.0: Sageframe is an open source ASP.NET web development framework developed using ASP.NET 3.5 with service pack 1 (sp1) technology. It is designed specifically to help developers build dynamic website by providing core functionality common to most web applications.iTuner - The iTunes Companion: iTuner 1.5.4475: Fix to parse empty playlists in iTunes LibraryDocument.Editor: 2012.2: Whats New for Document.Editor 2012.2: New Save Copy support New Page Setup support Minor Bug Fix's, improvements and speed upsDotNet.Highcharts: DotNet.Highcharts 1.2 with Examples: Tested and adapted to the latest version of Highcharts 2.2.1 Fixed Issue 359: Not implemented serialization array of type: System.Drawing.Color[] Fixed Issue 364: Crosshairs defined as array of CrosshairsForamt generates bad Highchart code For the example project: Added newest version of Highcharts 2.2.1 Added new demos to How To's section: Bind Data From Dictionary Bind Data From Object List Custom Theme Tooltip Crosshairs Theming The Reset Button Plot Band EventsVidCoder: 1.3.2: Added option for the minimum title length to scan. Added support to enable or disable LibDVDNav. Added option to prompt to delete source files after clearing successful completed items. Added option to disable remembering recent files and folders. Tweaked number box to only select all on a quick click.MJP's DirectX 11 Samples: Light Indexed Deferred Rendering: Implements light indexed deferred using per-tile light lists calculated in a compute shader, as well as a traditional deferred renderer that uses a compute shader for per-tile light culling and per-pixel shading.Pcap.Net: Pcap.Net 0.9.0 (66492): Pcap.Net - March 2012 Release Pcap.Net is a .NET wrapper for WinPcap written in C++/CLI and C#. It Features almost all WinPcap features and includes a packet interpretation framework. Version 0.9.0 (Change Set 66492)March 31, 2012 release of the Pcap.Net framework. Follow Pcap.Net on Google+Follow Pcap.Net on Google+ Files Pcap.Net.DevelopersPack.0.9.0.66492.zip - Includes all the tutorial example projects source files, the binaries in a 3rdParty directory and the documentation. It include...Extended WPF Toolkit: Extended WPF Toolkit - 1.6.0: Want an easier way to install the Extended WPF Toolkit?The Extended WPF Toolkit is available on Nuget. 
What's in the 1.6.0 Release?BusyIndicator ButtonSpinner Calculator CalculatorUpDown CheckListBox - Breaking Changes CheckComboBox - New Control ChildWindow CollectionEditor CollectionEditorDialog ColorCanvas ColorPicker DateTimePicker DateTimeUpDown DecimalUpDown DoubleUpDown DropDownButton IntegerUpDown Magnifier MaskedTextBox MessageBox MultiLineTex...Media Companion: MC 3.434b Release: General This release should be the last beta for 3.4xx. If there are no major problems, by the end of the week it will upgraded to 3.500 Stable! The latest mc_com.exe should be included too! TV Bug fix - crash when using XBMC scraper for TV episodes. Bug fix - episode count update when adding new episodes. Bug fix - crash when actors name was missing. Enhanced TV scrape progress text. Enhancements made to missing episodes display. Movies Bug fix - hide "Play Trailer" when multisaev...Better Explorer: Better Explorer 2.0.0.831 Alpha: - A new release with: - many bugfixes - changed icon - added code for more failsafe registry usage on x64 systems - not needed regfix anymore - added ribbon shortcut keys - Other fixes Note: If you have problems opening system libraries, a suggestion was given to copy all of these libraries and then delete the originals. Thanks to Gaugamela for that! (see discussion here: 349015 ) Note2: I was upload again the setup due to missing file!New Projects20120229Site1: 29/02/2012 site5B12: Un servizio WCF .NET 4.0 fruibile in modalità REST da Android e IOS IPhone e IPad. Realizzato dai ragazzi di 5BI Informatica dell'ITIS L. Da Vinci - Rimini A.S. 2011/12Accept Language Setter: Accept Language Setter is a System Tray application that lets you quickly set your browser's Accept Language without having to navigate through your browser's menus. It is ideal for developers and testers who work on internationalized websites and need to switch languages often.AGatherBot: C# 5.0 (.NET Framework 4.5) based Counter-Strike 1.6 Gatherbot, highly extensible (using MEF) and configurable.Aweber API .NET SDK: The Aweber API .NET SDK makes it easier communicating with Aweber's API. It covers the basic parts of the API including, lists, subscribers, campaigns and accounts. It is developed in C# .NET 4.0CINET: Continuous Integration Server developed in .Net with MonoClipster: Manage & watch your videos instantly. A project for the course Human Computer Interaction, CS3240 in National University of SingaporeDyna Bomber: With my friend we will try to reborn old game called Dyna Bober.Fer. CR: Pruebas de Software 2012 - DemoFreeDAM: While looking at DAM (Digital Asset Management) solutions, I have found that some of the paid versions are actually just putting a front-end on top of free tools that are actually doing the heavy lifting. This project will give you the freedom to choose a free-dam solution.golang: Go language LLVM compilerHSDLL: HSDLLJsQueryExpression: JsQueryExpression aims to create a javascript library to generate URIs by using QueryExpression (name is inspired from Microsoft Dynamics CRM but structure is different) object model to query OData resources.KunalPishewsccsemb: KunalPishewsccsembLightFarsiDictionary - ??????? ??? ?????/???????: A light weight farsi-english / engilsh-farsi dictionary optimized for fixing common typo mistakes. ??????? ??? ? ???? ????? ?? ??????? / ??????? ?? ????? ?? ???? ?????? ?????? ?????? ?? ????? ?? 
???.losezhang: ???TFS??,???30????!MIC Thailand #9 Foursquare: Sample for Maps and GeolocatiohnMtk: Mtk is a toolkit for ASP.NET MVC and FluentNHibernate.MyTest: test test testOpenSocial container on .NET: .NET implementation of OpenSocial containerOrchard Bootstrap: Bootstrap is an Orchard theme constructed using the Bootstrap template system from Twitter.PetroManager: this is a basic data management application.Portable Xna Components: Portable Xna Components makes it easier and faster for Xna game developers to create their games. Developers using this library don't have to worry about rudimentary tasks like drawing and collision detection. It is written in C#Project Management Plug-Ins: Restores functionality to create Microsoft Visio diagrams from Microsoft Project data, via a Microsoft Project AddIn.Project vit: some projectpruebas2012-1: demo de pruebas de softwarePSW20121: demo de pruebas de swPWS 2012-1: Demo para el curso de PWS UPN Cajamarca 2012 - 1Pws2012-1: Demo de calidad prueba de softwarePWSmodelo-2012-1: Demo Para le curso de pws upnradscommerce: Multi shop ecommerce system for web hoster, retailer for Indian Market and then for others.Randata: A DataProvider framework.Robots In Donutland: A solution for a project at DMU - Intelligent systems and roboticsShopScan - Application WP7 winner of Hackathon le Mobile 2012: Application WP7 winner of Hackathon le Mobile 2012Silverlight Printer Map: This printer map makes it easier for an end user in an enterprise to find and install networked printers. The end user need only find the closest printer on the map of the organization, click on it once and it then allow the program to map the printer, install the driver and set it as default printer. It's developed in C# The backend of this program relies on SharePoint 2010 libraries. See documentation for further information.SquareTasks: Gestionnaire de tâchesStructurizer: A tool to generate tiles for DNA self-assembly. Outputs a 2-dimensional map with corresponding glues which allow for use with any core substrate.WCF Duplex Service on Azure using a Silverlight Client: I had some trouble to build correctly a simple WCF Duplex Service that run on Windows AZURE while the Connection is made anonymously, after a lot of research I decided to share my solutionWord to MS-Test: Converts MS-Word document in a format similiar to FitNesse, to MS-Test source code.WPFModalDialog: Custom modal dialog in Wpf, contains basic controls in dialog window, also allow you to inform user about loading operations, using animated watcherXRM Developer Framework: XRM Developer Framework provides you with a Framework & Tools to easily build Enterprise Dynamics CRM 2011 Solutions using patterns & best practices. The Framework builds on many available technologies such as Visual Studio, CRM Developer Toolkit & Moq.

    Read the article

  • Monitoring Html Element CSS Changes in JavaScript

    - by Rick Strahl
    [ updated Feb 15, 2011: Added event unbinding to avoid unintended recursion ] Here's a scenario I've run into on a few occasions: I need to be able to monitor certain CSS properties on an HTML element and know when one of those properties changes. For example, I have some HTML element behavior plugins, like a drop shadow that attaches to any HTML element, but I then need to be able to automatically keep the shadow in sync with the element if it is dragged around the window or moved via code. Unfortunately there's no move event for HTML elements, so you can't tell when its location changes. So I've been looking around for some way to keep track of the element and a specific CSS property, but no luck. I suspect there's nothing native to do this, so the only way I could think of is to use a timer and poll rather frequently for the property. I ended up with a generic jQuery plugin that looks like this:

    (function($){
        $.fn.watch = function (props, func, interval, id) {
            /// <summary>
            /// Allows you to monitor changes in a specific
            /// CSS property of an element by polling the value.
            /// when the value changes a function is called.
            /// The function called is called in the context
            /// of the selected element (ie. this)
            /// </summary>
            /// <param name="prop" type="String">CSS Properties to watch sep. by commas</param>
            /// <param name="func" type="Function">
            /// Function called when the value has changed.
            /// </param>
            /// <param name="interval" type="Number">
            /// Optional interval for browsers that don't support DOMAttrModified or propertychange events.
            /// Determines the interval used for setInterval calls.
            /// </param>
            /// <param name="id" type="String">A unique ID that identifies this watch instance on this element</param>
            /// <returns type="jQuery" />
            if (!interval)
                interval = 200;
            if (!id)
                id = "_watcher";

            return this.each(function () {
                var _t = this;
                var el$ = $(this);
                var fnc = function () { __watcher.call(_t, id) };
                var itId = null;

                var data = {
                    id: id,
                    props: props.split(","),
                    func: func,
                    vals: [props.split(",").length],
                    fnc: fnc,
                    origProps: props,
                    interval: interval
                };
                $.each(data.props, function (i) { data.vals[i] = el$.css(data.props[i]); });

                el$.data(id, data);
                hookChange(el$, id, data.fnc);
            });

            function hookChange(el$, id, fnc) {
                el$.each(function () {
                    var el = $(this);
                    if (typeof (el.get(0).onpropertychange) == "object")
                        el.bind("propertychange." + id, fnc);
                    else if ($.browser.mozilla)
                        el.bind("DOMAttrModified." + id, fnc);
                    else
                        itId = setInterval(fnc, interval);
                });
            }

            function __watcher(id) {
                var el$ = $(this);
                var w = el$.data(id);
                if (!w) return;
                var _t = this;

                if (!w.func)
                    return;

                // must unbind or else unwanted recursion may occur
                el$.unwatch(id);

                var changed = false;
                var i = 0;
                for (i; i < w.props.length; i++) {
                    var newVal = el$.css(w.props[i]);
                    if (w.vals[i] != newVal) {
                        w.vals[i] = newVal;
                        changed = true;
                        break;
                    }
                }
                if (changed)
                    w.func.call(_t, w, i);

                // rebind event
                hookChange(el$, id, w.fnc);
            }
        }

        $.fn.unwatch = function (id) {
            this.each(function () {
                var el = $(this);
                var fnc = el.data(id).fnc;
                try {
                    if (typeof (this.onpropertychange) == "object")
                        el.unbind("propertychange." + id, fnc);
                    else if ($.browser.mozilla)
                        el.unbind("DOMAttrModified." + id, fnc);
                    else
                        clearInterval(id);
                }
                // ignore if element was already unbound
                catch (e) { }
            });
            return this;
        }
    })(jQuery);

    With this I can now monitor movement by monitoring, say, the top CSS property of the element.
    The following code creates a box and uses the draggable (jquery.ui) plugin and a couple of custom plugins that center and create a shadow. Here's how I can set this up with the watcher:

    $("#box")
        .draggable()
        .centerInClient()
        .shadow()
        .watch("top", function() {
            $(this).shadow();
        }, 70, "_shadow");

    ...

    $("#box")
        .unwatch("_shadow")
        .shadow("remove");

    This code basically sets up the window to be draggable and initially centered, and then a shadow is added. The .watch() call then assigns a CSS property to monitor (top in this case) and a function to call in response. The component now sets up a setInterval call and keeps pinging this property. When the top value changes, the supplied function is called. While this works and I can now drag my window around with the shadow following suit, it's not perfect by a long shot. The shadow move is delayed and so drags behind the window, but polling more frequently is not appropriate either, as the UI starts getting jumpy if the timer is set with too small an increment. This sort of monitor can be useful for other things as well, where operations are maybe not quite as time critical as a UI operation taking place. Can anybody see a better way of capturing movement of an element on the page?© Rick Strahl, West Wind Technologies, 2005-2011 Posted in ASP.NET  JavaScript  jQuery

    Read the article

  • securing server to server http post

    - by ad-inf
    The website is developed on JSF and Servlets, using the Apache web server. On my website, I accept data submissions from a few restricted websites using the HTTP POST method. We exchange a secret key to ensure that the correct source is sending the data. But is there any way to ensure that the data is submitted from a specific domain / IP address only? At the application level I can check request.header('Referer'), but some proxy or firewall might hide the referer. Can this be configured at the firewall or web server level to authenticate server-to-server communication? E.g., say my website is a payment gateway, integrated with www.abc.com. I want only abc.com to submit data. So a user using abc.com should be able to submit data to my website only through abc.com, and not any other website.
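    One pattern that avoids relying on the Referer header altogether: have each allowed site sign the POST body with a shared secret (an HMAC) and verify the signature before processing, in addition to any IP allow-listing at the firewall. Here is a rough sketch of the verification in C#; the X-Signature header name and base64 encoding are my own assumptions to illustrate the idea, and a servlet filter could do the same with javax.crypto.Mac.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class PostSignatureDemo
    {
        // Returns true if the body was signed with the shared secret.
        // Header name and encoding are illustrative; agree on both with the partner site.
        static bool IsValid(string body, string signatureHeader, string sharedSecret)
        {
            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(sharedSecret)))
            {
                byte[] expected = hmac.ComputeHash(Encoding.UTF8.GetBytes(body));
                byte[] presented;
                try { presented = Convert.FromBase64String(signatureHeader); }
                catch (FormatException) { return false; }

                // Constant-time comparison so timing doesn't leak where the mismatch is.
                if (expected.Length != presented.Length) return false;
                int diff = 0;
                for (int i = 0; i < expected.Length; i++) diff |= expected[i] ^ presented[i];
                return diff == 0;
            }
        }

        static void Main()
        {
            string body = "{\"orderId\":42,\"amount\":100}";
            string secret = "shared-secret-with-abc.com";   // hypothetical secret
            string sig;
            using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
                sig = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(body)));
            Console.WriteLine(IsValid(body, sig, secret));  // True
        }
    }

    Give each partner its own secret so a valid signature also identifies the sender, and include a timestamp or nonce in the signed payload if replay is a concern.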

    Read the article

  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
    Yesterday I posted about my BizTalk Archiving Pipeline Component, which can be found on CodePlex if anyone is interested in taking a look. During testing of this component I began to encounter an error whereby the component would throw an "Object reference not set to an instance of an object" error when processing as part of a custom pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data could be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack. It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context's resource tracker. So a block of my code goes from:

    originalStrm = bodyPart.GetOriginalDataStream();
    if (!originalStrm.CanSeek)
    {
        ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
        inmsg.BodyPart.Data = seekableStream;
        originalStrm = inmsg.BodyPart.Data;
    }

    fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
    binWriter = new BinaryWriter(fileArchive);
    byte[] buffer = new byte[bufferSize];
    int sizeRead = 0;
    while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
    {
        binWriter.Write(buffer, 0, sizeRead);
    }

    to

    originalStrm = bodyPart.GetOriginalDataStream();
    if (!originalStrm.CanSeek)
    {
        ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
        inmsg.BodyPart.Data = seekableStream;
        originalStrm = inmsg.BodyPart.Data;
    }

    pc.ResourceTracker.AddResource(originalStrm);

    fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
    binWriter = new BinaryWriter(fileArchive);
    byte[] buffer = new byte[bufferSize];
    int sizeRead = 0;
    while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
    {
        binWriter.Write(buffer, 0, sizeRead);
    }

    So far this seems to have solved the issue; the error is no more, and my archive component is continuing its way through testing.

    Read the article

  • ORA-7445 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 153788.1 ORA-600/ORA-7445 Lookup tool
    Note 1082674.1: A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Have you observed an ORA-07445 error reported in your alert log? While the ORA-600 error is "captured" as a handled exception in the Oracle source code, the ORA-7445 is an unhandled exception error due to an OS exception, which should result in the creation of a core file. An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces.

    Looking for the best way to diagnose? Whenever an ORA-7445 error is raised, a core file is generated. There may be a trace file generated with the error as well. Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data. Diagnostic files are written into a root directory for all diagnostic data called the ADR home. Core files at 11g will go to the ADR HOME/cdump directory. For more information on the Oracle 11g Diagnosability feature see:
    Note 453125.1 11g Diagnosability Frequently Asked Questions
    Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    NOTE: While the core file is captured in the Diagnosability infrastructure, the file may not be included with a diagnostic package.

    1. Check the Alert Log
    The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems, and you should review the first error and associated core file or trace file and work down the list of errors.
    Note 1020463.6 DIAGNOSING ORA-3113 ERRORS
    Note 1812.1 TECH: Getting a Stack Trace from a CORE file
    Note 414966.1 RDA Documentation Index

    If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If you see the message "MAX DUMP FILE SIZE EXCEEDED" at the end of the file, the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'. There could be vital diagnostic information missing from the file, and discovering the root issue may be very difficult. Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information. For pointers on deeper analysis of these errors see:
    Note 390293.1 Introduction to 600/7445 Internal Error Analysis
    Note 211909.1 Customer Introduction to ORA-7445 Errors

    2. Search the 600/7445 Lookup Tool
    Visit My Oracle Support to access the ORA-00600 Lookup tool (Note 153788.1). The ORA-600/ORA-7445 Lookup tool may lead you to applicable content in My Oracle Support on the problem, and it can be used to investigate the problem with argument data from the error message, or you can pull out key stack pointers from the associated trace file to match up against known bugs.

    3. "Fine tune" searches in the Knowledge Base
    As the ORA-7445 error indicates an unhandled exception in the Oracle source code, your search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set. The more you can learn about the environment and code leading to the errors, the easier it will be to narrow the hit list to match your problem.
    Note 153788.1 ORA-600/ORA-7445 Troubleshooter
    Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    NOTE: If no trace file is captured, see Note 1812.1 TECH: Getting a Stack Trace from a CORE file. Core files are managed through 11g Diagnosability, but are not packaged with other diagnostic data automatically. The core files can be quite large, but may be useful during analysis within Oracle Support.

    4. If assistance is required from Oracle
    Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, please provide, at a minimum:
    - The alert log
    - Associated trace file(s) or incident package at 11g
    - Patch level information
    - Core file(s)
    - Information about changes in configuration and/or application prior to the issues
    - If the error is reproducible, a self-contained reproducible testcase: Note 232963.1 How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors
    - RDA report or Oracle Configuration Manager information:
      Oracle Configuration Manager Quick Start Guide
      Note 548815.1 My Oracle Support Configuration Management FAQ
      Note 414966.1 RDA Documentation Index

    For reference to the content in this blog, refer to Note 1092832.1 Master Note for Diagnosing ORA-600

    Read the article

  • Join us at the Gartner MDM Summit in LA on April 4-5

    - by Dain C. Hansen
    This year, we're proud to announce that Oracle is a Platinum sponsor of the Gartner Master Data Management Summit, April 4-5, 2012 in Los Angeles. The event is a follow-on from the Gartner BI Summit, so if you are already attending that event, stick around on Wednesday and Thursday and don't miss it. Especially, don't miss our key session at 9:30 AM on Thursday, April 5th, "Brace for Impact: Key Trends in Master Data Management" with Ford Goodman and Dain Hansen. Master Data Management helps organizations perform better by creating a single coherent version of customers, products and suppliers. But how do you get started? And if you've laid a foundation, how do you become world-class? Designed to address all MDM maturity levels, the Gartner Master Data Management Summit delivers the tools, technologies and best practices to help you take control of your master data and dramatically improve business performance. Attendees will have the opportunity to meet with Oracle experts in a variety of sessions, including demonstrations during the showcase receptions:
    - Oracle Customer Case Study and Solution Provider Session
    - Oracle Solution Showcase Receptions
    - Oracle Face-to-Face Meetings

    Hot topics to be covered:
    - Forecasting key trends shaping MDM
    - Building a business-driven MDM program
    - Assessing MDM maturity
    - Creating the MDM organization
    - Evolving to multidomain MDM

    To learn more about this event, or to register, click here.

    Read the article

  • Library like ENet, but for TCP?

    - by Milo
    I'm not looking to use boost::asio; it is overly complex for my needs. I'm building a game that is cross-platform, for desktop, iPhone and Android. I found a library called ENet which is pretty much what I need, but it uses UDP, which does not seem to support encryption and a few other things. Given that the game is an event-driven card game, TCP seems like the right fit. However, all I have found is Winsock / Berkeley sockets and boost::asio. Here is a sample client-server application with ENet:

    #include <enet/enet.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <string>
    #include <iostream>

    class Host
    {
        ENetAddress address;
        ENetHost * server;
        ENetHost* client;
        ENetEvent event;

    public:
        Host()
            : server(NULL)
        {
            enet_initialize();
            setupServer();
        }

        void setupServer()
        {
            if (server)
            {
                enet_host_destroy(server);
                server = NULL;
            }
            address.host = ENET_HOST_ANY;
            /* Bind the server to port 1721. */
            address.port = 1721;

            server = enet_host_create(&address /* the address to bind the server host to */,
                                      32 /* allow up to 32 clients and/or outgoing connections */,
                                      2  /* allow up to 2 channels to be used, 0 and 1 */,
                                      0  /* assume any amount of incoming bandwidth */,
                                      0  /* assume any amount of outgoing bandwidth */);
        }

        void daLoop()
        {
            while (true)
            {
                /* Wait up to 5000 milliseconds for an event. */
                while (enet_host_service(server, &event, 5000) > 0)
                {
                    ENetPacket * packet;

                    switch (event.type)
                    {
                    case ENET_EVENT_TYPE_CONNECT:
                        printf("A new client connected from %x:%u.\n",
                               event.peer->address.host,
                               event.peer->address.port);
                        /* Store any relevant client information here. */
                        event.peer->data = "Client information";

                        /* Create a reliable packet of size 7 containing "packet\0" */
                        packet = enet_packet_create("packet", strlen("packet") + 1,
                                                    ENET_PACKET_FLAG_RELIABLE);
                        /* Extend the packet and append the string "foo", so it now */
                        /* contains "packetfoo\0" */
                        enet_packet_resize(packet, strlen("packetfoo") + 1);
                        strcpy((char*)&packet->data[strlen("packet")], "foo");

                        /* Send the packet to the peer over channel id 0. */
                        /* One could also broadcast the packet by */
                        /* enet_host_broadcast (host, 0, packet); */
                        enet_peer_send(event.peer, 0, packet);
                        /* One could just use enet_host_service() instead. */
                        enet_host_flush(server);
                        break;

                    case ENET_EVENT_TYPE_RECEIVE:
                        printf("A packet of length %u containing %s was received from %s on channel %u.\n",
                               event.packet->dataLength,
                               event.packet->data,
                               event.peer->data,
                               event.channelID);
                        /* Clean up the packet now that we're done using it. */
                        enet_packet_destroy(event.packet);
                        break;

                    case ENET_EVENT_TYPE_DISCONNECT:
                        printf("%s disconnected.\n", event.peer->data);
                        /* Reset the peer's client information. */
                        event.peer->data = NULL;
                    }
                }
            }
        }

        ~Host()
        {
            if (server)
            {
                enet_host_destroy(server);
                server = NULL;
            }
            atexit(enet_deinitialize);
        }
    };

    class Client
    {
        ENetAddress address;
        ENetEvent event;
        ENetPeer *peer;
        ENetHost* client;

    public:
        Client()
            : peer(NULL)
        {
            enet_initialize();
            setupPeer();
        }

        void setupPeer()
        {
            client = enet_host_create(NULL /* create a client host */,
                                      1 /* only allow 1 outgoing connection */,
                                      2 /* allow up to 2 channels to be used, 0 and 1 */,
                                      57600 / 8 /* 56K modem with 56 Kbps downstream bandwidth */,
                                      14400 / 8 /* 56K modem with 14 Kbps upstream bandwidth */);
            if (client == NULL)
            {
                fprintf(stderr, "An error occurred while trying to create an ENet client host.\n");
                exit(EXIT_FAILURE);
            }

            /* Connect to some.server.net:1721. */
            enet_address_set_host(&address, "192.168.2.13");
            address.port = 1721;

            /* Initiate the connection, allocating the two channels 0 and 1. */
            peer = enet_host_connect(client, &address, 2, 0);
            if (peer == NULL)
            {
                fprintf(stderr, "No available peers for initiating an ENet connection.\n");
                exit(EXIT_FAILURE);
            }

            /* Wait up to 20 seconds for the connection attempt to succeed. */
            if (enet_host_service(client, &event, 20000) > 0 &&
                event.type == ENET_EVENT_TYPE_CONNECT)
            {
                std::cout << "Connection to some.server.net:1721 succeeded." << std::endl;
            }
            else
            {
                /* Either the timeout is up or a disconnect event was */
                /* received. Reset the peer in the event the timeout */
                /* had run out without any significant event. */
                enet_peer_reset(peer);
                puts("Connection to some.server.net:1721 failed.");
            }
        }

        void daLoop()
        {
            ENetPacket* packet;

            /* Create a reliable packet of size 7 containing "backet\0" */
            packet = enet_packet_create("backet", strlen("backet") + 1,
                                        ENET_PACKET_FLAG_RELIABLE);
            /* Extend the packet and append the string "foo", so it now */
            /* contains "backetfoo\0" */
            enet_packet_resize(packet, strlen("backetfoo") + 1);
            strcpy((char*)&packet->data[strlen("backet")], "foo");

            /* Send the packet to the peer over channel id 0. */
            enet_peer_send(event.peer, 0, packet);
            enet_host_flush(client);

            while (true)
            {
                /* Wait up to 1000 milliseconds for an event. */
                while (enet_host_service(client, &event, 1000) > 0)
                {
                    switch (event.type)
                    {
                    case ENET_EVENT_TYPE_RECEIVE:
                        printf("A packet of length %u containing %s was received from %s on channel %u.\n",
                               event.packet->dataLength,
                               event.packet->data,
                               event.peer->data,
                               event.channelID);
                        /* Clean up the packet now that we're done using it. */
                        enet_packet_destroy(event.packet);
                        break;
                    }
                }
            }
        }

        ~Client()
        {
            atexit(enet_deinitialize);
        }
    };

    int main()
    {
        std::string a;
        std::cin >> a;

        if (a == "host")
        {
            Host host;
            host.daLoop();
        }
        else
        {
            Client c;
            c.daLoop();
        }
        return 0;
    }

    I looked at some socket tutorials and they seemed a bit too low level. I just need something that abstracts away the platform (e.g., no Winsock) and that has the basic ability to keep track of connected clients and send them messages. Thanks

    Read the article

  • Oracle presents its annual results

    - by pfolgado
Oracle has just announced its results for the fourth quarter and for fiscal year FY11. The most significant figures are:
- Total revenues grew 33%, reaching a total of 35.6 billion dollars
- New license sales grew 23%
- Hardware revenues of 4.4 billion dollars
- Operating income grew 39%
- Earnings per share grew 38%, to 1.67 dollars

“In Q4, we achieved a 19% new software license growth rate with almost no help from acquisitions,” said Oracle President and CFO, Safra Catz. “This strong organic growth combined with continuously improving operational efficiencies enabled us to deliver a 48% operating margin in the quarter. As our results reflect, we clearly exceeded even our own high expectations for Sun’s business.”

“In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4,” said Oracle President, Mark Hurd. “Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12.”

“In FY11 Oracle’s database business experienced its fastest growth in a decade,” said Oracle CEO, Larry Ellison. “Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we’ve been focused on the big business opportunity presented by Big Data.”

Oracle Reports Q4 GAAP EPS Up 34% To 62 Cents; Q4 Non-GAAP EPS Up 25% To 75 Cents; Q4 Software New License Sales Up 19%; Q4 Total Revenue Up 13%

Oracle today announced fiscal 2011 Q4 GAAP total revenues were up 13% to $10.8 billion, while non-GAAP total revenues were up 12% to $10.8 billion. Both GAAP and non-GAAP new software license revenues were up 19% to $3.7 billion. Both GAAP and non-GAAP software license updates and product support revenues were up 15% to $4.0 billion. Both GAAP and non-GAAP hardware systems products revenues were down 6% to $1.2 billion. GAAP operating income was up 32% to $4.4 billion, and GAAP operating margin was 40%. Non-GAAP operating income was up 19% to $5.2 billion, and non-GAAP operating margin was 48%. GAAP net income was up 36% to $3.2 billion, while non-GAAP net income was up 27% to $3.9 billion. GAAP earnings per share were $0.62, up 34% compared to last year, while non-GAAP earnings per share were up 25% to $0.75. GAAP operating cash flow on a trailing twelve-month basis was $11.2 billion.

For fiscal year 2011, GAAP total revenues were up 33% to $35.6 billion, while non-GAAP total revenues were up 33% to $35.9 billion. Both GAAP and non-GAAP new software license revenues were up 23% to $9.2 billion. GAAP software license updates and product support revenues were up 13% to $14.8 billion, while non-GAAP software license updates and product support revenues were up 13% to $14.9 billion. Both GAAP and non-GAAP hardware systems products revenues were $4.4 billion. GAAP operating income was up 33% to $12.0 billion, and GAAP operating margin was 34%. Non-GAAP operating income was up 27% to $15.9 billion, and non-GAAP operating margin was 44%. GAAP net income was up 39% to $8.5 billion, while non-GAAP net income was up 34% to $11.4 billion. GAAP earnings per share were $1.67, up 38% compared to last year, while non-GAAP earnings per share were up 33% to $2.22.

In addition, Oracle also announced that its Board of Directors declared a quarterly cash dividend of $0.06 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 13, 2011, with a payment date of August 3, 2011.

    Read the article

  • 2012 Independent Oracle Users Group Survey: Closing the Security Gap

    - by jgelhaus
    What Security Gaps Do You Have? The latest survey report from the Independent Oracle Users Group (IOUG) uncovers trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. According to the report, “despite growing threats and enterprise data security risks, organizations that do implement appropriate detective, preventive, and administrative controls are seeing significant results.” Download a free copy of the 2012 IOUG Data Security Survey Report and find out what your business can do to close the security gap. 

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… These questions keep coming up, and all try to fundamentally explain what “cloud” means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the two primary foundations of cloud computing: elasticity and availability.

Elasticity
The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…). This also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider and a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

Availability
The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of that, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies establish yearly tests to validate failover procedures. To a certain degree, certain IaaS providers can assist with complex disaster recovery planning and with setting up data centers that can achieve successful failover. However the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster recovery requirements by hiding many of the underlying complexities.

Cloud Providers
A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premise. For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which allows you to scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premise as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores, somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on premise. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

Not for Everyone
Note however that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult, and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

Consistent Customer Experience, Predictable Cost
With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: a consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to the first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.

Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net.

    Read the article

  • Lockdown Your Database Security

    - by Troy Kitch
A new article in Oracle Magazine outlines a comprehensive defense-in-depth approach for appropriate and effective database protection. There are multiple ways attackers can disrupt the confidentiality, integrity and availability of data, and therefore putting in place layers of defense is the best measure to protect your sensitive customer and corporate data. “In most organizations, two-thirds of sensitive and regulated data resides in databases,” points out Vipin Samar, vice president of database security technologies at Oracle. “Unless the databases are protected using a multilayered security architecture, that data is at risk to be read or changed by administrators of the operating system, databases, or network, or hackers who use stolen passwords to pose as administrators. Further, hackers can exploit legitimate access to the database by using SQL injection attacks from the Web. Organizations need to mitigate all types of risks and craft a security architecture that protects their assets from attacks coming from different sources.” Register and read more in the online magazine format.

    Read the article

  • Two more Entity Framework videos on Pluralsight

Two new videos that I have created for Visual Studio 2010 have just been published to Pluralsight On-Demand. If you don't have a Pluralsight subscription (yet), these videos are available as part of POD's free guest pass along with lots of other great content. The new vids are Exploring the Classes Generated from an Entity Data Model and Consuming an Entity Data Model from a Separate .NET Project. They'll be available on the MSDN Data Developer center as well (msdn.microsoft.com/data) in the very...

    Read the article

  • ASPNET WebAPI REST Guidance

    - by JoshReuben
ASP.NET Web API is an ideal platform for building RESTful applications on the .NET Framework. While I may be more partial to NodeJS these days, there is no denying that WebAPI is a well engineered framework. What follows is my investigation of how to leverage WebAPI to construct a RESTful frontend API.

The Advantages of REST Methodology over SOAP
- Simpler API for CRUD ops
- Standardized development methodology - consistent and intuitive
- Standards based → client interop
- Wide industry adoption, ease of use → easy to add new devs
- Avoids service method signature blowout
- Smaller payloads than SOAP
- Stateless → no session data means multi-tenant scalability
- Cache-ability
- Testability

General RESTful API Design Overview
- Utilize the HTTP protocol - usage of HTTP methods for CRUD, standard HTTP response codes, common HTTP headers and MIME types.
- Resources are mapped to URLs, actions are mapped to verbs and the rest goes in the headers.
- Keep the API semantic, resource-centric - a RESTful, resource-oriented service exposes a URI for every piece of data the client might want to operate on. A REST-RPC hybrid exposes a URI for every operation the client might perform: one URI to fetch a piece of data, a different URI to delete that same data.
- Utilize the URI to specify CRUD op, version, language, output format: http://api.MyApp.com/{ver}/{lang}/{resource_type}/{resource_id}.{output_format}?{key&filters}
- Entity CRUD operations are matched to HTTP methods: Create - POST / PUT; Read - GET (cacheable); Update - PUT; Delete - DELETE.
- Use URIs to represent hierarchies - resources in RESTful URLs are often chained.
- Statelessness allows for idempotency - apply an op multiple times without changing the result. POST is non-idempotent; the rest are idempotent (if DELETE flags records instead of deleting them).
- Cache indication - leverage HTTP headers to label cacheable content and indicate the permitted duration of cache.
- PUT vs POST - the client uses PUT when it determines which URI (Id key) the new resource should have; the client uses POST when the server determines the key. PUT takes a second param - the id. POST creates a new resource; the server assigns the URI for the new object and returns this URI as part of the response message. Note: the PUT method replaces the entire entity - that is, the client is expected to send a complete representation of the updated product. If you want to support partial updates, the PATCH method is preferred. DELETE deletes a resource at a specified URI - typically takes an id param.
- Leverage common HTTP response codes in response headers:
  200 OK: success
  201 Created: used on POST requests when creating a new resource
  304 Not Modified: no new data to return
  400 Bad Request: invalid request
  401 Unauthorized: authentication
  403 Forbidden: authorization
  404 Not Found: entity does not exist
  406 Not Acceptable: bad params
  409 Conflict: for POST / PUT requests if the resource already exists
  500 Internal Server Error
  503 Service Unavailable
- Leverage uncommon HTTP verbs to reduce payload sizes: HEAD retrieves just the resource meta-information; OPTIONS returns the actions supported for the specified resource; PATCH performs partial modification of a resource.
- When using PUT, POST or PATCH, send the data as a document in the body of the request. Don't use query parameters to alter state.
- Utilize headers for content negotiation, caching, authorization, throttling:
  - Content negotiation - choose representation (e.g. JSON or XML and version), language & compression.
    Signal via RequestHeader.Accept & ResponseHeader.Content-Type:
      Accept: application/json;version=1.0
      Accept-Language: en-US
      Accept-Charset: UTF-8
      Accept-Encoding: gzip
  - Caching - ResponseHeader: Expires (absolute expiry time) or Cache-Control (relative expiry time).
  - Authorization - basic HTTP authentication uses RequestHeader.Authorization to specify a base64 encoded "username:password" string. It can be used in combination with SSL/TLS (HTTPS) and can leverage OAuth2 3rd party token-claims authorization.
      Authorization: Basic sQJlaTp5ZWFslylnaNZ=
  - Rate limiting - not currently part of HTTP, so specify non-standard headers prefixed with X- in the ResponseHeader:
      X-RateLimit-Limit: 10000
      X-RateLimit-Remaining: 9990
- HATEOAS methodology - Hypermedia As The Engine Of Application State - leverage the API as a state machine where resources are states and the transitions between states are links between resources, included in their representation (hypermedia) - get API metadata signatures from the response Link header. In a truly REST based architecture any URL, except the initial URL, can be changed, even to other servers, without worrying about the client.
- Error responses - do not just send back a 200 OK with every response. A response should consist of an HTTP error status code (jQuery has automated support for this), a human readable message, a link to a meaningful state transition, and the original data payload that was problematic.
- URIs will typically map to a server-side controller and a method name specified by the type of request method. Stuffing all your calls into just four methods is not as crazy as it sounds.
- Scoping - path variables look like you're traversing a hierarchy, and query variables look like you're passing arguments into an algorithm.
- Mapping URIs to controllers - having one controller for each resource is not a rule; you can consolidate, routing requests to the appropriate controller and action method.
- Keep URLs consistent - sometimes it's tempting to just shorten our URIs; this is not recommended, as it can cause confusion.
- Join naming - for many-to-many entity relations there may be multiple hierarchy traversal paths.
- Routing - a useful level of indirection for versioning and server backend mocking in development.

ASP.NET Web API Considerations
ASP.NET Web API implements a lot (but not all) of these RESTful API design considerations as part of its infrastructure and via its coding conventions.

Overview
When developing an API there are basically three main steps:
1. Plan out your URIs.
2. Set up return values and response codes for your URIs.
3. Implement a framework for your API.

Design
- Leverage the Models MVC folder.
- Repositories - support IoC for tests, abstraction.
- Create DTO classes - a level of indirection decouples & allows swap out.
- Self links can be generated using the UrlHelper.
- Use IQueryable to support projections across the wire.
- Models can support restful navigation properties - ICollection<T>.
- Async mechanism for long running ops - return a response with a ticket; the client can then poll or be pushed the final result later.
- Design for testability - test using HttpClient, jQuery ($.getJSON, $.each), Fiddler, browser debug. Leverage IDependencyResolver - an IoC wrapper for mocking.
- Easy debugging - IE F12 developer tools: Network tab, Request Headers tab.

Routing
- The HTTP request method is matched to the method name. (This rule applies only to GET, POST, PUT, and DELETE requests.)
- {id}, if present, is matched to a method parameter named id.
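To make that convention concrete, here is a minimal sketch of a convention-routed controller (usings omitted, as in the other snippets here; Product and the repository are hypothetical stand-ins, not framework types):

public class ProductsController : ApiController
{
    // Hypothetical repository - swap in your own data access
    static readonly IProductRepository repository = new ProductRepository();

    // GET api/products - matched because the method name starts with "Get"
    public IEnumerable<Product> GetAllProducts()
    {
        return repository.GetAll();
    }

    // GET api/products/5 - the {id} route placeholder binds to the "id" parameter
    public Product GetProduct(int id)
    {
        Product item = repository.Get(id);
        if (item == null)
            throw new HttpResponseException(HttpStatusCode.NotFound);
        return item;
    }

    // PUT api/products/5 - idempotent full replace; the client supplies the key
    public void PutProduct(int id, Product item)
    {
        item.Id = id;
        if (!repository.Update(item))
            throw new HttpResponseException(HttpStatusCode.NotFound);
    }

    // DELETE api/products/5
    public void DeleteProduct(int id)
    {
        repository.Remove(id);
    }
}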
- Query parameters are matched to parameter names when possible.
- Routing is done in config via Routes.MapHttpRoute - similar to MVC routing (see the configuration sketch after the Config section below).
- Can alternatively:
  - decorate controller action methods with HttpDelete, HttpGet, HttpHead, HttpOptions, HttpPatch, HttpPost, or HttpPut, plus the ActionAttribute;
  - use AcceptVerbsAttribute to support other HTTP verbs, e.g. PATCH, HEAD;
  - use NonActionAttribute to prevent a method from getting invoked as an action.
- Route table URIs can support placeholders (via curly braces {}) - these can support default values and constraints, and optional values.
- The framework selects the first route in the route table that matches the URI.

Response customization
- Response code: by default, the Web API framework sets the response status code to 200 (OK). But according to the HTTP/1.1 protocol, when a POST request results in the creation of a resource, the server should reply with status 201 (Created). Non-GET methods should return HttpResponseMessage.
- Location: when the server creates a resource, it should include the URI of the new resource in the Location header of the response.

public HttpResponseMessage PostProduct(Product item)
{
    item = repository.Add(item);
    var response = Request.CreateResponse<Product>(HttpStatusCode.Created, item);
    string uri = Url.Link("DefaultApi", new { id = item.Id });
    response.Headers.Location = new Uri(uri);
    return response;
}

Validation
- Decorate Models / DTOs with System.ComponentModel.DataAnnotations properties RequiredAttribute, RangeAttribute.
- Check payloads using ModelState.IsValid.
- Under-posting - values left out of the JSON payload → the JSON formatter assigns a default value; use with RequiredAttribute.
- Over-posting - if the model has read-only properties → use a DTO instead of the model.
- Can hook into the pipeline by deriving from ActionFilterAttribute & overriding OnActionExecuting.

Config
- Done in the App_Start folder > WebApiConfig.cs - a static Register method taking an HttpConfiguration param. The HttpConfiguration object contains the following members:
  - DependencyResolver: enables dependency injection for controllers.
  - Filters: action filters - e.g. exception filters.
  - Formatters: media-type formatters - by default contains JsonFormatter, XmlFormatter.
  - IncludeErrorDetailPolicy: specifies whether the server should include error details, such as exception messages and stack traces, in HTTP response messages.
  - Initializer: a function that performs final initialization of the HttpConfiguration.
  - MessageHandlers: HTTP message handlers - plug into the pipeline.
  - ParameterBindingRules: a collection of rules for binding parameters on controller actions.
  - Properties: a generic property bag.
  - Routes: the collection of routes.
  - Services: the collection of services.
- Configure the JsonFormatter for circular references to support links: PreserveReferencesHandling.Objects.
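For reference, a minimal sketch of that Register method with the default route - the "DefaultApi" name and the api/{controller}/{id} template are the Visual Studio project-template defaults, used here as an assumption rather than a requirement:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional } // the {id} placeholder is optional
        );

        // The JsonFormatter setting mentioned above (Json.NET under the covers)
        config.Formatters.JsonFormatter.SerializerSettings.PreserveReferencesHandling =
            Newtonsoft.Json.PreserveReferencesHandling.Objects;
    }
}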
Documentation generation
- Create a help page for a web API by using the ApiExplorer class.
- The ApiExplorer class provides descriptive information about the APIs exposed by a web API as an ApiDescription collection.
- Create the help page as an MVC view:

public ILookup<string, ApiDescription> GetApis()
{
    return _explorer.ApiDescriptions.ToLookup(
        api => api.ActionDescriptor.ControllerDescriptor.ControllerName);
}

- Provide documentation for your APIs by implementing the IDocumentationProvider interface. Documentation strings can come from any source that you like - e.g. extract XML comments or define custom attributes to apply to the controller:

[ApiDoc("Gets a product by ID.")]
[ApiParameterDoc("id", "The ID of the product.")]
public HttpResponseMessage Get(int id)

- GlobalConfiguration.Configuration.Services - add the documentation provider.
- To hide an API from the ApiExplorer, add the ApiExplorerSettingsAttribute.

Plugging into the message handler pipeline
- Plug into the request / response pipeline - derive from DelegatingHandler and override the SendAsync method - e.g. for logging error codes, adding a custom response header (a handler sketch appears under the Security notes below).
- Can be applied globally or to a specific route.

Exception handling (a filter sketch follows at the end of this section)
- Throw HttpResponseException on method failures - specify an HttpStatusCode enum value; examine this enum, as its values map well to typical op problems.
- Exception filters - derive from ExceptionFilterAttribute & override OnException. Apply on controllers or action methods, or add to the global HttpConfiguration.Filters collection.
- The HttpError object provides a consistent way to return error information in the HttpResponseException response body.
- For model validation, you can pass the model state to CreateErrorResponse, to include the validation errors in the response:

public HttpResponseMessage PostProduct(Product item)
{
    if (!ModelState.IsValid)
    {
        return Request.CreateErrorResponse(HttpStatusCode.BadRequest, ModelState);
    }
    // ... otherwise save the item and return 201 Created ...
}

Cookie management
- Cookie header in a request and Set-Cookie headers in a response - a collection of CookieState objects.
- Specify Expiry, max-age:
    resp.Headers.AddCookies(new CookieHeaderValue[] { cookie });

Internet media types, formatters and serialization
- Defaults to application/json.
- The request Accept header and response Content-Type header determine how Web API serializes and deserializes the HTTP message body. There is built-in support for XML, JSON, and form-urlencoded data.
- Customizable formatters can be inserted into the pipeline.
- POCO serialization is opt-out via JsonIgnoreAttribute, or use DataMemberAttribute for opt-in.
- The JSON serializer leverages Newtonsoft Json.NET.
- Loosely structured JSON objects are serialized as JObject, which derives from Dynamic.
- To handle circular references in JSON: json.SerializerSettings.PreserveReferencesHandling = PreserveReferencesHandling.All → {"$ref":"1"}.
- To preserve object references in XML: [DataContract(IsReference=true)].
- Content negotiation request headers:
  - Accept: which media types are acceptable for the response, such as "application/json", "application/xml", or a custom media type such as "application/vnd.example+xml".
  - Accept-Charset: which character sets are acceptable, such as UTF-8 or ISO 8859-1.
  - Accept-Encoding: which content encodings are acceptable, such as gzip.
  - Accept-Language: the preferred natural language, such as "en-us".
- Web API uses the Accept and Accept-Charset headers. (At this time, there is no built-in support for Accept-Encoding or Accept-Language.)
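As a sketch of the exception-filter approach described under Exception handling above - the exception-to-status-code mapping here is illustrative, not prescribed by the framework:

public class NotImplExceptionFilterAttribute : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        // Map a specific exception to a meaningful status code instead of a raw 500
        if (context.Exception is NotImplementedException)
        {
            context.Response = new HttpResponseMessage(HttpStatusCode.NotImplemented);
        }
    }
}

// Apply per action, per controller, or globally:
// config.Filters.Add(new NotImplExceptionFilterAttribute());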
- Controller methods can take JSON representations of DTOs as params - auto-deserialization.
- A typical jQuery GET request:

function find() {
    var id = $('#prodId').val();
    $.getJSON("api/products/" + id,
        function (data) {
            var str = data.Name + ': $' + data.Price;
            $('#product').text(str);
        })
    .fail(
        function (jqXHR, textStatus, err) {
            $('#product').text('Error: ' + err);
        });
}

- A typical GET response:

HTTP/1.1 200 OK
Server: ASP.NET Development Server/10.0.0.0
Date: Mon, 18 Jun 2012 04:30:33 GMT
X-AspNet-Version: 4.0.30319
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Content-Type: application/json; charset=utf-8
Content-Length: 175
Connection: Close

[{"Id":1,"Name":"TomatoSoup","Price":1.39,"ActualCost":0.99},{"Id":2,"Name":"Hammer","Price":16.99,"ActualCost":10.00},{"Id":3,"Name":"Yo yo","Price":6.99,"ActualCost":2.05}]

OData support
- Leverage the query options $filter, $orderby, $top and $skip to shape the results of controller actions annotated with the [Queryable] attribute:

    [Queryable]
    public IQueryable<Supplier> GetSuppliers()

- Query: ~/Suppliers?$filter=Name eq 'Microsoft'
- This applies the following selection filter on the server, then passes the result to the formatter: GetSuppliers().Where(s => s.Name == "Microsoft")
- True support for the OData format is still limited - no support for creates, updates, deletes, $metadata and code generation, etc.
- vNext: the ability to configure how EditLinks, SelfLinks and Ids are generated.

Self hosting
No dependency on ASP.NET or IIS:

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();
    Console.ReadLine(); // keep the host alive
}

Tracing
- Traceability tools, metrics - e.g. send to Nagios.
- Use your choice of tracing/logging library, whether that is ETW, NLog, log4net, or simply System.Diagnostics.Trace.
- To collect traces, implement the ITraceWriter interface:

public class SimpleTracer : ITraceWriter
{
    public void Trace(HttpRequestMessage request, string category, TraceLevel level,
        Action<TraceRecord> traceAction)
    {
        TraceRecord rec = new TraceRecord(request, category, level);
        traceAction(rec);
        WriteTrace(rec); // forward to the logging library of choice
    }
}

- Register the service with config.
- Programmatic tracing has helper extension methods: Configuration.Services.GetTraceWriter().Info(
- Performance tracing - the pipeline writes traces at the beginning and end of an operation; the TraceRecord class includes a TimeStamp property and a Kind property set to TraceKind.Begin / End.

Security
- Roles class methods: RoleExists, AddUserToRole.
- WebSecurity class methods: UserExists, CreateUserAndAccount.
- Request.IsAuthenticated.
- Leverage the HTTP 401 (Unauthorized) response.
- [AuthorizeAttribute(Roles="Administrator")] - can be applied to a controller or its action methods.
- See the section in the WebApi document on "Claim-based security for ASP.NET Web APIs using DotNetOpenAuth" - adapt this to an STS → the Web API host exposes secured Web APIs which can only be accessed by presenting a valid token issued by the trusted issuer:
  http://zamd.net/2012/05/04/claim-based-security-for-asp-net-web-apis-using-dotnetopenauth/
- Or use the MVC membership provider infrastructure and add a DelegatingHandler child class to the WebAPI pipeline (a sketch follows below) - http://stackoverflow.com/questions/11535075/asp-net-mvc-4-web-api-authentication-with-membership-provider - this will perform the login actions.
- Then use AuthorizeAttribute on controllers and methods for role mapping - http://sixgun.wordpress.com/2012/02/29/asp-net-web-api-basic-authentication/
- An alternate option is to rely on the MVC app: http://forums.asp.net/t/1831767.aspx/1
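A minimal sketch of that DelegatingHandler approach - the credential check is a hypothetical placeholder; the point is how the handler sits in front of every controller and short-circuits with a 401:

public class BasicAuthHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var auth = request.Headers.Authorization;
        if (auth == null || !CredentialsAreValid(auth)) // hypothetical check
        {
            // Short-circuit: the request never reaches a controller
            var response = request.CreateResponse(HttpStatusCode.Unauthorized);
            response.Headers.WwwAuthenticate.Add(new AuthenticationHeaderValue("Basic"));
            return Task.FromResult(response);
        }
        return base.SendAsync(request, cancellationToken);
    }

    static bool CredentialsAreValid(AuthenticationHeaderValue auth)
    {
        // Decode auth.Parameter (base64 "username:password") and validate it
        // against the membership provider here
        return auth.Scheme == "Basic" && !string.IsNullOrEmpty(auth.Parameter);
    }
}

// Registered globally via config.MessageHandlers.Add(new BasicAuthHandler());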

    Read the article

  • BI Applications 7.9.6.3 and EBS 12.1.3 Vision: Integrated Demo Environments

    - by Mike.Hallett(at)Oracle-BI&EPM
If you need a combined BI-Applications + eBusiness Suite Applications demonstration environment, or for proof of concept work for your customers, then these versions of images on Oracle Virtual Box are now available for partners to download and use. To get access to these images, partners must be OPN members, specialised in OBI or BI-Apps. This is an integrated Demo/Test Drive/POC/Self Enablement environment including two separate images (in English) representing the entire Oracle stack - Applications, Middleware, Database, Operating System and Virtual Machine.

Minimum hardware requirements:
- Each image run separately: 4GB RAM
- Both images run concurrently: 8GB RAM, dual CPU, 64 bit OS

BI Applications 7.9.6.3 - Linux based, running on Oracle Virtual Box and compatible with OVM. Image content:
- BI Application Analytics demo data extracted from EBS 12.1.3 Vision for Financials and HR using EBS 12.1.3 Vision (image supplied)
- Built integration to the EBS 12.1.3 Vision image (provided)
- Fully functional BI Applications 7.9.6.3 software install and configuration
- The image can be connected to load data from any other compatible source system; BI Apps demo data is based on OOTB EBS Vision 12.1.3
- Configured to run the BI Apps data load for all other modules of EBS 12.1.3 Vision
- Includes OBIEE Sample demonstration content
- Documented scripts for running presentations, demonstrations and Test Drives
Image size: 34GB zipped, 84GB unzipped. Minimum hardware: 4GB RAM.

EBS Vision 12.1.3 - Linux based, running on Oracle Virtual Box and compatible with OVM. Image content:
- eBusiness Suite (EBS) Applications Vision 12.1.3
- Standard Vision instance with all given setups, configurations and data
- Source system for BI Apps 7.9.6.3
Image size: 76GB zipped, 300GB unzipped. Minimum hardware: 4GB RAM.

Distribution: The Virtual Box images are posted on an external FTP server @ BI Applications 7.9.6.3 EBS12.1.3. To download, partners need to request the current password to access the images. To request the current ftp.oracle.com password and the password required to unzip the images, please email Marek Winiarski.

Support contact = Marek Winiarski: Oracle Partner Solution Consultant

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 1

    - by Simon Cooper
Before we look at the bytes comprising the CLR-specific data inside an assembly, we first need to understand the logical format of the metadata (for this post I will only be looking at simple pure-IL assemblies; mixed-mode assemblies & other things complicate matters quite a bit).

Metadata streams
Most of the CLR-specific data inside an assembly is inside one of 5 streams, which are analogous to the sections in a PE file. The name of each section in a PE file starts with a ., and the name of each stream in the CLR metadata starts with a #. All but one of the streams are heaps, which store unstructured binary data. The predefined streams are:
#~ - Also called the metadata stream, this stream stores all the information on the types, methods, fields, properties and events in the assembly. Unlike the other streams, the metadata stream has predefined contents & structure.
#Strings - This heap is where all the namespace, type & member names are stored. It is referenced extensively from the #~ stream, as we'll be looking at later.
#US - Also known as the user string heap, this stream stores all the strings used in code directly. All the strings you embed in your source code end up in here. This stream is only referenced from method bodies.
#GUID - This heap exclusively stores GUIDs used throughout the assembly.
#Blob - This heap is for storing pure binary data - method signatures, generic instantiations, that sort of thing.
Items inside the heaps (#Strings, #US, #GUID and #Blob) are indexed using a simple binary offset from the start of the heap. At that offset is a coded integer giving the length of that item, then the item's bytes immediately follow. The #GUID stream is slightly different, in that GUIDs are all 16 bytes long, so a length isn't required.

Metadata tables
The #~ stream contains all the assembly metadata. The metadata is organised into 45 tables, which are binary arrays of predefined structures containing information on various aspects of the metadata. Each entry in a table is called a row, and the rows are simply concatenated together in the file on disk. For example, each row in the TypeRef table contains, in that order:
1. A reference to where the type is defined (most of the time, a row in the AssemblyRef table).
2. An offset into the #Strings heap with the name of the type.
3. An offset into the #Strings heap with the namespace of the type.

The important tables are (with their table number in hex):
0x2: TypeDef, 0x4: FieldDef, 0x6: MethodDef, 0x14: EventDef, 0x17: PropertyDef - contain basic information on all the types, fields, methods, events and properties defined in the assembly.
0x1: TypeRef - the details of all the referenced types defined in other assemblies.
0xa: MemberRef - the details of all the referenced members of types defined in other assemblies.
0x9: InterfaceImpl - links the types defined in the assembly with the interfaces that type implements.
0xc: CustomAttribute - contains information on all the attributes applied to elements in this assembly, from method parameters to the assembly itself.
0x18: MethodSemantics - links properties and events with the methods that comprise the get/set or add/remove methods of the property or event.
0x1b: TypeSpec, 0x2b: MethodSpec - these tables provide instantiations of generic types and methods for each usage within the assembly.

There are several ways to reference a single row within a table. The simplest is to simply specify the 1-based row index (RID). The indexes are 1-based so a value of 0 can represent 'null'. In this case, which table the row index refers to is inferred from the context. If the table can't be determined from the context, then a particular row is specified using a token. This is a 4-byte value with the most significant byte specifying the table, and the other 3 specifying the 1-based RID within that table. This is generally how a metadata table row is referenced from the instruction stream in method bodies. The third way is to use a coded token, which we will look at in the next post.

So, back to the bytes
Now we've got a rough idea of how the metadata is logically arranged, we can look at the bytes comprising the start of the CLR data within an assembly. The first 8 bytes of the .text section are used by the CLR loader stub. After that, the CLR-specific data starts with the CLI header. I've highlighted the important bytes in the diagram. In order, they are:
1. The size of the header. As the header is a fixed size, this is always 0x48.
2. The CLR major version. This is always 2, even for .NET 4 assemblies.
3. The CLR minor version. This is always 5, even for .NET 4 assemblies, and seems to be ignored by the runtime.
4. The RVA and size of the metadata header. In the diagram, the RVA 0x20e4 corresponds to the file offset 0x2e4.
5. Various flags specifying if this assembly is pure-IL, whether it is strong name signed, and whether it should be run as 32-bit (this is how the CLR differentiates between x86 and AnyCPU assemblies).
6. A token pointing to the entrypoint of the assembly. In this case, 06 (the last byte) refers to the MethodDef table, and 01 00 00 refers to the first row in that table.
7. (after a gap) The RVA of the strong name signature hash, which comes straight after the CLI header. The RVA 0x2050 corresponds to file offset 0x250.
The rest of the CLI header is mainly used in mixed-mode assemblies, and so is zeroed in this pure-IL assembly.

After the CLI header comes the strong name hash, which is a SHA-1 hash of the assembly using the strong name key. After that come the bodies of all the methods in the assembly, concatenated together. Each method body starts off with a header, which I'll be looking at later. As you can see, this is a very small assembly with only 2 methods (an instance constructor and a Main method). After that, near the end of the .text section, comes the metadata, containing a metadata header and the 5 streams discussed above. We'll be looking at this in the next post.

Conclusion
The CLI header data doesn't have much to it, but we've covered some concepts that will be important in later posts - the logical structure of the CLR metadata and the overall layout of CLR data within the .text section. Next, I'll have a look at the contents of the #~ stream, and how the table data is arranged on disk.
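If you want to poke at these structures yourself without a hex editor, the System.Reflection.Metadata package (a modern reader, not something this post relies on) surfaces the streams and tables described above. A minimal sketch that walks the TypeRef table and resolves its name and namespace columns through the #Strings heap:

using System;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

class DumpTypeRefs
{
    static void Main(string[] args)
    {
        using (FileStream stream = File.OpenRead(args[0]))
        using (var pe = new PEReader(stream))
        {
            MetadataReader md = pe.GetMetadataReader();

            // Each TypeRef row holds: a resolution scope, a name offset,
            // and a namespace offset into the #Strings heap
            foreach (TypeReferenceHandle handle in md.TypeReferences)
            {
                TypeReference typeRef = md.GetTypeReference(handle);
                Console.WriteLine("{0}.{1}",
                    md.GetString(typeRef.Namespace),  // #Strings heap lookup
                    md.GetString(typeRef.Name));
            }
        }
    }
}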

    Read the article

  • SQL SERVER - What is Spatial Database? - Developing with SQL Server Spatial and Deep Dive into Spatial

What is Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today [...]...
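As a small taste of what a spatial type can do, here is a sketch using the Microsoft.SqlServer.Types library, which exposes the same SqlGeography type SQL Server uses server-side (the cities and coordinates are just illustrative):

using System;
using Microsoft.SqlServer.Types;

class SpatialDemo
{
    static void Main()
    {
        // Point(latitude, longitude, SRID); 4326 = WGS 84
        SqlGeography seattle  = SqlGeography.Point(47.6062, -122.3321, 4326);
        SqlGeography portland = SqlGeography.Point(45.5152, -122.6784, 4326);

        // STDistance returns meters for geography instances
        Console.WriteLine("Distance: {0:N0} m", seattle.STDistance(portland).Value);
    }
}

The same STDistance method is callable directly in T-SQL on geography columns, which is what lets the database index and filter on proximity.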

    Read the article

  • SSIS ForEachLoop Container

    - by Leonard Mwangi
I recently had a client request to create an SSIS package that would loop through a set of data in SQL tables to allow them to complete their data transformation processes. Knowing that Integration Services does have a ForEach Loop Container, I knew the task would be easy, but the moment I jumped into it I realized there was no straightforward way to accomplish it, since the ForEach Loop doesn't really have a "loop through a table" enumerator. With the capabilities of Integration Services, I was still confident it was possible; it was just a matter of creativity to get it done. I set out to discover what the different ForEach Loop Editor enumerators did and settled on the Variable Enumerator. Here is how I accomplished the task (see the sketch after these steps for the equivalent logic in code):

1. Drop your ForEach Loop Container in your work area.
2. Create a few SSIS variables that will contain the data. Notice I have assigned the MyID_ID variable a value of "TEST", which is not evaluated either. This variable will be assigned data from the database, hence allowing us to loop.
3. In the ForEach Loop Editor's Collection, select Variable Enumerator.
4. Once this is all set, we need a mechanism to grab the data from the SQL table and assign it to the variable. (Fig: Select Top 1 record. Fig: Assign Top 1 record to the variable.)
5. Now all that's required is a housekeeping process that will update the table you are looping over, so that you can grab the next record. (Fig: A look at the complete package.)
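To make the pattern explicit, the control flow above boils down to the following loop - a C# sketch with hypothetical table and column names, not SSIS code; in the package, each SQL statement lives in its own Execute SQL Task:

using System;
using System.Data.SqlClient;

class ForEachLoopSketch
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Staging;Integrated Security=true"))
        {
            conn.Open();
            while (true)
            {
                // Step 4: grab the next unprocessed row and assign it to the variable
                object id = new SqlCommand(
                    "SELECT TOP 1 MyID FROM dbo.WorkQueue WHERE Processed = 0",
                    conn).ExecuteScalar();
                if (id == null) break; // nothing left to loop over

                // ForEach Loop body: run the data transformation for this MyID here

                // Step 5: house cleaning - flag the row so the next pass grabs a new one
                var update = new SqlCommand(
                    "UPDATE dbo.WorkQueue SET Processed = 1 WHERE MyID = @id", conn);
                update.Parameters.AddWithValue("@id", id);
                update.ExecuteNonQuery();
            }
        }
    }
}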

    Read the article

< Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >