Search Results

Search found 27521 results on 1101 pages for '2012 memory manager kb'.


  • Multiple column Union Query without duplicates

    - by Adam Halegua
    I'm trying to write a union query with multiple columns from two different tables (duh), but for some reason the second column of the second Select statement isn't showing up in the output. I don't know if that painted the picture properly, but here is my code:

        Select empno, job From EMP Where job = 'MANAGER'
        Union
        Select empno, empstate From EMPADDRESS Where empstate = 'NY'
        Order By empno

    The output looks like:

        EMPNO  JOB
        4600   NY
        5300   MANAGER
        5300   NY
        7566   MANAGER
        7698   MANAGER
        7782   MANAGER
        7782   NY
        7934   NY
        9873   NY

    Instead of 5300 and 7782 appearing twice, I thought empstate would appear next to job in the output. For all other empnos I thought the values in the fields would be (null). Am I not understanding unions correctly, or is this how they are supposed to work? Thanks for any help in advance.
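    The output I was expecting is really what a join would produce: UNION stacks the two result sets vertically (every row comes from one branch, and the column headings come from the first branch), while a join lines empstate up next to job per empno. A minimal sketch of that join, assuming empno is the shared key between the two tables:

        Select e.empno, e.job, a.empstate
        From EMP e
        Left Join EMPADDRESS a
          On a.empno = e.empno
         And a.empstate = 'NY'
        Where e.job = 'MANAGER'
        Order By e.empno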

    Read the article

  • October OTN Seminar Roundup (Japan)

    - by OTN-J Master
    OTN Japan seminars for the second half of October 2012:

    [10/23 (Tue)] Oracle Solaris Seminar #7 — 18:30-20:40; Oracle Solaris engineering sessions plus an Oracle OpenWorld 2012 recap.
    [10/23 (Tue)] Oracle MySQL Tech Tour - Osaka — 13:30-17:00; MySQL architecture, backup/replication, and monitoring sessions for DBAs and IT managers.
    [10/24 (Wed)] 29th WebLogic Server Meetup (Tokyo) — 18:30-20:40; building iPad-facing applications on WebLogic Server with Oracle JDeveloper, Oracle Application Development Framework (ADF), and WebCenter Framework, plus JDBC topics and Java EE 6.
    [10/24 (Wed)] Oracle Application Testing Suite Hands-on Seminar — 13:30-18:00; web application testing with Oracle Application Testing Suite.
    [10/24 (Wed)] 92nd evening seminar: a hands-on look inside Oracle RAC — 18:30-20:00; RAC architecture and operations for DBAs.
    [10/25 (Thu)] Database performance seminar — two sessions (13:30-15:30 and 18:00-20:00); diagnosis and tuning with Oracle Enterprise Manager on Oracle Database Enterprise Edition.
    [10/25 (Thu)] Oracle MySQL Tech Tour - Tokyo — 13:30-17:00 (same program as Osaka).
    [10/26 (Fri)] Oracle MySQL Tech Tour - Fukuoka — 13:30-17:00 (same program).
    [10/30 (Tue) - 10/31 (Wed)] Oracle Days Tokyo 2012 — 10:00-18:00; sessions bringing the Oracle OpenWorld announcements and Oracle's IT strategy to Japan.

    Read the article

  • Oracle 11gR2: Customer Case Studies (RAC/Exadata), History, and Resources

    - by Yusuke.Yamamoto
    A running index of Oracle Database 11g customer case studies (RAC/Exadata), history, and reference material. The update log runs from 2010/11/17 (first posted) through 2012/03/22, with most entries recording newly published Exadata case studies.

    History:

    2009-11-11: Oracle Exadata Database Machine Version 2 announced
    2009-11-17: Oracle Database 11g R2 released
    2010-03-31: SAP certifies Oracle Database 11g R2 for its applications
    2010-05-18: Oracle Database 10g R2 supported on Windows Server 2008 R2 / Windows 7
    2010-06-23: Oracle Application Express 4.0 released
    2010-08-17: TPC-C Benchmark Price/Performance result announced
    2010-09-13: Patch Set 11.2.0.2 for Linux released
    2010-10-20: Oracle Exadata Database Machine X2 announced
    2010-11-17: Oracle Database 11g R2 marks one year since release
    2011-03-29: Oracle SQL Developer 3.0 released
    2011-06-27: Oracle Exadata Database Machine shipments pass 1,000 units
    2011-07-08: SAP adopts Oracle Exadata Database Machine
    2011-09-01: Oracle Database Express Edition 11g Release 2 released
    2011-09-23: Patch Set 11.2.0.3 for Linux released
    2011-11-14: Oracle Database Appliance announced

    The case-study rosters (one for Oracle Database 11g/RAC deployments, one for Exadata) name customers including KDDI, NTT group companies, and SCSK, with links out to press releases and interviews on ITpro, IT Leaders, and oracledatabase.jp, plus two "Customer Voice" interviews: one on a 24x365 upgrade from Oracle9i Database to Oracle Database 11g, and one on storage management with Oracle ASM.

    Reference articles: Oracle 11g R2 overviews on IT Leaders and Think IT, an Oracle Database 11g Release 2 feature article on oracletech.jp, and the Oracle Database 11g Release 2 (11gR2) and Oracle Exadata product pages.

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebSphere MQ to send messages from BizTalk to the client's iSeries box.  I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work.  I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue.  Or so I thought...

    A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development.  Everything was going great: I was able to put messages onto the queue, I was happy, the client was happy.  Life was good.  Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries.  We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be.  As it turns out, it was a good thing we made the change.  As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log.  An example from the event log:

        Event Type: Error
        Event Source: BizTalk Server 2009
        Event Category: BizTalk Server 2009
        Event ID: 5754
        Date: 6/9/2010
        Time: 10:16:41 AM
        User: N/A
        Computer: WINDOWS2003
        Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended.  Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124
        MessageId: {76825C7C-611A-4A56-8A6F-35E1124BDB5C}
        InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6}

    The key piece of information in the event entry is the reasonCode, 6124.  A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED.  According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager.  Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter.

    Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ.  With tracing enabled, I tried sending a message again, then went and reviewed the trace files.  The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this:

        00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn
        00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK)
        00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent
        0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK)
        0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED)
        0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED)
        0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED)
        0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED)
        0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock
        00006260 15:40:20.328769   3500.4           :       --}! ImqQueue::lock (rc=Unknown(1))
        00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock
        00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1))

    It seemed like the MQRC_NOT_CONNECTED error was being caused by a security-related issue (MQRC_NOT_AUTHORIZED).  I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under.  I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries.  I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance and reconfigured my orchestration to run on the new host instance.  After bouncing all my host instances, I was now able to send messages to the iSeries.

    It's still not clear to me why we were able to connect to the Solaris server.  I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.
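    As a postscript, a few MQ client utilities would have made this diagnosis quicker. This is a hedged sketch — the channel, host, queue, and queue manager names are placeholders, and it assumes the MQ client samples are installed (the mqrc lookup tool ships with MQ v7 and later):

        rem translate a reason code into its symbolic name
        mqrc 6124
        mqrc 2035

        rem client-side trace: start, reproduce the failure, stop
        strmqtrc -e
        rem ...resubmit the BizTalk message here...
        endmqtrc -e

        rem test queue access outside BizTalk with the sample client put program
        set MQSERVER=CHANNEL1/TCP/hostname(1414)
        amqsputc QUEUE.NAME QMGR.NAME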

    Read the article

  • SQLAuthority News – Guest Post – Performance Counters Gathering using Powershell

    - by pinaldave
    Laerte Junior has previously helped me personally to resolve an issue with Powershell installation on my computer. He did an awesome job. He has sent another wonderful article regarding performance counters for the readers of this blog. I really liked it, and I expect all of you who are Powershell geeks will like it as well.

    As a good DBA, you know that our social life is restricted to a few movies over the year and, when possible, a pizza in a restaurant next to your company's place, of course. So what we have to do is to create methods through which we can facilitate our daily processes, go home early, and eventually have a nice time with our family (and not sleep on the couch). As a consultant or fixed employee, one of our daily tasks is to monitor performance counters using Perfmon. To be honest, its IDE is getting more complicated, so to deal with this I thought of a solution using Powershell. Yes, with some lines of Powershell you can configure which counters to use, and with one more line you can already start collecting data.

    Let's see one scenario: you are a consultant who has several clients and has just closed another project troubleshooting an SQL Server environment. You are to use Perfmon to collect data from the server, and you already have XML configuration files made with the counters that you will be using — one file for memory bottlenecks, one for CPU, etc. With one Powershell command line for each XML file, you start collecting. The output of such a collection is a TXT file, set up to be loaded into an SQL Server. With two lines of command for each XML, you make the whole process of data collection.

    Creating an XML configuration file for memory counters:

        Get-PerfCounterCategory -CategoryName "Memory" | Get-PerfCounterInstance | Get-PerfCounterCounters | Save-ConfigPerfCounter -PathConfigFile "c:\temp\ConfigfileMemory.xml" -NewFile

    Creating an XML configuration file for Buffer Manager, counters Page lookups/sec, Page reads/sec, Page writes/sec, Page life expectancy:

        Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Page*" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml" -NewFile

    Then you start the collection:

        Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\ConfigfileMemory.xml -PathOutputFile c:\temp\ConfigfileMemory.txt

    To let the Buffer Manager collect, you need one more counter, the Buffer cache hit ratio. Just add a new counter to BufferManager.xml, omitting the -NewFile parameter:

        Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters -CounterName "Buffer cache hit ratio" | Save-ConfigPerfCounter -PathConfigFile "c:\temp\BufferManager.xml"

    And start the collection:

        Set-CollectPerfCounter -DateTimeStart "05/24/2010 08:00:00" -DateTimeEnd "05/24/2010 22:00:00" -Interval 10 -PathConfigFile c:\temp\BufferManager.xml -PathOutputFile c:\temp\BufferManager.txt

    You do not know which counters are in the category Buffer Manager? Simple!

        Get-PerfCounterCategory -CategoryName "SQLServer:Buffer Manager" | Get-PerfCounterInstance | Get-PerfCounterCounters

    Let's see one output file, as shown below. It is ready to bulk insert into the SQL Server (a hedged sketch of that load follows). As you can see, Powershell makes this process incredibly easy and fast. Do you want to see more examples?
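    Here is what that load might look like — a minimal sketch, assuming comma-separated collector output with a header row; the staging table name, column layout, and delimiters are illustrative, not from Laerte's module:

        CREATE TABLE dbo.PerfCounterStage (
            CollectionTime DATETIME,
            CounterPath    VARCHAR(500),
            CounterValue   FLOAT
        );

        BULK INSERT dbo.PerfCounterStage
        FROM 'c:\temp\BufferManager.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);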
    Visit my blog at Shell Your Experience.

    You can find more about Laerte Junior here:
    www.laertejuniordba.spaces.live.com
    www.simple-talk.com/author/laerte-junior
    www.twitter.com/laertejuniordba
    SQL Server Powershell Extension Team: http://sqlpsx.codeplex.com/

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Add-On, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology. Tagged: Powershell

    Read the article

  • New security options in UCM Patch Set 3

    - by kyle.hatlestad
    While the Patch Set 3 (PS3) release was mostly focused on bug fixes and such, some new features sneaked in there. One of those new features is to the security options. In 10gR3 and prior versions, UCM had a component called Collaboration Manager which allowed project folders to be created and groups of users assigned as members to collaborate on documents. With this component came access control lists (ACLs) for content and folders: users could assign specific security rights on each and every document and folder within a project. It was even possible to enable these ACLs without having the Collaboration Manager component enabled (see technote# 603148.1).

    When 11g came out, Collaboration Manager was no longer available, but the configuration settings to turn on ACLs were still there. Well, in PS3 they're implemented slightly differently, and there is a new component available which adds an additional dimension for defining security on the object: Roles. So now, instead of selecting individual users or groups of users (defined as an Alias in User Admin), you can select a particular role, and if a user has that role, they are granted that level of access. This allows for a much more flexible and manageable security model than trying to manage with just user and group access as people come and go in the organization.

    The way that it is enabled is still through configuration entries. First log in as an administrator and go to Administration -> Admin Server. On the Component Manager page, click the 'advanced component manager' link in the description paragraph at the top. In the list of Disabled Components, enable the RoleEntityACL component. Then click the General Configuration link on the left. In the Additional Configuration Variables text area, enter the new configuration values:

        UseEntitySecurity=true
        SpecialAuthGroups=<comma-separated list of Security Groups to honor ACLs>

    SpecialAuthGroups should be a list of Security Groups that honor the ACL fields. If an ACL is applied to a content item with a Security Group outside this list, it will be ignored. Save the settings and restart the instance. Upon restart, three new metadata fields will be created: xClbraUserList, xClbraAliasList, xClbraRoleList. If you are using OracleTextSearch as the search indexer, be sure to run a Fast Rebuild on the collection.

    On the Check In, Search, and Update pages, values are added by simply typing in the value and getting a type-ahead list of possible values. Select the value, click Add, and then set the level of access (Read, Write, Delete, or Admin). If all of the fields are blank, then it simply falls back to just Security Group and Account access. For users and groups, these values are automatically picked up from the corresponding database tables. In the case of roles, this is an explicitly defined list of choices that are made available. These values must match the roles defined in WebLogic Server or your LDAP/AD repository. To add these values, go to Administration -> Admin Applets -> Configuration Manager; on the Views tab, edit the values for the ExternalRolesView. By default, 'guest' and 'authenticated' are added. Once added through the view, they will be available to select from for the Roles Access List.

    As for how they are stored in the metadata fields, each entry starts with its identifier: an ampersand (&) for users, an "at" sign (@) for groups, and a colon (:) for roles. Following that is the entity name, and at the end is the level of access in parentheses, e.g. (RWDA). Each entry is separated by a comma, so if you were populating values through Batch Loader or an external source, the values would be defined this way. Detailed information on Access Control Lists can be found in the Oracle Fusion Middleware System Administrator's Guide for Oracle Content Server.
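    To make the storage format concrete, here is a hypothetical set of values for those three fields (the user, alias, and role names below are invented for illustration):

        xClbraUserList:  &jsmith(RWDA), &mjones(R)
        xClbraAliasList: @SalesTeam(RW)
        xClbraRoleList:  :authenticated(R), :contributor(RWD)

    Reading the first entry: user jsmith has Read, Write, Delete, and Admin rights on the item, while anyone holding the "contributor" role gets Read, Write, and Delete.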

    Read the article

  • Creating and using VM Groups in VirtualBox

    - by Fat Bloke
    With VirtualBox 4.2 we introduced the Groups feature, which allows you to organize and manage your guest virtual machines collectively, rather than individually. Groups are quite a powerful concept and there are a few nice features you may not have discovered yet, so here's a bit more information about groups, and how they can be used....

    Creating a group

    Groups are just ad hoc collections of virtual machines, and there are several ways of creating a group.

    In the VirtualBox Manager GUI: drag one VM onto another to create a group of those 2 VMs; you can then drag and drop more VMs into that group. Or select multiple VMs (using Ctrl or Shift and click), then select the menu Machine...Group; or press Cmd+U (Mac) or Ctrl+U (Windows); or right-click the multiple selection and choose Group, like this:

    From the command line: group membership is an attribute of the VM, so you can modify the VM to belong in a group. For example, to put the VM "Ubuntu" into the group "TestGroup", run this command:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup"

    Deleting a Group

    Groups can be deleted by removing the group attribute from all the VMs that constitute that group. To do this via the command line, the syntax is:

        VBoxManage modifyvm "Ubuntu" --groups ""

    In the VirtualBox Manager, this is more easily done by right-clicking on a group header and selecting "Ungroup", like this:

    Multiple Groups

    Now that we understand that groups are just attributes of VMs, it can be seen that VMs can exist in multiple groups. For example, doing this:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup","/ProjectX","/ProjectY"

    results in the VM appearing in all three. Or via the VirtualBox Manager, you can drag VMs while pressing the Alt key (Mac) or Ctrl (other platforms).

    Nested Groups

    Just like you can drag VMs around in the VirtualBox Manager, you can also drag whole groups around. And dropping a group within a group creates a nested group. Via the command line, nested groups are specified using a path-like syntax, like this:

        VBoxManage modifyvm "Ubuntu" --groups "/TestGroup/Linux"

    ...which creates a sub-group and puts the VM in it.

    Navigating Groups

    In the VirtualBox Manager, groups can be collapsed and expanded by clicking on the caret at the left of the group header. But you can also Enter and Leave groups, either by using the right-arrow/left-arrow keys, or by clicking on the caret on the right-hand side of the group header, like this: ...leading to a view of just the group contents. You can Leave or return to the parent in the same way. Don't worry if you are imprecise with your clicking: you can double-click on the entire right half of the group header to Enter a group, and the left half to Leave a group. Double-clicking on the left half when you're at the top will roll up (collapse) the group.

    Group Operations

    The real power of groups is not simply in arranging them prettily in the Manager. Rather, it is about performing collective operations on them once you have grouped them appropriately. For example, let's say that you are working on a project (Project X) where you have a solution stack of a database VM, a middleware/app VM, and a couple of client VMs which you use to test your app. With VM groups you can start the whole stack with one operation: select the group header and choose Start (a script sketch follows at the end of this post).

    The full list of operations that may be performed on groups:
    Start — starts from any state (boot or resume); hold Shift while starting to launch the VMs in headless mode
    Pause
    Reset
    Close — Save state, Send Shutdown signal, or Poweroff
    Discard saved state
    Show in filesystem
    Sort

    Conclusion

    Hopefully we've shown that the introduction of VM groups not only makes Oracle VM VirtualBox pretty, but pretty powerful too.

    - FB
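    As a footnote, a group's stack can also be brought up from a script. A hedged sketch — the VM names here are invented, and unlike the Manager's single group Start, VBoxManage starts one VM at a time:

        # start a hypothetical "ProjectX" stack headless, one VM at a time
        for vm in "ProjectX-DB" "ProjectX-App" "ProjectX-Client1" "ProjectX-Client2"; do
            VBoxManage startvm "$vm" --type headless
        done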

    Read the article

  • Scrambling Sensitive Data in E-Business Suite Release 12 Cloned Environments

    - by Elke Phelps (Oracle Development)
    Securing the Oracle E-Business Suite includes protecting the underlying E-Business data in production and non-production databases. While steps can be taken to provide a secure configuration to limit EBS access, a better approach to protecting non-production data is simply to scramble (mask) the data in the non-production copy. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager today to scramble sensitive data in cloned environments.

    Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. Using the Data Masking Pack in E-Business Suite environments is now easier with the release of a new template for E-Business Suite databases: the Oracle E-Business Suite Release 12.1.3 Template for Data Masking Pack (Patch 13898999). This template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.

    Is there a charge for this? Yes. You must purchase licenses for Oracle Enterprise Manager and the Oracle Data Masking Pack plug-in. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.

    What does data masking do in E-Business Suite environments? Application data masking does the following:
    De-identify the data: scramble identifiers of individuals, also known as personally identifiable information (PII). Examples include information such as name, account, address, location, and driver's license number.
    Mask sensitive data: mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health, and employment information.
    Maintain data validity: provide a fully functional application.

    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers, or customers. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, it will create an irreversibly scrambled version of your production database for development and testing.

    References:
    Masking Sensitive Data for Non-production Use in the Oracle Enterprise Manager Concepts 11g
    Using the Oracle E-Business Suite, Release 12.1.3 Template for the Data Masking Pack, Note 1437485.1

    Related Articles:
    Webcast Replay Available: E-Business Suite Data Protection
    Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • UEFI Dual-Boot - Ubuntu 12.04.3 + Windows 8.1 (One GPT HDD)

    - by swafbrother
    Hello, I'm having trouble setting up a dual-boot (Ubuntu 12.04 LTS and Windows 8.1) on my ASUS K55VM laptop's hard drive (500 GB). I was mostly following tutorials for doing this, but at some point something has gone wrong. Up to now, I have followed these steps:

    1. I formatted my HDD to GPT.
    2. I clean-installed Windows 8.1. I didn't prevent Windows from choosing the partitions to use, and it created these partitions: a Recovery partition (sda1), an EFI System Partition (sda2), a Microsoft Reserved Partition (sda3), and a Windows data partition or C drive (sda4).
    3. I reduced the Windows data partition via Windows' Disk Management.
    4. I made a bootable USB stick with Ubuntu 12.04 LTS from ISO, using Universal USB Installer.
    5. I created these partitions for Ubuntu: a boot partition, mounted at /boot (sda5); a root partition, mounted at / (sda6); and a swap partition (sda7). In "Device for boot loader installation" I chose /dev/sda.
    6. Then, when I rebooted, it went straight into Ubuntu. So I installed Boot-Repair and clicked on Recommended Repair. It automatically did its job without asking for anything.
    7. I rebooted and GRUB showed up, with a lot of options. At this point I had a decent dual-boot setup; Ubuntu and both Windows entries worked fine: Ubuntu / Windows Boot UEFI Loader / Windows UEFI bkpbootmgfw.efi.
    8. I executed this command: sudo grub-install --force /dev/sda5.
    9. Then I tried to make Windows 8.1's Boot Manager the main boot manager, so that I could choose which OS to boot into from a menu. I downloaded EasyBCD on Windows. It showed 2 Ubuntu entries and 1 Windows entry. I went into the BCD Deployment tab and clicked on Write MBR.
    10. At this point, I went into the BIOS and made Windows Boot Manager the first boot option. When I rebooted, I got a black screen with the message "efidisk read error", and then (I guess) it switched to the next boot option, which is Ubuntu, resulting in GRUB showing up.

    From GRUB, the Ubuntu entry is working and so are both Windows entries. If I choose Ubuntu, it normally boots into Ubuntu. But if I choose Windows, it goes into Windows' boot manager, where a menu shows up: Ubuntu / Ubuntu / Windows 8.1. If I choose Windows, it boots into Windows without any problem. If I choose Ubuntu, it boots back into GRUB (back to step 7's menu).

    Here's my BootInfo summary: http://paste.ubuntu.com/6698171/

    Windows Boot Manager is clearly not working as expected; I can't directly boot into it, and I can't boot into it from the BIOS either (efidisk read error again). If I want to boot into Windows I need to boot into GRUB first, which is the opposite of what I wanted.

    I need help at this point. What is the best thing I can do? Is there a more reliable and/or simpler way of accomplishing a satisfying dual-boot for this situation? Can someone provide a way to go back to the state I had at steps 7-8, where I had a more efficient dual-boot setup? If only I could undo what I did with EasyBCD and skip Windows' boot menu... Can someone provide a way to fix this mess? Thanks in advance, and sorry for the length of this; I wanted to be exhaustive.
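    In case it's useful for answers: from Ubuntu I can at least inspect and reorder the firmware's EFI boot entries with efibootmgr. A hedged sketch — the entry numbers below are placeholders; the real ones come from the first command:

        sudo efibootmgr                    # list EFI boot entries and the current BootOrder
        sudo efibootmgr -o 0002,0000       # example: put entry Boot0002 (e.g. ubuntu) first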

    Read the article

  • Realtek RTL8111/8168B wired network doesn't work anymore

    - by Radar4002
    This sounds like a common problem when upgrading to 11.04, but I am having trouble finding a common solution, and one that will work for me. I just applied updates via the Update Manager and now my wired network connection is down. I know Ubuntu's network settings are the issue, because I dual-boot with Windows 7 and my network/internet is fine there. I don't know too much about networking, so what can I do to troubleshoot this issue? Booting an older kernel from GRUB (2.6.38-8 instead of 2.6.38-11) does not resolve the issue.

    Here is my lspci result:

    00:00.0 Host bridge: ATI Technologies Inc RD890 Northbridge only single slot PCI-e GFX Hydra part (rev 02)
    00:02.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port B)
    00:04.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port D)
    00:05.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port E)
    00:06.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port F)
    00:07.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port G)
    00:09.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (PCI express gpp port H)
    00:0a.0 PCI bridge: ATI Technologies Inc RD890 PCI to PCI bridge (external gfx1 port A)
    00:11.0 SATA controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 SATA Controller [IDE mode] (rev 40)
    00:12.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:12.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:13.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:13.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 41)
    00:14.1 IDE interface: ATI Technologies Inc SB7x0/SB8x0/SB9x0 IDE Controller (rev 40)
    00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) (rev 40)
    00:14.3 ISA bridge: ATI Technologies Inc SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
    00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge (rev 40)
    00:14.5 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI2 Controller
    00:15.0 PCI bridge: ATI Technologies Inc Device 43a0
    00:16.0 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:16.2 USB Controller: ATI Technologies Inc SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control
    00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control
    01:00.0 VGA compatible controller: ATI Technologies Inc Juniper [Radeon HD 5700 Series]
    01:00.1 Audio device: ATI Technologies Inc Juniper HDMI Audio [Radeon HD 5700 Series]
    02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03)
    05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
    06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
    07:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    07:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 03)
    08:0e.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)
    09:00.0 SATA controller: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 02)
    09:00.1 IDE interface: JMicron Technology Corp. JMB362/JMB363 Serial ATA Controller (rev 02)

    Here is my sudo lshw -class network:

    *-network
         description: Ethernet interface
         product: RTL8111/8168B PCI Express Gigabit Ethernet controller
         vendor: Realtek Semiconductor Co., Ltd.
         physical id: 0
         bus info: pci@0000:05:00.0
         logical name: eth0
         version: 03
         serial: 6c:f0:49:e7:72:e8
         size: 10Mbit/s
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
         resources: irq:40 ioport:9e00(size=256) memory:fceff000-fcefffff memory:fcef8000-fcefbfff memory:fce00000-fce1ffff
    *-network
         description: Ethernet interface
         product: RTL8111/8168B PCI Express Gigabit Ethernet controller
         vendor: Realtek Semiconductor Co., Ltd.
         physical id: 0
         bus info: pci@0000:06:00.0
         logical name: eth1
         version: 03
         serial: 6c:f0:49:e7:72:ea
         size: 10Mbit/s
         capacity: 1Gbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half latency=0 link=no multicast=yes port=MII speed=10Mbit/s
         resources: irq:47 ioport:8e00(size=256) memory:fddff000-fddfffff memory:fddf8000-fddfbfff memory:fdd00000-fdd1ffff
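    Worth noting from the lshw output above: both NICs report link=no and have fallen back to 10Mbit/s half duplex, which suggests the r8169 driver is not negotiating a link after the update. A hedged first round of checks, using the interface and driver names from that output:

        dmesg | grep -i -e r8169 -e eth0                 # kernel messages from the driver
        sudo ethtool eth0                                # look for "Link detected: yes"
        sudo modprobe -r r8169 && sudo modprobe r8169    # reload the NIC driver
        sudo dhclient eth0                               # retry DHCP once the link is up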

    Read the article

  • October 2013 Fusion Middleware (FMW) Proactive Patches released

    - by Irina
    We are glad to announce that the following Fusion Middleware (FMW) Proactive patches were released on October 15, 2013.

    Bundle Patches

    Bundle patches are collections of controlled, well tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. These are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite.

    Oracle Identity Management Suite Bundle Patch 11.1.1.5.5, consisting of:
    Oracle Identity Manager (OIM) 11.1.1.5.9 bundle patch
    Oracle Access Manager (OAM) 11.1.1.5.6 bundle patch
    Oracle Adaptive Access Manager (OAAM) 11.1.1.5.2 bundle patch
    Oracle Entitlement Server (OES) 11.1.1.5.4 bundle patch
    Oracle Identity Management Suite Bundle Patch 11.1.2.0.4, consisting of:
    Oracle Access Manager (OAM) 11.1.2.0.4 bundle patch
    Oracle Adaptive Access Manager (OAAM) 11.1.2.0.2 bundle patch
    Oracle Entitlement Server (OES) 11.1.2.0.2 bundle patch
    Oracle Identity Analytics (OIA) 11.1.1.5.6 bundle patch
    Oracle GlassFish Server (OGFS) 2.1.1.22, 3.0.1.8 and 3.1.2.7 bundle patches
    Oracle iPlanet Web Server (OiWS) 7.0.18 bundle patch
    Oracle SOA Suite (SOA) 11.1.1.7.1 bundle patch
    Oracle WebCenter Portal (WCP) 11.1.1.8.1 bundle patch
    Sun Role Manager (SRM) 4.1.7 and 5.0.3.2 bundle patches

    Patch Set Updates (PSU)

    Patch Set Updates (PSU) are collections of well controlled, well tested critical bug fixes for a specific product that have been proven in customer environments. PSUs may include security content, but no enhancements are included. These are cumulative in nature, meaning the latest PSU in a particular series includes the contents of the previous PSUs released.

    Oracle Exalogic 2.0.3.0.4 Physical Linux x86-64 and 2.0.4.0.4 Physical Solaris x86-64 PSUs
    Oracle WebLogic Server 10.3.6.0.6 and 12.1.1.0.6 PSUs

    Critical Patch Update (CPU)

    The Critical Patch Update program is Oracle's quarterly release of security fixes. The following additional patches were released as part of Oracle's Critical Patch Update program:

    Oracle JDeveloper 11.1.2.3.0, 11.1.2.4.0 and 12.1.2.0.0
    Oracle Outside In Technology 8.4.0 and 8.4.1
    Oracle Portal 11.1.1.6.0
    Oracle Security Service 11.1.1.6.0, 11.1.1.7.0 and 12.1.2.0.0
    Oracle WebCache 11.1.1.6.0 and 11.1.1.7.0
    Oracle WebCenter Content 10.1.3.5.1, 11.1.1.6.0, 11.1.1.7.0 and 11.1.1.8.0
    Oracle WebServices 10.1.3.5.0 and 11.1.1.6.0

    For more information:
    Master Notes on Fusion Middleware Proactive Patching
    PSU and CPU October 2013 Availability Document
    Critical Patch Update Advisory - October 2013

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description

    Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world's first engineered system specifically designed to deliver high-performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations and scalability for Business Intelligence and Enterprise Performance Management applications.

    This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics: an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle's proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations.

    This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBI-EE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics.

    If you are a BI or Data Warehouse architect, developer or consultant, you don't want to miss this 3-day workshop. Register Now!

    Presentations
    Exalytics Architectural Overview
    Upgrade and Lifecycle Management
    TimesTen for Exalytics
    Summary Advisor Utility
    Essbase and EPM System on Exalytics
    Dashboard and Analysis Interactions
    OBIEE 11.1.1.6 Features and Advanced Topics

    Lab Outline
    The labs showcase Oracle Exalytics core components and functionality and provide expertise on Oracle Business Intelligence 11.1.1.6 new features and updates from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed:
    Lab Environment Setup
    Creating and Working with Oracle TimesTen In-Memory Database
    Running Summary Advisor Utility
    Working with Exalytics Visualization Features – Dashboard and Analysis Interactions

    Audience
    Oracle Partners
    BI and EPM Application Developers and Implementers
    System Integrators and Solution Consultants
    Data Warehouse Developers
    Enterprise Architects

    Prerequisites
    Experience and understanding of OBIEE 11g is required
    Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended
    Good understanding of data warehousing and data modeling for reporting and analysis purposes
    Strong experience with database technologies preferred

    Equipment Requirements
    This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements.

    Hardware:
    Minimum 8GB RAM
    60 GB free space (includes staging)
    USB 2.0 port (at least one available)
    It is strongly recommended that you bring a mouse. You will be working in a development environment and using the mouse heavily.

    Software:
    One of the following operating systems: 64-bit Windows host/laptop OS, or a 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.)
    Internet Explorer 7.x/8.x or Firefox 3.5.x
    WINRAR or 7zip utility to unzip workshop files (downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/)
    Oracle VirtualBox 4.0.2 or higher (downloadable from http://www.virtualbox.org/wiki/Downloads)
    CPU virtualization mode needs to be enabled; we will provide guidance on the day of the workshop.
    Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.

    Schedule
    This workshop is 3 days (times vary by country):
    9:00am: Sign-in and technical setup
    9:30am: Workshop starts
    5:00pm: Workshop ends

    Oracle Exalytics and Business Intelligence (OBIEE) Workshop, December 11-13, 2012: Oracle BVP, Birmingham, UK. Register Here.

    Questions? Send email to: [email protected]

    Oracle Platform Technologies Enablement Services

    Read the article

  • Codeigniter: Using URIs with forms

    - by Kevin Brown
    I'm using URIs to direct a function in a library:

    $id = $this->CI->session->userdata('id');
    $URI = $this->CI->uri->uri_string();
    $new = "new";
    if (strpos($URI, $new) === FALSE) {
        $method = "update";
    } elseif (strpos($URI, $new) !== FALSE) {
        $method = "create";
    }

    So I have two if statements directing what information to load:

    if ($method === 'update') {
        // Modify form, first load
        $this->CI->db->from('be_survey');
        $this->CI->db->where('user_id', $id);
        $survey = $this->CI->db->get();
        $user = array_merge($user->row_array(), $survey->row_array());
        $this->CI->validation->set_default_value($user);
        // Display page
        $data['user'] = $user;
    }
    $this->CI->validation->set_rules($rules);
    if ($this->CI->validation->run() === FALSE) {
        // Output any errors
        $this->CI->validation->output_errors();
    } else {
        // Submit form
        $this->_submit($method);
    }

    Submit function:

    function _submit($method)
    {
        // Submit and update for current user
        $id = $this->CI->session->userdata('id');
        $this->CI->db->select('users.id, users.username, users.email, profiles.firstname, profiles.manager_id');
        $this->CI->db->from('be_users' . " users");
        $this->CI->db->join('be_user_profiles' . " profiles", 'users.id=profiles.user_id');
        $this->CI->db->having('id', $id);
        $email_data['user'] = $this->CI->db->get();
        $email_data['user'] = $email_data['user']->row();
        $manager_id = $email_data['user']->manager_id;
        $this->CI->db->select('firstname','email')->from('be_user_profiles')->where('user_id', $manager_id);
        $email_data['manager'] = $this->CI->db->get();
        $email_data['manager'] = $email_data['manager']->row();

        // Fetch what they entered in the form
        for ($i = 1; $i < 18; $i++) { $survey["a_".$i] = $this->CI->input->post('a_'.$i); }
        for ($i = 1; $i < 15; $i++) { $survey["b_".$i] = $this->CI->input->post('b_'.$i); }
        for ($i = 1; $i < 12; $i++) { $survey["c_".$i] = $this->CI->input->post('c_'.$i); }
        $profile['firstname'] = $this->CI->input->post('firstname');
        $profile['lastname'] = $this->CI->input->post('lastname');
        $profile['test_date'] = date("Y-m-d H:i:s");
        $profile['company_name'] = $this->CI->input->post('company_name');
        $profile['company_address'] = $this->CI->input->post('company_address');
        $profile['company_city'] = $this->CI->input->post('company_city');
        $profile['company_phone'] = $this->CI->input->post('company_phone');
        $profile['company_state'] = $this->CI->input->post('company_state');
        $profile['company_zip'] = $this->CI->input->post('company_zip');
        $profile['job_title'] = $this->CI->input->post('job_title');
        $profile['job_type'] = $this->CI->input->post('job_type');
        $profile['job_time'] = $this->CI->input->post('job_time');
        $profile['department'] = $this->CI->input->post('department');
        $profile['vision'] = $this->CI->input->post('vision');
        $profile['height'] = $this->CI->input->post('height');
        $profile['weight'] = $this->CI->input->post('weight');
        $profile['hand_dominance'] = $this->CI->input->post('hand_dominance');
        $profile['areas_of_fatigue'] = $this->CI->input->post('areas_of_fatigue');
        $profile['job_description'] = $this->CI->input->post('job_description');
        $profile['injury_review'] = $this->CI->input->post('injury_review');
        $profile['job_positive'] = $this->CI->input->post('job_positive');
        $profile['risk_factors'] = $this->CI->input->post('risk_factors');
        $profile['job_improvement_short'] = $this->CI->input->post('job_improvement_short');
        $profile['job_improvement_long'] = $this->CI->input->post('job_improvement_long');

        if ($method == "update") {
            // Begin db transaction
            $this->CI->db->trans_begin();
            $this->CI->home_model->update('Survey', $survey, array('user_id' => $id));
            $this->CI->db->update('be_user_profiles', $profile, array('user_id' => $id));
            if ($this->CI->db->trans_status() === FALSE) {
                flashMsg('error','There was a problem entering your test! Please contact an administrator.');
                redirect('survey','location');
            } else {
                // Get credits of the manager and subtract 1
                $this->CI->db->set('credits', 'credits -1', FALSE);
                $this->CI->db->update('be_user_profiles', $profile, array('user_id' => $manager_id));
                // Mark the form completed
                $this->CI->db->set('test_complete', '1');
                $this->CI->db->where('user_id', $id)->update('be_user_profiles');
                // Stuff worked...
                $this->CI->db->trans_commit();
                // Get manager information
                $this->CI->db->select('users.id, users.username, users.email, profiles.firstname');
                $this->CI->db->from('be_users' . " users");
                $this->CI->db->join('be_user_profiles' . " profiles", 'users.id=profiles.user_id');
                $this->CI->db->having('id', $email_data['user']->manager_id);
                $email_data['manager'] = $this->CI->db->get();
                $email_data['manager'] = $email_data['manager']->row();
                // Email user
                $this->CI->load->library('User_email');
                $data_user = array(
                    'firstname' => $email_data['user']->firstname,
                    'email' => $email_data['user']->email,
                    'user_completed' => $email_data['user']->firstname,
                    'site_name' => $this->CI->preference->item('site_name'),
                    'site_url' => base_url()
                );
                // Email manager
                $data_manager = array(
                    'firstname' => $email_data['manager']->firstname,
                    'email' => $email_data['manager']->email,
                    'user_completed' => $email_data['user']->firstname,
                    'site_name' => $this->CI->preference->item('site_name'),
                    'site_url' => base_url()
                );
                $this->CI->user_email->send($email_data['manager']->email, 'Completed the Assessment Tool', 'public/email_manager_complete', $data_manager);
                $this->CI->user_email->send($email_data['user']->email, 'Completed the Assessment Tool', 'public/email_user_complete', $data_user);
                flashMsg('success','You finished the assessment successfully!');
                redirect('home','location');
            }
        }
        // Create new user
        elseif ($method == "create") {
            // Build
            $profile['user_id'] = $id;
            $profile['manager_id'] = $manager_id;
            $profile['test_complete'] = '1';
            $survey['user_id'] = $id;
            $this->CI->db->trans_begin();
            // Add user_profile details to DB
            $this->CI->db->insert('be_user_profiles', $profile);
            $this->CI->db->insert('be_survey', $survey);
            if ($this->CI->db->trans_status() === FALSE) {
                // Registration failed
                $this->CI->db->trans_rollback();
                flashMsg('error', $this->CI->lang->line('userlib_registration_failed'));
                redirect('auth/register','location');
            } else {
                // User registered
                $this->CI->db->trans_commit();
                flashMsg('success', $this->CI->lang->line('userlib_registration_success'));
                redirect($this->CI->config->item('userlib_action_register'),'location');
            }
        }
    }

    The submit function is similar, updating the db if $method == "update" and inserting if $method == "create". The problem is that when the form is submitted, it doesn't take the URL into account, because the form submits to the function "survey", which passes data to the library function — so things are always updated, never created. How can I pass $method to the _submit() function correctly?
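    One approach that might sidestep the URI problem entirely: carry the method through the form itself, so the POST doesn't have to re-derive it from the URL. A hedged sketch using CodeIgniter's form helper (the hidden-field name "method" is invented for illustration, and it assumes the form helper is loaded):

        <!-- in the form view, alongside the other fields -->
        <?php echo form_hidden('method', $method); ?>

        // in the library, before dispatching to _submit():
        $posted = $this->CI->input->post('method');
        if ($posted === 'update' || $posted === 'create') {
            $method = $posted;   // trust the posted value only if it is one of the two known states
        }
        $this->_submit($method);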

    Read the article

  • Improving VPN performance - stronger encryption = more performance?

    - by Seth
    I have a site-to-site VPN set up with two SonicWalls (a TZ170 and a Pro1260). It was suggested to me that turning off encryption (so the VPN is tunneling only) would improve performance. (I'm not concerned with security, because the VPN is running over a trusted line.) Using FTP and HTTP transfers, I measured my baseline performance at about 130±10 kB/s. The IPsec (Phase 2) encryption was set to 3DES, so I set it to "none". However, the effect was the opposite -- the performance dropped to 60±30 kB/s, and the transfers stall for about 25 seconds before any data comes down the line. I tried AES-128 and the throughput went UP to 160±5 kB/s. The rated speed of my line is 193 kB/s (it's a T1). Contrary to what I would think, stronger IPsec encryption seems to improve throughput. Can anyone explain what might be going on here? Why would no encryption cause poor and highly variable performance, and cause transfers to stall? Why does AES-128 improve performance?

    Read the article

  • Routing data through VPN in Linux

    - by Shadyabhi
    I think it's a silly question, but here it goes: Terminal output: eth0 Link encap:Ethernet HWaddr 00:1c:c0:37:5e:25 inet addr:10.100.98.51 Bcast:10.100.98.255 Mask:255.255.255.0 inet6 addr: fe80::21c:c0ff:fe37:5e25/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:29677 errors:0 dropped:0 overruns:0 frame:0 TX packets:5209 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:3179007 (3.1 MB) TX bytes:610142 (610.1 KB) Memory:e0380000-e03a0000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:76 errors:0 dropped:0 overruns:0 frame:0 TX packets:76 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:9555 (9.5 KB) TX bytes:9555 (9.5 KB) vpn_0 Link encap:Ethernet HWaddr 00:ac:39:95:a1:16 inet6 addr: fe80::2ac:39ff:fe95:a116/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1786 errors:0 dropped:0 overruns:0 frame:0 TX packets:6 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:128597 (128.5 KB) TX bytes:468 (468.0 B) Actually, I followed this tutorial to set up the PacketiX VPN on Ubuntu. Now, how do I actually use this VPN? Terminal output: shadyabhi@shadyabhi-desktop:~$ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.100.98.0 * 255.255.255.0 U 1 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default 10.100.98.4 0.0.0.0 UG 0 0 0 eth0 shadyabhi@shadyabhi-desktop:~$ As told in the tutorial, if I do route del default followed by route add default dev vpn_0, I am not able to surf the internet. And I get the route command output as: root@shadyabhi-desktop:/home/shadyabhi# route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.100.98.0 * 255.255.255.0 U 1 0 0 eth0 link-local * 255.255.0.0 U 1000 0 0 eth0 default * 0.0.0.0 U 0 0 0 vpn_0 root@shadyabhi-desktop:/home/shadyabhi# I know I am not able to route the traffic properly. How do I do that?

    Read the article

  • Is ffmpeg incorrectly interpreting .aif files?

    - by marue
    Being on an Ubuntu 10.04 server, I installed the ffmpeg packages with apt. ffmpeg is working afterwards, and doing as it should. Almost. For testing purposes I uploaded a few audio files. One of them, an aif file, is not being correctly interpreted. While on my workhorse (Mac Snow Leopard) ffmpeg reports the format as Stream #0.0: Audio: pcm_s24be, 44100 Hz, 2 channels, s32, 2116 kb/s my Ubuntu server says it is: Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s which is the wrong bit depth. Ubuntu then fails to convert the file with the error message [pcm_s24be @ 0xcd4b580]invalid PCM packet Error while decoding stream #0.0 which certainly is not true. The file is perfectly valid. Are there any known issues with ffmpeg interpreting the aif format? How can I find out which version of the aif codec ffmpeg is using? Any ideas how to approach this issue? ffprobe output: FFprobe version SVN-r20090707, Copyright (c) 2007-2009 Stefano Sabatini libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 1 libavformat 52.31. 0 / 52.31. 0 built on Jan 20 2010 00:13:01, gcc: 4.4.3 20100116 (prerelease) Input #0, aiff, from 'testfile.aif': Duration: 00:00:04.00, start: 0.000000, bitrate: 2117 kb/s Stream #0.0: Audio: pcm_s24be, 44100 Hz, stereo, s16, 2116 kb/s Update 2: Forcing the conversion with -sample_fmt s32 doesn't change anything. The strange thing is: even without using -sample_fmt s32, I just realized that the conversion is working and creates valid audio files. There is just the error message from above.

    Read the article

  • How do I convert a video to GIF using ffmpeg, with reasonable quality?

    - by Kamil Hismatullin
    I'm converting a .flv movie to a .gif file with ffmpeg. ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif It works great, but the output GIF file has very low quality. Any ideas how I can improve the quality of the converted GIF? Output of the command: $ ffmpeg -i input.flv -ss 00:00:00.000 -pix_fmt rgb24 -r 10 -s 320x240 -t 00:00:10.000 output.gif ffmpeg version 0.8.5-6:0.8.5-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers built on Jan 24 2013 14:52:53 with gcc 4.7.2 *** THIS PROGRAM IS DEPRECATED *** This program is only provided for compatibility and will be removed in a future release. Please use avconv instead. Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.flv': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2013-02-14 04:00:07 Duration: 00:00:18.85, start: 0.000000, bitrate: 3098 kb/s Stream #0.0(und): Video: h264 (High), yuv420p, 1280x720, 2905 kb/s, 25 fps, 25 tbr, 50 tbn, 50 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream #0.1(und): Audio: aac, 44100 Hz, stereo, s16, 192 kb/s Metadata: creation_time : 2013-02-14 04:00:07 [buffer @ 0x92a8ea0] w:1280 h:720 pixfmt:yuv420p [scale @ 0x9215100] w:1280 h:720 fmt:yuv420p -> w:320 h:240 fmt:rgb24 flags:0x4 Output #0, gif, to 'output.gif': Metadata: major_brand : mp42 minor_version : 0 compatible_brands: isommp42 creation_time : 2013-02-14 04:00:07 encoder : Lavf53.21.1 Stream #0.0(und): Video: rawvideo, rgb24, 320x240, q=2-31, 200 kb/s, 90k tbn, 10 tbc Metadata: creation_time : 1970-01-01 00:00:00 Stream mapping: Stream #0.0 -> #0.0 Press ctrl-c to stop encoding frame= 101 fps= 32 q=0.0 Lsize= 8686kB time=10.10 bitrate=7045.0kbits/s dup=0 drop=149 video:22725kB audio:0kB global headers:0kB muxing overhead -61.778676% Thanks.

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the time it takes data to move from the platters to the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD, for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing? Just so you know where I'm getting this info (Book: "Information Storage and Management: Storing, Managing..."): Consider an example with the following specifications provided for a disk: The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms. Disk rotation speed of 15,000 revolutions per minute or 250 revolutions per second — from which rotational latency (L) can be determined, which is one-half of the time taken for a full rotation or L = (0.5/250 rps expressed in ms). 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O — for example, an I/O with a block size of 32 KB; therefore X = 32 KB/40 MB. Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is (TS) = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second or IOPS is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
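    To sanity-check the book's arithmetic, here is a small worked sketch (my own illustration, not from the book or the question) using the same example drive: 5 ms average seek, 15,000 rpm, 40 MB/s internal transfer rate, and a 32 KB I/O block:

    using System;

    class IopsEstimate
    {
        static void Main()
        {
            double seekMs = 5.0;                                 // average seek time (T)
            double rotationalLatencyMs = 0.5 / 250.0 * 1000.0;   // half a rotation at 250 rps = 2 ms (L)
            double transferMs = 32.0 / (40.0 * 1024.0) * 1000.0; // 32 KB at 40 MB/s = ~0.78 ms (X)

            double serviceTimeMs = seekMs + rotationalLatencyMs + transferMs; // TS = ~7.8 ms
            double iops = 1000.0 / serviceTimeMs;                             // ~128 IOPS

            Console.WriteLine("TS = {0:F2} ms, max IOPS = {1:F1}", serviceTimeMs, iops);
        }
    }

    This prints TS = 7.78 ms and roughly 128.5 IOPS, which matches the book's figure of 128 once TS is rounded to 7.8 ms.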

    Read the article

  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
    I'm using 3G as my primary internet connection, and TCP over this thing is getting more puzzling every day. For example: Downloading from kernel.org is crazy fast: $ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2 increases to ~500kB/s after a few secs! Some servers are incredibly slow, for instance www.graphic-pc.com: same thing; downloading a big file with wget, it starts at ~30kB/s for a split second, then collapses to 5-10k or even worse. Web browsing is decent but somewhat unreliable. Randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately. Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and OMG, suddenly everything's extremely fast! The same www.graphic-pc.com now shoots at 100-200kB/s! What's going on here??? How come it is so much better with the VPN than without?? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server), or some buggy router in between?? Notes: The setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection). I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not cell environment or internet congestion. (Although kernel.org without VPN sometimes does worse in the evening, 60kB or so - but still 500kB with VPN!) For 2), Wireshark shows retransmitted packets, dup ACKs, even out-of-order packets sometimes. I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...); it doesn't seem to make a difference. Update: Tried under Windows 7 (no VPN) with some interesting results: with default tcp settings, kernel.org gives 10 kB/s and graphic-pc.com gives 8 kB/s; with tcp_optimizer, kernel.org gives 20 kB/s and graphic-pc.com gives 70 kB/s! tcp_optimizer turned on CTCP among other things. Have to check what OS graphic-pc.com is running; my bet is Linux's tcp_westwood and MS CTCP don't mix well here...

    Read the article

  • Route web traffic through a separate iterface

    - by tkane
    I'd like to route web traffic through the wlan0 interface and the rest through eth1. Can you please help me with the iptables commands to achieve this? Below is my configuration. Thank you :) Edit: This is about a desktop configuration, not a web server setup. Basically I want to use one of my connections to browse the web and the other one for everything else. ifconfig: eth1 Link encap:Ethernet HWaddr 00:1d:09:59:80:70 inet addr:192.168.2.164 Bcast:192.168.2.255 Mask:255.255.255.0 inet6 addr: fe80::21d:9ff:fe59:8070/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:33 errors:0 dropped:0 overruns:0 frame:0 TX packets:41 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4771 (4.7 KB) TX bytes:7081 (7.0 KB) Interrupt:17 wlan0 Link encap:Ethernet HWaddr 00:1c:bf:90:8a:6d inet addr:192.168.1.70 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21c:bfff:fe90:8a6d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:77 errors:0 dropped:0 overruns:0 frame:0 TX packets:102 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:14256 (14.2 KB) TX bytes:14764 (14.7 KB) route: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.2.0 * 255.255.255.0 U 1 0 0 eth1 192.168.1.0 * 255.255.255.0 U 2 0 0 wlan0 link-local * 255.255.0.0 U 1000 0 0 wlan0 default adsl 0.0.0.0 UG 0 0 0 eth1

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi. So I saw this post here and read it, and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx The first way uses 2 ado.net 2.0 features, BulkInsert and BulkCopy. The second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However, as one person pointed out in the posts, that's just going around the issue at the cost of performance (nothing wrong with that in my opinion). First I will talk about the 2 ways in the first tutorial. I am using VS2010 Express, .net 4.0, MVC 2.0, SQL Server 2005. Is ado.net 2.0 the most current version? Based on the technology I am using, are there some updates to what I am going to show that would improve it somehow? Is there anything that these tutorials left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So the first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? I am sending 500,000 records, so I did a batch size of 500,000. Next, why does it crash when I do this? If I set the batch size to 1000 it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: The time it took to insert 500,000 records with an insert batch size of 1000 was "2 mins and 54 seconds". Of course this is no official time; I sat there with a stopwatch (I am sure there are better ways but was too lazy to look up what they were). So I find that kinda slow compared to all my other ones (except the linq to sql insert one), and I am not really sure why. Next I looked at bulk copy. /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need an SP (can you use an SP with bulk copy? If you can, would it be better?). BatchCopy had no problem with a 500,000 batch size. So again, why make it smaller than the number of records you want to send? I found that with BatchCopy and a 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster than the BulkInsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. /// It is then serialized to an XML document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood, but I do not even know how to take this example SP and change it to fit my tables, since like I said I don't know what is going on. I also don't know what would happen if the object has more tables in it. Like say I have a ProductName table that has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good: for 500,000 records it took 52 seconds. The last way of course was just using linq to do it all, and it was pretty bad. /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it down to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationships, for either way. I am not sure how they both stand up when doing updates instead of inserts, as I have not gotten around to trying it yet. I don't think I will ever need to insert/update more than 50,000 records at one time, but at the same time I know I will have to do validation on records before inserting, so that will slow it down, and that sort of makes linq to sql nicer, as you've got objects, especially if you're first parsing data from an xml file before you insert into the database. Full C# code: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serialized to an XML document and sent to a stored procedure /// that then does a bulk insert (I think with OpenXML). /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTable to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower than the "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1.
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
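    On the batch size question in this post: with SqlBulkCopy, BatchSize is the number of rows sent per round trip (and per internal transaction, if you opt into one), so it does not have to equal the total row count. Below is a minimal sketch of that usage; it is my own illustration rather than code from the question, reusing the question's TBL_TEST_TEST table, with the BatchSize and timeout values being arbitrary assumptions:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class BulkCopySketch
    {
        static void Main()
        {
            string connectionString = ""; // supply your own connection string

            // Same kind of in-memory source the question builds in GetDataTable().
            DataTable table = new DataTable();
            table.Columns.Add("NAME");
            for (int i = 0; i < 500000; i++)
            {
                table.Rows.Add("Name : " + i);
            }

            // UseInternalTransaction makes each batch its own transaction,
            // so a failure only rolls back the batch that failed.
            using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.UseInternalTransaction))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";
                sbc.ColumnMappings.Add("NAME", "NAME");
                sbc.BatchSize = 5000;      // rows per round trip, not total rows
                sbc.BulkCopyTimeout = 600; // seconds; the default of 30 can be too short for large loads
                sbc.WriteToServer(table);
            }

            Console.WriteLine("done");
        }
    }

    SqlBulkCopy also has a WriteToServer(IDataReader) overload, which avoids materializing all 500,000 rows in a DataTable first when the source data can be streamed.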

    Read the article

  • Using JSON.NET for dynamic JSON parsing

    - by Rick Strahl
    With the release of ASP.NET Web API as part of .NET 4.5 and MVC 4.0, JSON.NET has effectively pushed out the .NET native serializers to become the default serializer for Web API. JSON.NET is vastly more flexible than the built-in DataContractJsonSerializer or the older JavaScript serializer. The DataContractSerializer in particular has been very problematic in the past because it can't deal with untyped objects for serialization - like values of type object, or anonymous types which are quite common these days. The JavaScript Serializer that came before it actually does support non-typed objects for serialization but it can't do anything with untyped data coming in from JavaScript and its overall model of extensibility was pretty limited (JavaScript Serializer is what MVC uses for JSON responses). JSON.NET provides a robust JSON serializer that has both high level and low level components, supports binary JSON, JSON contracts, Xml to JSON conversion, LINQ to JSON and many, many more features than either of the built-in serializers. ASP.NET Web API now uses JSON.NET as its default serializer and is now pulled in as a NuGet dependency into Web API projects, which is great. Dynamic JSON Parsing One of the features that I think is getting ever more important is the ability to serialize and deserialize arbitrary JSON content dynamically - that is without mapping the JSON captured directly into a .NET type as DataContractSerializer or the JavaScript Serializers do. Sometimes it isn't possible to map types due to the differences in languages (think collections, dictionaries etc), and other times you simply don't have the structures in place or don't want to create them to actually import the data. If this topic sounds familiar - you're right! I wrote about dynamic JSON parsing a few months back before JSON.NET was added to Web API and when Web API and the System.Net HttpClient libraries included the System.Json classes like JsonObject and JsonArray. With the inclusion of JSON.NET in Web API these classes are now obsolete and didn't ship with Web API or the client libraries. I re-linked my original post to this one. In this post I'll discuss JToken, JObject and JArray, which are the dynamic JSON objects that make it very easy to create and retrieve JSON content on the fly without underlying types. Why Dynamic JSON? So, why Dynamic JSON parsing rather than strongly typed parsing? Since applications are interacting more and more with third party services it becomes ever more important to have easy access to those services with easy JSON parsing. Sometimes it just makes a lot of sense to pull just a small amount of data out of a large JSON document received from a service, because the third party service isn't directly related to your application's logic most of the time - and it makes little sense to map the entire service structure in your application. For example, recently I worked with the Google Maps Places API to return information about businesses close to my (or rather the app's) location. The Google API returns a ton of information that my application had no interest in - all I needed was a few values out of the data. Dynamic JSON parsing makes it possible to map this data, without having to map the entire API to a C# data structure. Instead I could pull out the three or four values I needed from the API and directly store it on my business entities that needed to receive the data - no need to map the entire Maps API structure.
    Getting JSON.NET The easiest way to use JSON.NET is to grab it via NuGet and add it as a reference to your project. You can add it to your project with: PM> Install-Package Newtonsoft.Json From the Package Manager Console or by using Manage NuGet Packages in your project References. As mentioned, if you're using ASP.NET Web API or MVC 4, JSON.NET will be automatically added to your project. Alternately you can also go to the CodePlex site and download the latest version including source code: http://json.codeplex.com/ Creating JSON on the fly with JObject and JArray Let's start with creating some JSON on the fly. It's super easy to create a dynamic object structure with any of the JToken derived JSON.NET objects. The most common JToken derived classes you are likely to use are JObject and JArray. JToken implements IDynamicMetaObjectProvider and so uses the dynamic keyword extensively to make it intuitive to create object structures and turn them into JSON via dynamic object syntax. Here's an example of creating a music album structure with child songs, using JObject for the base object and songs, and JArray for the actual collection of songs: [TestMethod] public void JObjectOutputTest() { // strong typed instance var jsonObject = new JObject(); // you can explicitly add values here using class interface jsonObject.Add("Entered", DateTime.Now); // or cast to dynamic to dynamically add/read properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; album.Artist = "AC/DC"; album.YearReleased = 1976; album.Songs = new JArray() as dynamic; dynamic song = new JObject(); song.SongName = "Dirty Deeds Done Dirt Cheap"; song.SongLength = "4:11"; album.Songs.Add(song); song = new JObject(); song.SongName = "Love at First Feel"; song.SongLength = "3:10"; album.Songs.Add(song); Console.WriteLine(album.ToString()); } This produces a complete JSON structure: { "Entered": "2012-08-18T13:26:37.7137482-10:00", "AlbumName": "Dirty Deeds Done Dirt Cheap", "Artist": "AC/DC", "YearReleased": 1976, "Songs": [ { "SongName": "Dirty Deeds Done Dirt Cheap", "SongLength": "4:11" }, { "SongName": "Love at First Feel", "SongLength": "3:10" } ] } Notice that JSON.NET does a nice job formatting the JSON, so it's easy to read and paste into blog posts :-). JSON.NET includes a bunch of configuration options that control how JSON is generated. Typically the defaults are just fine, but you can override with the JsonSettings object for most operations. The important thing about this code is that there's no explicit type used for holding the values to serialize to JSON. Rather the JSON.NET objects are the containers that receive the data as I build up my JSON structure dynamically, simply by adding properties. This means this code can be entirely driven at runtime without compile-time constraints on the structure of the JSON output. Here I use JObject to create an album 'object' and immediately cast it to dynamic. JObject() is kind of similar in behavior to ExpandoObject in that it allows you to add properties by simply assigning to them. Internally, JObject values are stored in pseudo collections of key value pairs that are exposed as properties through the IDynamicMetaObjectProvider interface exposed in JSON.NET's JToken base class. For objects the syntax is very clean - you add simple typed values as properties. For objects and arrays you have to explicitly create a new JObject or JArray, cast them to dynamic, and then add properties and items to them.
    Always remember, though, these values are dynamic - which means no Intellisense and no compiler type checking. It's up to you to ensure that the names and values you create are accessed consistently and without typos in your code. Note that you can also access the JObject instance directly (not as dynamic) and get access to the underlying JObject type. This means you can assign properties by string, which can be useful for fully data driven JSON generation from other structures. Below you can see both styles of access next to each other: // strong type instance var jsonObject = new JObject(); // you can explicitly add values here jsonObject.Add("Entered", DateTime.Now); // expando style instance you can just 'use' properties dynamic album = jsonObject; album.AlbumName = "Dirty Deeds Done Dirt Cheap"; JContainer (the base class for JObject and JArray) is a collection, so you can also iterate over the properties at runtime easily: foreach (var item in jsonObject) { Console.WriteLine(item.Key + " " + item.Value.ToString()); } The functionality of the JSON objects is very similar to .NET's ExpandoObject, and if you used it before, you're already familiar with how the dynamic interfaces to the JSON objects work. Importing JSON with JObject.Parse() and JArray.Parse() The JValue structure supports importing JSON via the Parse() and Load() methods, which can read JSON data from a string or various streams respectively. Essentially JValue includes the core JSON parsing to turn a JSON string into a collection of JToken objects that can then be referenced using familiar dynamic object syntax. Here's a simple example: public void JValueParsingTest() { var jsonString = @"{""Name"":""Rick"",""Company"":""West Wind"", ""Entered"":""2012-03-16T00:03:33.245-10:00""}"; dynamic json = JValue.Parse(jsonString); // values require casting string name = json.Name; string company = json.Company; DateTime entered = json.Entered; Assert.AreEqual(name, "Rick"); Assert.AreEqual(company, "West Wind"); } The JSON string represents an object with three properties, which is parsed into a JObject class and cast to dynamic. Once cast to dynamic I can then go ahead and access the object using familiar object syntax. Note that the actual values - json.Name, json.Company, json.Entered - are actually of type JToken and I have to cast them to their appropriate types first before I can do type comparisons, as in the Asserts at the end of the test method. This is required because of the way that dynamic types work, which can't determine the type based on the method signature of the Assert.AreEqual(object,object) method. I have to either assign the dynamic value to a variable as I did above, or explicitly cast ( (string) json.Name) in the actual method call. The JSON structure can be much more complex than this simple example.
    Here's another example of an array of albums serialized to JSON and then parsed with JArray.Parse(): [TestMethod] public void JsonArrayParsingTest() { var jsonString = @"[ { ""Id"": ""b3ec4e5c"", ""AlbumName"": ""Dirty Deeds Done Dirt Cheap"", ""Artist"": ""AC/DC"", ""YearReleased"": 1976, ""Entered"": ""2012-03-16T00:13:12.2810521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/61kTaH-uZBL._AA115_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/gp/product/…ASIN=B00008BXJ4"", ""Songs"": [ { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Dirty Deeds Done Dirt Cheap"", ""SongLength"": ""4:11"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Love at First Feel"", ""SongLength"": ""3:10"" }, { ""AlbumId"": ""b3ec4e5c"", ""SongName"": ""Big Balls"", ""SongLength"": ""2:38"" } ] }, { ""Id"": ""7b919432"", ""AlbumName"": ""End of the Silence"", ""Artist"": ""Henry Rollins Band"", ""YearReleased"": 1992, ""Entered"": ""2012-03-16T00:13:12.2800521-10:00"", ""AlbumImageUrl"": ""http://ecx.images-amazon.com/images/I/51FO3rb1tuL._SL160_AA160_.jpg"", ""AmazonUrl"": ""http://www.amazon.com/End-Silence-Rollins-Band/dp/B0000040OX/ref=sr_1_5?ie=UTF8&qid=1302232195&sr=8-5"", ""Songs"": [ { ""AlbumId"": ""7b919432"", ""SongName"": ""Low Self Opinion"", ""SongLength"": ""5:24"" }, { ""AlbumId"": ""7b919432"", ""SongName"": ""Grip"", ""SongLength"": ""4:51"" } ] } ]"; JArray jsonVal = JArray.Parse(jsonString) as JArray; dynamic albums = jsonVal; foreach (dynamic album in albums) { Console.WriteLine(album.AlbumName + " (" + album.YearReleased.ToString() + ")"); foreach (dynamic song in album.Songs) { Console.WriteLine("\t" + song.SongName); } } Console.WriteLine(albums[0].AlbumName); Console.WriteLine(albums[0].Songs[1].SongName); } JObject and JArray in ASP.NET Web API Of course these types also work in ASP.NET Web API controller methods. If you want, you can accept parameters using these objects or return them back to the server. The following contrived example receives dynamic JSON input, and then creates a new dynamic JSON object and returns it based on data from the first: [HttpPost] public JObject PostAlbumJObject(JObject jAlbum) { // dynamic input from inbound JSON dynamic album = jAlbum; // create a new JSON object to write out dynamic newAlbum = new JObject(); // Create properties on the new instance // with values from the first newAlbum.AlbumName = album.AlbumName + " New"; newAlbum.NewProperty = "something new"; newAlbum.Songs = new JArray(); foreach (dynamic song in album.Songs) { song.SongName = song.SongName + " New"; newAlbum.Songs.Add(song); } return newAlbum; } The raw POST request to the server looks something like this: POST http://localhost/aspnetwebapi/samples/PostAlbumJObject HTTP/1.1 User-Agent: Fiddler Content-type: application/json Host: localhost Content-Length: 88 {AlbumName: "Dirty Deeds",Songs:[ { SongName: "Problem Child"},{ SongName: "Squealer"}]} and the output that comes back looks like this: { "AlbumName": "Dirty Deeds New", "NewProperty": "something new", "Songs": [ { "SongName": "Problem Child New" }, { "SongName": "Squealer New" } ] } The original values are echoed back with something extra appended to demonstrate that we're working with a new object.
    When you receive or return a JObject, JValue, JToken or JArray instance in a Web API method, Web API ignores normal content negotiation and assumes your content is going to be received and returned as JSON, so effectively the parameter and result type explicitly determines the input and output format, which is nice. Dynamic to Strong Type Mapping You can also map JObject and JArray instances to a strongly typed object, so you can mix dynamic and static typing in the same piece of code. Using the 2 Album jsonString shown earlier, the code below takes an array of albums and picks out only a single album and casts that album to a static Album instance. [TestMethod] public void JsonParseToStrongTypeTest() { JArray albums = JArray.Parse(jsonString) as JArray; // pick out one album JObject jalbum = albums[0] as JObject; // Copy to a static Album instance Album album = jalbum.ToObject<Album>(); Assert.IsNotNull(album); Assert.AreEqual(album.AlbumName,jalbum.Value<string>("AlbumName")); Assert.IsTrue(album.Songs.Count > 0); } This is pretty damn useful for the scenario I mentioned earlier - you can read a large chunk of JSON and dynamically walk the property hierarchy down to the item you want to access, and then either access the specific item dynamically (as shown earlier) or map a part of the JSON to a strongly typed object. That's very powerful if you think about it - it leaves you in total control to decide what's dynamic and what's static. Strongly typed JSON Parsing With all this talk of dynamic let's not forget that JSON.NET of course also does strongly typed serialization, which is drop dead easy. Here's a simple example of how to serialize and deserialize an object with JSON.NET: [TestMethod] public void StronglyTypedSerializationTest() { // Demonstrate deserialization from a raw string var album = new Album() { AlbumName = "Dirty Deeds Done Dirt Cheap", Artist = "AC/DC", Entered = DateTime.Now, YearReleased = 1976, Songs = new List<Song>() { new Song() { SongName = "Dirty Deeds Done Dirt Cheap", SongLength = "4:11" }, new Song() { SongName = "Love at First Feel", SongLength = "3:10" } } }; // serialize to string string json2 = JsonConvert.SerializeObject(album,Formatting.Indented); Console.WriteLine(json2); // make sure we can deserialize back var album2 = JsonConvert.DeserializeObject<Album>(json2); Assert.IsNotNull(album2); Assert.IsTrue(album2.AlbumName == "Dirty Deeds Done Dirt Cheap"); Assert.IsTrue(album2.Songs.Count == 2); } JsonConvert is a high level static class that wraps lower level functionality, but you can also use the JsonSerializer class, which allows you to serialize/parse to and from streams. It's a little more work, but gives you a bit more control. The functionality available is easy to discover with Intellisense, and that's good because there's not a lot in the way of documentation that's actually useful. Summary JSON.NET is a pretty complete JSON implementation with lots of different choices for JSON parsing, from dynamic parsing to static serialization, to complex querying of JSON objects using LINQ. It's good to see this open source library getting integrated into .NET, and pushing out the old and tired stock .NET parsers so that we finally have a bit more flexibility - and extensibility - in our JSON parsing. Good to go!
    Resources: Sample Test Project, http://json.codeplex.com/ © Rick Strahl, West Wind Technologies, 2005-2012. Posted in .NET, Web API, AJAX

    Read the article

< Previous Page | 355 356 357 358 359 360 361 362 363 364 365 366  | Next Page >