Search Results

Search found 2521 results on 101 pages for 'joe smith'.


  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - Introduction

    In his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, or site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started.

    We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do… often what we do best! We “fat finger”, we spill drinks on keyboards, we unplug the wrong cables, etc. In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events:

    - Accidentally updated a table with wrong values
    - Performed a batch update that went wrong, due to logical errors in the code
    - Dropped a table

    How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point in time when the error occurred (incomplete or point-in-time recovery). Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process.

    Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction, and how can it be reversed? Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and more convenient recovery from logical corruptions.

    History

    Flashback Query, the first Flashback technology, was introduced in Oracle 9i. It provides a simple, powerful, and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time. Flashback technologies were further enhanced in Oracle 10g to provide fast, easy recovery at the database, table, row, and even transaction level. Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation. In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c.

    Flashback Features

    - Oracle Flashback Database enables point-in-time recovery of the entire database without requiring a traditional restore and recovery operation. It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.
    - Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.
    - Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.
    - Oracle Flashback Query enables data to be viewed at a point in time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional changes or deletions. It can also be used to build self-service error correction into applications, empowering end users to undo and correct their errors.
    - Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCNs).
    - Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing its SQL.
    - Oracle Flashback Transaction is a procedure used to back out a transaction and its dependent transactions.

    Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via the SQL command line or via Enterprise Manager. Most of the Flashback technologies depend on the available UNDO to retrieve older data.

    Example Syntax

    The following describes the various Flashback technologies – their purpose, example syntax, and the situations where each individual technology can be used.

    Error investigation related (the purpose is to investigate what went wrong and what the values were at certain points in time):

    - Flashback Query (SELECT ... AS OF SCN | TIMESTAMP) – helps to see the value of a row or set of rows at a point in time.
    - Flashback Version Query (SELECT ... VERSIONS BETWEEN SCN | TIMESTAMP AND SCN | TIMESTAMP) – helps determine how a value evolved between certain SCNs or timestamps.
    - Flashback Transaction Query (SELECT ... WHERE XID = ...) – helps to understand how a transaction caused the changes.

    Error correction related (the purpose is to fix the error and correct the problems):

    - Flashback Table (FLASHBACK TABLE ... TO SCN | TIMESTAMP) – rewinds the table to a particular timestamp or SCN to reverse unwanted updates.
    - Flashback Drop (FLASHBACK TABLE ... TO BEFORE DROP) – undrops or undeletes a table.
    - Flashback Database (FLASHBACK DATABASE TO SCN | RESTORE POINT) – the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery).
    - Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID ...)) – reverses a transaction and its related transactions.

    Advanced use cases

    Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard, so apart from the basic use cases mentioned above, the following use cases (among others) are addressed using Oracle Flashback:

    - Block media recovery by RMAN – to perform block-level recovery.
    - Snapshot Standby – where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes.
    - Reinstating an old primary in a Data Guard environment – this avoids the need to restore an old backup and perform a recovery to make it a new standby.
    - Guaranteed Restore Points – to bring the entire database back to an older point in time in a guaranteed way.

    I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors. As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.
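    To make the syntax above concrete, here is a minimal sketch of the investigation and correction commands, written against the EMPLOYEES table of Oracle's HR sample schema; the table name, timestamps, and key values are illustrative assumptions, not part of the original post.

    -- Investigate: what did the row look like before the error?
    SELECT employee_id, salary
      FROM employees AS OF TIMESTAMP
           TO_TIMESTAMP('2014-05-29 08:00:00', 'YYYY-MM-DD HH24:MI:SS')
     WHERE employee_id = 100;

    -- Investigate: how did the row change between then and now?
    SELECT versions_startscn, versions_operation, salary
      FROM employees VERSIONS BETWEEN TIMESTAMP
           TO_TIMESTAMP('2014-05-29 08:00:00', 'YYYY-MM-DD HH24:MI:SS') AND SYSTIMESTAMP
     WHERE employee_id = 100;

    -- Correct: rewind the table (row movement must be enabled first)
    ALTER TABLE employees ENABLE ROW MOVEMENT;
    FLASHBACK TABLE employees TO TIMESTAMP
      TO_TIMESTAMP('2014-05-29 08:00:00', 'YYYY-MM-DD HH24:MI:SS');

    -- Correct: recover an accidentally dropped table from the recycle bin
    FLASHBACK TABLE employees TO BEFORE DROP;

    Because Flashback Query and Flashback Version Query read older data from UNDO, how far back you can look depends on the UNDO_RETENTION setting and the size of the undo tablespace.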

    Read the article

  • T4 Performance Counters explained

    - by user13346607
    Now that T4 has been out for a few months, some people might have wondered what details of the new pipeline you can monitor. A "cpustat -h" lists a lot of events that can be monitored, and only very few are self-explanatory. I will try to give some insight on all of them; some of these "PIC events" require an in-depth knowledge of the T4 pipeline. Over time I will try to explain these; for the time being, those events should simply be ignored. (Side note: some counters changed from tape-out 1.1 (*only* used in the T4 beta program) to tape-out 1.2 (used in the systems shipping today). The list below covers only the tape-out 1.2 counters.) The counters are listed by their PIC name (as shown by cpustat), followed by a prose comment.

    - Sel-pipe-drain-cycles, Sel-0-[wait|ready], Sel-[1,2]: Sel-0-wait counts cycles a strand waits to be selected. Some reasons can be counted in detail; these are: Sel-0-ready – cycles a strand was ready but not selected, which can signal pipeline oversubscription; Sel-1 – cycles only one instruction or µop was selected; Sel-2 – cycles two instructions or µops were selected; Sel-pipe-drain-cycles – cf. PRM footnote 8 to table 10.2.
    - Pick-any, Pick-[0|1|2|3]: Cycles one, two, three, no, or at least one instruction or µop is picked.
    - Instr_FGU_crypto: Number of FGU or crypto instructions executed on that vcpu.
    - Instr_ld: Ditto, for loads.
    - Instr_st: Ditto, for stores.
    - SPR_ring_ops: Ditto, for SPR ring ops.
    - Instr_other: Ditto, for all other instructions not listed above; PRM footnote 7 to table 10.2 lists the instructions.
    - Instr_all: Total number of instructions executed on that vcpu.
    - Sw_count_intr: Number of S/W count instructions on that vcpu (sethi %hi(fc000),%g0 (whatever that is)).
    - Atomics: Number of atomic ops, which are LDSTUB/A, CASA/XA, and SWAP/A.
    - SW_prefetch: Number of PREFETCH or PREFETCHA instructions.
    - Block_ld_st: Block loads or stores on that vcpu.
    - IC_miss_nospec, IC_miss_[L2_or_L3|local|remote]_hit_nospec: Various I$ misses, distinguished by where they hit. All of these count per thread, but only primary events: T4 counts only the first occurrence of an I$ miss on a core for a certain instruction. If one strand misses in I$ that miss is counted, but if a second strand on the same core misses while the first miss is being resolved, that second miss is not counted. This flavour of I$ miss counts only misses that are caused by instructions that really commit (note the "_nospec").
    - BTC_miss: Branch target cache misses.
    - ITLB_miss: ITLB misses (counted synchronously).
    - ITLB_miss_asynch: Ditto, but counted asynchronously.
    - [I|D]TLB_fill_[8KB|64KB|4MB|256MB|2GB|trap]: H/W tablewalk events that fill the ITLB or DTLB with a translation for the corresponding page size. The "_trap" event occurs if the HWTW was not able to fill the corresponding TLB.
    - IC_mtag_miss, IC_mtag_miss_[ptag_hit|ptag_miss|ptag_hit_way_mismatch]: I$ micro tag misses, with some options for drill-down.
    - Fetch-0, Fetch-0-all: Fetch-0 counts the number of cycles nothing was fetched for this particular strand; Fetch-0-all counts cycles nothing was fetched for all strands on a core.
    - Instr_buffer_full: Cycles the instruction buffer for a strand was full, thereby preventing any fetch.
    - BTC_targ_incorrect: Counts all occurrences of wrongly predicted branch targets from the BTC.
    - [PQ|ROB|LB|ROB_LB|SB|ROB_SB|LB_SB|RB_LB_SB|DTLB_miss]_tag_wait: ST_q_tag_wait is listed under sl=20. These counters monitor pipeline behaviour and are therefore not strand specific: PQ_... – cycles the Rename stage waits for a Pick Queue tag (might signal a memory-bound workload in single-thread mode, cf. mail from Richard Smith); ROB_... – cycles the Select stage waits for a ROB (ReOrder Buffer) tag; LB_... – cycles the Select stage waits for a Load Buffer tag; SB_... – cycles the Select stage waits for a Store Buffer tag. Combinations of the above are allowed; although some of these events can overlap, the counter will only be incremented once per cycle if any of them occur. DTLB_... – cycles load or store instructions wait at the Pick stage for a DTLB miss tag.
    - [I|D]TLB_HWTW_[L2_hit|L3_hit|L3_miss|all]: Counters for HWTW accesses caused by either DTLB or ITLB misses. Can be further detailed by where they hit.
    - IC_miss_L2_L3_hit, IC_miss_local_remote_remL3_hit, IC_miss: I$ prefetches that were dropped because they miss in either L2$ or L3$. This variant counts misses regardless of whether the causing instruction commits or not.
    - DC_miss_nospec, DC_miss_[L2_L3|local|remote_L3]_hit_nospec: D$ misses, either in general or detailed by where they hit; cf. the explanation of the IC_miss counters in two flavours for an explanation of "_nospec" and the reasoning for two DC_miss counters.
    - DTLB_miss_asynch: Counts all DTLB misses asynchronously; there is no way to count them synchronously.
    - DC_pref_drop_DC_hit, SW_pref_drop_[DC_hit|buffer_full]: L1-D$ h/w prefetches that were dropped because of a D$ hit, counted per core. The others count software prefetches per strand.
    - [Full|Partial]_RAW_hit_st_[buf|q]: Count events where a load wants to get data that has not yet been stored, i.e. it is still inside the pipeline. The data might be either still in the store buffer or in the store queue. If the load's data matches in the SB and in the store queue, the data in the buffer takes precedence, of course, since it is younger.
    - [IC|DC]_evict_invalid, [IC|DC|L1]_snoop_invalid, [IC|DC|L1]_invalid_all: Counters for invalidated cache evictions, per core.
    - St_q_tag_wait: Number of cycles the pipeline waits for a store queue tag, of course counted per core.
    - Data_pref_[drop_L2|drop_L3|hit_L2|hit_L3|hit_local|hit_remote]: Data prefetches that can be further detailed by either why they were dropped or where they hit.
    - St_hit_[L2|L3], St_L2_[local|remote]_C2C, St_local, St_remote: Store events distinguished by where they hit or where they cause an L2 cache-to-cache transfer, i.e. either a transfer from another L2$ on the same die or from a different die.
    - DC_miss, DC_miss_[L2_L3|local|remote]_hit: D$ misses, either in general or detailed by where they hit; cf. the explanation of the IC_miss counters in two flavours for an explanation of "_nospec" and the reasoning for two DC_miss counters.
    - L2_[clean|dirty]_evict: Per-core clean or dirty L2$ evictions.
    - L2_fill_buf_full, L2_wb_buf_full, L2_miss_buf_full: Per-core L2$ buffer events; all count the number of cycles that the respective state was present.
    - L2_pipe_stall: Per-core cycles the pipeline stalled because of L2$.
    - Branches: Counts branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    - Br_taken: Counts taken branches (Tcc, DONE, RETRY, and SIT are not counted as branches).
    - Br_mispred, Br_dir_mispred, Br_trg_mispred, Br_trg_mispred_[far_tbl|indir_tbl|ret_stk]: Counters for various branch misprediction events.
    - Cycles_user: Counts cycles; the attribute setting (hpriv, nouser, sys) controls the address space to count in.
    - Commit-[0|1|2], Commit-0-all, Commit-1-or-2: Number of times either no, one, or two µops commit for a strand. Commit-0-all counts the number of times no µop commits for the whole core; cf. footnote 11 to table 10.2 in the PRM for a more detailed explanation of how this counter interacts with the privilege levels.

    Read the article

  • Creating a Training Lab on Windows Azure

    - by Michael Stephenson
    Originally posted on: http://geekswithblogs.net/michaelstephenson/archive/2013/06/17/153149.aspxThis week we are preparing for a training course that Alan Smith will be running for the support teams at one of my customers around Windows Azure. In order to facilitate the training lab we have a few prerequisites we need to handle. One of the biggest ones is that although the support team all have MSDN accounts the local desktops they work on are not ideal for running most of the labs as we want to give them some additional developer background training around Azure. Some recent Azure announcements really help us in this area: MSDN software can now be used on Azure VM You don't pay for Azure VM's when they are no longer used  Since the support team only have limited experience of Windows Azure and the organisation also have an Enterprise Agreement we decided it would be best value for money to spin up a training lab in a subscription on the EA and then we can turn the machines off when we are done. At the same time we would be able to spin them back up when the users need to do some additional lab work once the training course is completed. In order to achieve this I wanted to create a powershell script which would setup my training lab. The aim was to create 18 VM's which would be based on a prebuilt template with Visual Studio and the Azure development tools. The script I used is described below The Start & Variables The below text will setup the powershell environment and some variables which I will use elsewhere in the script. It will also import the Azure Powershell cmdlets. You can see below that I will need to download my publisher settings file and know some details from my Azure account. At this point I will assume you have a basic understanding of Azure & Powershell so already know how to do this. Set-ExecutionPolicy Unrestrictedcls $startTime = get-dateImport-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"# Azure Publisher Settings $azurePublisherSettings = '<Your settings file>.publishsettings'  # Subscription Details $subscriptionName = "<Your subscription name>" $defaultStorageAccount = "<Your default storage account>"  # Affinity Group Details $affinityGroup = '<Your affinity group>' $dataCenter = 'West Europe' # From Get-AzureLocation  # VM Details $baseVMName = 'TRN' $adminUserName = '<Your admin username>' $password = '<Your admin password>' $size = 'Medium' $vmTemplate = '<The name of your VM template image>' $rdpFilePath = '<File path to save RDP files to>' $machineSettingsPath = '<File path to save machine info to>'    Functions In the next section of the script I have some functions which are used to perform certain actions. The first is called CreateVM. 
This will do the following actions: If the VM already exists it will be deleted Create the cloud service Create the VM from the template I have created Add an endpoint so we can RDP to them all over the same port Download the RDP file so there is a short cut the trainees can easily access the machine via Write settings for the machine to a log file  function CreateVM($machineNo) { # Specify a name for the new VM $machineName = "$baseVMName-$machineNo" Write-Host "Creating VM: $machineName"       # Get the Azure VM Image      $myImage = Get-AzureVMImage $vmTemplate   #If the VM already exists delete and re-create it $existingVm = Get-AzureVM -Name $machineName -ServiceName $serviceName if($existingVm -ne $null) { Write-Host "VM already exists so deleting it" Remove-AzureVM -Name $machineName -ServiceName $serviceName }   "Creating Service" $serviceName = "bupa-azure-train-$machineName" Remove-AzureService -Force -ServiceName $serviceName New-AzureService -Location $dataCenter -ServiceName $serviceName   Write-Host "Creating VM: $machineName" New-AzureQuickVM -Windows -name $machineName -ServiceName $serviceName -ImageName $myImage.ImageName -InstanceSize $size -AdminUsername $adminUserName -Password $password  Write-Host "Updating the RDP endpoint for $machineName" Get-AzureVM -name $machineName -ServiceName $serviceName ` | Add-AzureEndpoint -Name RDP -Protocol TCP -LocalPort 3389 -PublicPort 550 ` | Update-AzureVM    Write-Host "Get the RDP File for machine $machineName" $machineRDPFilePath = "$rdpFilePath\$machineName.rdp" Get-AzureRemoteDesktopFile -name $machineName -ServiceName $serviceName -LocalPath "$machineRDPFilePath"   WriteMachineSettings "$machineName" "$serviceName" }    The delete machine settings function is used to delete the log file before we start re-running the process.  function DeleteMachineSettings() { Write-Host "Deleting the machine settings output file" [System.IO.File]::Delete("$machineSettingsPath"); }    The write machine settings function will get the VM and then record its details to the log file. The importance of the log file is that I can easily provide the information for all of the VM's to our infrastructure team to be able to configure access to all of the VM's    function WriteMachineSettings([string]$vmName, [string]$vmServiceName) { Write-Host "Writing to the machine settings output file"   $vm = Get-AzureVM -name $vmName -ServiceName $vmServiceName $vmEndpoint = Get-AzureEndpoint -VM $vm -Name RDP   $sb = new-object System.Text.StringBuilder $sb.Append("Service Name: "); $sb.Append($vm.ServiceName); $sb.Append(", "); $sb.Append("VM: "); $sb.Append($vm.Name); $sb.Append(", "); $sb.Append("RDP Public Port: "); $sb.Append($vmEndpoint.Port); $sb.Append(", "); $sb.Append("Public DNS: "); $sb.Append($vmEndpoint.Vip); $sb.AppendLine(""); [System.IO.File]::AppendAllText($machineSettingsPath, $sb.ToString());  } # end functions    Rest of Script In the rest of the script it is really just the bit that orchestrates the actions we want to happen. 
It will load the publisher settings, select the Azure subscription and then loop around the CreateVM function and create 16 VM's  Import-AzurePublishSettingsFile $azurePublisherSettings Set-AzureSubscription -SubscriptionName $subscriptionName -CurrentStorageAccount $defaultStorageAccount Select-AzureSubscription -SubscriptionName $subscriptionName  DeleteMachineSettings    "Starting creating Bupa International Azure Training Lab" $numberOfVMs = 16  for ($index=1; $index -le $numberOfVMs; $index++) { $vmNo = "$index" CreateVM($vmNo); }    "Finished creating Bupa International Azure Training Lab" # Give it a Minute Start-Sleep -s 60  $endTime = get-date "Script run time " + ($endTime - $startTime)    Conclusion As you can see there is nothing too fancy about this script but in our case of creating a small isolated training lab which is not connected to our corporate network then we can easily use this to provision the lab. Im sure if this is of use to anyone you can easily modify it to do other things with the lab environment too. A couple of points to note are that there are some soft limits in Azure about the number of cores and services your subscription can use. You may need to contact the Azure support team to be able to increase this limit. In terms of the real business value of this approach, it was not possible to use the existing desktops to do the training on, and getting some internal virtual machines would have been relatively expensive and time consuming for our ops team to do. With the Azure option we are able to spin these machines up for a temporary period during the training course and then throw them away when we are done. We expect the costing of this test lab to be very small, especially considering we have EA pricing. As a ball park I think my 18 lab VM training environment will cost in the region of $80 per day on our EA. This is a fraction of the cost of the creation of a single VM on premise.

    Read the article

  • PASS: Election Changes for 2011

    - by Bill Graziano
    Last year after the election, the PASS Board created an Election Review Committee.  This group was charged with reviewing our election procedures and making suggestions to improve the process.  You can read about the formation of the group and review some of the intermediate work on the site – especially in the forums. I was one of the members of the group along with Joe Webb (Chair), Lori Edwards, Brian Kelley, Wendy Pastrick, Andy Warren and Allen White.  This group worked from October to April on our election process.  Along the way we: Interviewed interested parties including former NomCom members, Board candidates and anyone else that came forward. Held a session at the Summit to allow interested parties to discuss the issues Had numerous conference calls and worked through the various topics I can’t thank these people enough for the work they did.  They invested a tremendous number of hours thinking, talking and writing about our elections.  I’m proud to say I was a member of this group and thoroughly enjoyed working with everyone (even if I did finally get tired of all the calls.) The ERC delivered their recommendations to the PASS Board prior to our May Board meeting.  We reviewed those and made a few modifications.  I took their recommendations and rewrote them as procedures while incorporating those changes.  Their original recommendations as well as our final document are posted at the ERC documents page.  Please take a second and read them BEFORE we start the elections.  If you have any questions please post them in the forums on the ERC site. (My final document includes a change log at the end that I decided to leave in.  If you want to know which areas to pay special attention to that’s a good start.) Many of those recommendations were already posted in the forums or in the blogs of individual ERC members.  Hopefully nothing in the ERC document is too surprising. In this post I’m going to walk through some of the key changes and talk about what I remember from both ERC and Board discussions.  I’ll pay a little extra attention to things the Board changed from the ERC.  I’d also encourage any of the Board or ERC members to blog their thoughts on this. The Nominating Committee will continue to exist.  Personally, I was curious to see what the non-Board ERC members would think about the NomCom.  There was broad agreement that a group to vet candidates had value to the organization. The NomCom will be composed of five members.  Two will be Board members and three will be from the membership at large.  The only requirement for the three community members is that you’ve volunteered in some way (and volunteering is defined very broadly).  We expect potential at-large NomCom members to participate in a forum on the PASS site to answer questions from the other PASS members. We’re going to hold an election to determine the three community members.  It will be closer to voting for Summit sessions than voting for Board members.  That means there won’t be multiple dedicated emails.  If you’re at all paying attention it will be easy to participate.  Personally I wanted it easy for those that cared to participate but not overwhelm those that didn’t care.  I think this strikes a good balance. There’s also a clause that in order to be considered a winner in this NomCom election, you must receive 10 votes.  This is something I suggested.  I have no idea how popular the NomCom election is going to be.  
I just wanted a fallback that if no one participated and some random person got in with one or two votes.  Any open slots will be filled by the NomCom chair (usually the PASS Immediate Past President).  My assumption is that they would probably take the next highest vote getters unless they were throwing flames in the forums or clearly unqualified.  As a final check, the Board still approves the final NomCom. The NomCom is going to rank candidates instead of rating them.  This has interesting implications.  This was championed by another ERC member and I’m hoping they write something about it.  This will really force the NomCom to make decisions between candidates.  You can’t just rate everyone a 3 and be done with it.  It may also make candidates appear further apart than they actually are.  I’m looking forward talking with the NomCom after this election and getting their feedback on this. The PASS Board added an option to remove a candidate with a unanimous vote of the NomCom.  This was primarily put in place to handle people that lied on their application or had a criminal background or some other unusual situation and we figured it out. We list an explicit goal of three candidate per open slot. We also wanted an easy way to find the NomCom candidate rankings from the ballot.  Hopefully this will satisfy those that want a broad candidate pool and those that want the NomCom to identify the most qualified candidates. The primary spokesperson for the NomCom is the committee chair.  After the issues around the election last year we didn’t have a good communication plan in place.  We should have and that was a failure on the part of the Board.  If there is criticism of the election this year I hope that falls squarely on the Board.  The community members of the NomCom shouldn’t be fielding complaints over the election process.  That said, the NomCom is ranking candidates and we are forcing them to rank some lower than others.  I’m sure you’ll each find someone that you think should have been ranked differently.  I also want to highlight one other change to the process that we started last year and isn’t included in these documents.  I think the candidate forums on the PASS site were tremendously helpful last year in helping people to find out more about candidates.  That gives our members a way to ask hard questions of the candidates and publicly see their answers. This year we have two important groups to fill.  The first is the NomCom.  We need three people from our membership to step up and fill this role.  It won’t be easy.  You will have to make subjective rankings of your fellow community members.  Your actions will be important in deciding who the future leaders of PASS will be.  There’s a 50/50 chance that one of the people you interview will be the President of PASS someday.  This is not a responsibility to be taken lightly. The second is the slate of candidates.  If you’ve ever thought about running for the Board this is the year.  We’ve never had nine candidates on the ballot before.  Your chance of making it through the NomCom are higher than in any previous year.  Unfortunately the more of you that run, the more of you that will lose in the election.  And hopefully that competition will mean more community involvement and better Board members for PASS. Is this the end of changes to the election process?  It isn’t.  Every year that I’ve been on the Board the election process has changed.  Some years there have been small changes and some years there have been large changes.  
After this election we’ll look at how the process worked and decide what steps to take – just like we do every year.

    Read the article

  • Developing Mobile Applications: Web, Native, or Hybrid?

    - by Michelle Kimihira
    Authors: Joe Huang, Senior Principal Product Manager, Oracle Mobile Application Development Framework  and Carlos Chang, Senior Principal Product Director The proliferation of mobile devices and platforms represents a game-changing technology shift on a number of levels. Companies must decide not only the best strategic use of mobile platforms, but also how to most efficiently implement them. Inevitably, this conversation devolves to the developers, who face the task of developing and supporting mobile applications—not a simple task in light of the number of devices and platforms. Essentially, developers can choose from the following three different application approaches, each with its own set of pros and cons. Native Applications: This refers to apps built for and installed on a specific platform, such as iOS or Android, using a platform-specific software development kit (SDK).  For example, apps for Apple’s iPhone and iPad are designed to run specifically on iOS and are written in Xcode/Objective-C. Android has its own variation of Java, Windows uses C#, and so on.  Native apps written for one platform cannot be deployed on another. Native apps offer fast performance and access to native-device services but require additional resources to develop and maintain each platform, which can be expensive and time consuming. Mobile Web Applications: Unlike native apps, mobile web apps are not installed on the device; rather, they are accessed via a Web browser.  These are server-side applications that render HTML, typically adjusting the design depending on the type of device making the request.  There are no program coding constraints for writing server-side apps—they can be written in Java, C, PHP, etc., it doesn’t matter.  Instead, the server detects what type of mobile browser is pinging the server and adjusts accordingly. For example, it can deliver fully JavaScript and CSS-enabled content to smartphone browsers, while downgrading gracefully to basic HTML for feature phone browsers. Mobile apps work across platforms, but are limited to what you can do through a browser and require Internet connectivity. For certain types of applications, these constraints may not be an issue. Oracle supports mobile web applications via ADF Faces (for tablets) and ADF Mobile browser (Trinidad) for smartphone and feature phones. Hybrid Applications: As the name implies, hybrid apps combine technologies from native and mobile Web apps to gain the benefits each. For example, these apps are installed on a device, like their pure native app counterparts, while the user interface (UI) is based on HTML5.  This UI runs locally within the native container, which usually leverages the device’s browser engine.  The advantage of using HTML5 is a consistent, cross-platform UI that works well on most devices.  Combining this with the native container, which is installed on-device, provides mobile users with access to local device services, such as camera, GPS, and local device storage.  Native apps may offer greater flexibility in integrating with device native services.  However, since hybrid applications already provide device integrations that typical enterprise applications need, this is typically less of an issue.  The new Oracle ADF Mobile release is an HTML5 and Java hybrid framework that targets mobile app development to iOS and Android from one code base. So, Which is the Best Approach? The short answer is – the best choice depends on the type of application you are developing.  
For instance, animation-intensive apps such as games would favor native apps, while hybrid applications may be better suited for enterprise mobile apps because they provide multi-platform support. Just for starters, the following issues must be considered when choosing a development path. Application Complexity: How complex is the application? A quick app that accesses a database or Web service for some data to display?  You can keep it simple, and a mobile Web app may suffice. However, for a mobile/field worker type of applications that supports mission critical functionality, hybrid or native applications are typically needed. Richness of User Interactivity: What type of user experience is required for the application?  Mobile browser-based app that’s optimized for mobile UI may suffice for quick lookup or productivity type of applications.  However, hybrid/native application would typically be required to deliver highly interactive user experiences needed for field-worker type of applications.  For example, interactive BI charts/graphs, maps, voice/email integration, etc.  In the most extreme case like gaming applications, native applications may be necessary to deliver the highly animated and graphically intensive user experience. Performance: What type of performance is required by the application functionality?  For instance, for real-time look up of data over the network, mobile app performance depends on network latency and server infrastructure capabilities.  If consistent performance is required, data would typically need to be cached, which is supported on hybrid or native applications only. Connectivity and Availability: What sort of connectivity will your application require? Does the app require Web access all the time in order to always retrieve the latest data from the server? Or do the requirements dictate offline support? While native and hybrid apps can be built to operate offline, Web mobile apps require Web connectivity. Multi-platform Requirements: The terms “consumerization of IT” and BYOD (bring your own device) effectively mean that the line between the consumer and the enterprise devices have become blurred. Employees are bringing their personal mobile devices to work and are often expecting that they work in the corporate network and access back-office applications.  Even if companies restrict access to the big dogs: (iPad, iPhone, Android phones and tablets, possibly Windows Phone and tablets), trying to support each platform natively will require increasing resources and domain expertise with each new language/platform. And let’s not forget the maintenance costs, involved in upgrading new versions of each platform.   Where multi-platform support is needed, Web mobile or hybrid apps probably have the advantage. Going native, and trying to support multiple operating systems may be cost prohibitive with existing resources and developer skills. Device-Services Access:  If your app needs to access local device services, such as the camera, contacts app, accelerometer, etc., then your choices are limited to native or hybrid applications.   Fragmentation: Apple controls Apple iOS and the only concern is what version iOS is running on any given device.   Not so Android, which is open source. There are many, many versions and variants of Android running on different devices, which can be a nightmare for app developers trying to support different devices running different flavors of Android.  (Is it an Amazon Kindle Fire? a Samsung Galaxy?  A Barnes & Noble Nook?) 
This is a nightmare scenario for native apps—on the other hand, a mobile Web or hybrid app, when properly designed, can shield you from these complexities because they are based on common frameworks.  Resources: How many developers can you dedicate to building and supporting mobile application development?  What are their existing skills sets?  If you’re considering native application development due to the complexity of the application under development, factor the costs of becoming proficient on a each platform’s OS and programming language. Add another platform, and that’s another language, another SDK. On the other side of the equation, Web mobile or hybrid applications are simpler to make, and readily support more platforms, but there may be performance trade-offs. Conclusion This only scratches the surface. However, I hope to have suggested some food for thought in choosing your mobile development strategy.  Do your due diligence, search the Web, read up on mobile, talk to peers, attend events. The development team at Oracle is working hard on mobile technologies to help customers extend enterprise applications to mobile faster and effectively.  To learn more on what Oracle has to offer, check out the Oracle ADF Mobile (hybrid) and ADF Faces/ADF Mobile browser (Web Mobile) solutions from Oracle.   Additional Information Blog: ADF Blog Product Information on OTN: ADF Mobile Product Information on Oracle.com: Oracle Fusion Middleware Follow us on Twitter and Facebook Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management
    by Joe Golemba, Vice President, Product Management, Oracle WebCenter

    As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

    Unstructured data matters

    Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21 CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee’s typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization.

    Putting it in context

    Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search.
    Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee’s day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed up, and reduce the cost of, the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
    The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and improve time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information is increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Using RIA DomainServices with ASP.NET and MVC 2

    - by Bobby Diaz
    Recently, I started working on a new ASP.NET MVC 2 project and I wanted to reuse the data access (LINQ to SQL) and business logic methods (WCF RIA Services) that had been developed for a previous project that used Silverlight for the front-end.  I figured that I would be able to instantiate the various DomainService classes from within my controller’s action methods, because after all, the code for those services didn’t look very complicated.  WRONG!  I didn’t realize at first that some of the functionality is handled automatically by the framework when the domain services are hosted as WCF services.  After some initial searching, I came across an invaluable post by Joe McBride, which described how to get RIA Service .svc files to work in an MVC 2 Web Application, and another by Brad Abrams.  Unfortunately, Brad’s solution was for an earlier preview release of RIA Services and no longer works with the version that I am running (PDC Preview). I have not tried the RC version of WCF RIA Services, so I am not sure if any of the issues I am having have been resolved, but I wanted to come up with a way to reuse the shared libraries so I wouldn’t have to write a non-RIA version that basically did the same thing.  The classes I came up with work with the scenarios I have encountered so far, but I wanted to go ahead and post the code in case someone else is having the same trouble I had.  Hopefully this will save you a few headaches! 1. Querying When I first tried to use a DomainService class to perform a query inside one of my controller’s action methods, I got an error stating that “This DomainService has not been initialized.”  To solve this issue, I created an extension method for all DomainServices that creates the required DomainServiceContext and passes it to the service’s Initialize() method.  Here is the code for the extension method; notice that I am creating a sort of mock HttpContext for those cases when the service is running outside of IIS, such as during unit testing!     public static class ServiceExtensions     {         /// <summary>         /// Initializes the domain service by creating a new <see cref="DomainServiceContext"/>         /// and calling the base DomainService.Initialize(DomainServiceContext) method.         /// </summary>         /// <typeparam name="TService">The type of the service.</typeparam>         /// <param name="service">The service.</param>         /// <returns></returns>         public static TService Initialize<TService>(this TService service)             where TService : DomainService         {             var context = CreateDomainServiceContext();             service.Initialize(context);             return service;         }           private static DomainServiceContext CreateDomainServiceContext()         {             var provider = new ServiceProvider(new HttpContextWrapper(GetHttpContext()));             return new DomainServiceContext(provider, DomainOperationType.Query);         }           private static HttpContext GetHttpContext()         {             var context = HttpContext.Current;   #if DEBUG             // create a mock HttpContext to use during unit testing...             
if ( context == null )             {                 var writer = new StringWriter();                 var request = new SimpleWorkerRequest("/", "/",                     String.Empty, String.Empty, writer);                   context = new HttpContext(request)                 {                     User = new GenericPrincipal(new GenericIdentity("debug"), null)                 };             } #endif               return context;         }     }   With that in place, I can use it almost as normally as my first attempt, except with a call to Initialize():     public ActionResult Index()     {         var service = new NorthwindService().Initialize();         var customers = service.GetCustomers();           return View(customers);     } 2. Insert / Update / Delete Once I got the records showing up, I was trying to insert new records or update existing data when I ran into the next issue.  I say issue because I wasn’t getting any kind of error, which made it a little difficult to track down.  But once I realized that that the DataContext.SubmitChanges() method gets called automatically at the end of each domain service submit operation, I could start working on a way to mimic the behavior of a hosted domain service.  What I came up with, was a base class called LinqToSqlRepository<T> that basically sits between your implementation and the default LinqToSqlDomainService<T> class.     [EnableClientAccess()]     public class NorthwindService : LinqToSqlRepository<NorthwindDataContext>     {         public IQueryable<Customer> GetCustomers()         {             return this.DataContext.Customers;         }           public void InsertCustomer(Customer customer)         {             this.DataContext.Customers.InsertOnSubmit(customer);         }           public void UpdateCustomer(Customer currentCustomer)         {             this.DataContext.Customers.TryAttach(currentCustomer,                 this.ChangeSet.GetOriginal(currentCustomer));         }           public void DeleteCustomer(Customer customer)         {             this.DataContext.Customers.TryAttach(customer);             this.DataContext.Customers.DeleteOnSubmit(customer);         }     } Notice the new base class name (just change LinqToSqlDomainService to LinqToSqlRepository).  I also added a couple of DataContext (for Table<T>) extension methods called TryAttach that will check to see if the supplied entity is already attached before attempting to attach it, which would cause an error! 3. LinqToSqlRepository<T> Below is the code for the LinqToSqlRepository class.  The comments are pretty self explanatory, but be aware of the [IgnoreOperation] attributes on the generic repository methods, which ensures that they will be ignored by the code generator and not available in the Silverlight client application.     /// <summary>     /// Provides generic repository methods on top of the standard     /// <see cref="LinqToSqlDomainService&lt;TContext&gt;"/> functionality.     /// </summary>     /// <typeparam name="TContext">The type of the context.</typeparam>     public abstract class LinqToSqlRepository<TContext> : LinqToSqlDomainService<TContext>         where TContext : System.Data.Linq.DataContext, new()     {         /// <summary>         /// Retrieves an instance of an entity using it's unique identifier.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="keyValues">The key values.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual TEntity GetById<TEntity>(params object[] keyValues) where TEntity : class         {             var table = this.DataContext.GetTable<TEntity>();             var mapping = this.DataContext.Mapping.GetTable(typeof(TEntity));               var keys = mapping.RowType.IdentityMembers                 .Select((m, i) => m.Name + " = @" + i)                 .ToArray();               return table.Where(String.Join(" && ", keys), keyValues).FirstOrDefault();         }           /// <summary>         /// Creates a new query that can be executed to retrieve a collection         /// of entities from the <see cref="DataContext"/>.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <returns></returns>         [IgnoreOperation]         public virtual IQueryable<TEntity> GetEntityQuery<TEntity>() where TEntity : class         {             return this.DataContext.GetTable<TEntity>();         }           /// <summary>         /// Inserts the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Insert<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.InsertOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Insert);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity) where TEntity : class         {             return this.Update(entity, null);         }           /// <summary>         /// Updates the specified entity.         /// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <param name="original">The original.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Update<TEntity>(TEntity entity, TEntity original)             where TEntity : class         {             if ( original == null )             {                 original = GetOriginal(entity);             }               var table = this.DataContext.GetTable<TEntity>();             table.TryAttach(entity, original);               return this.Submit(entity, original, DomainOperation.Update);         }           /// <summary>         /// Deletes the specified entity.         
/// </summary>         /// <typeparam name="TEntity">The type of the entity.</typeparam>         /// <param name="entity">The entity.</param>         /// <returns></returns>         [IgnoreOperation]         public virtual bool Delete<TEntity>(TEntity entity) where TEntity : class         {             //var table = this.DataContext.GetTable<TEntity>();             //table.TryAttach(entity);             //table.DeleteOnSubmit(entity);               return this.Submit(entity, null, DomainOperation.Delete);         }           protected virtual bool Submit(Object entity, Object original, DomainOperation operation)         {             var entry = new ChangeSetEntry(0, entity, original, operation);             var changes = new ChangeSet(new ChangeSetEntry[] { entry });             return base.Submit(changes);         }           private TEntity GetOriginal<TEntity>(TEntity entity) where TEntity : class         {             var context = CreateDataContext();             var table = context.GetTable<TEntity>();             return table.FirstOrDefault(e => e == entity);         }     } 4. Conclusion So there you have it, a fully functional Repository implementation for your RIA Domain Services that can be consumed by your ASP.NET and MVC applications.  I have uploaded the source code along with unit tests and a sample web application that queries the Customers table from inside a Controller, as well as a Silverlight usage example. As always, I welcome any comments or suggestions on the approach I have taken.  If there is enough interest, I plan on contacting Colin Blair or maybe even the man himself, Brad Abrams, to see if this is something worthy of inclusion in the WCF RIA Services Contrib project.  What do you think? Enjoy!
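For reference, here is a minimal sketch of what the TryAttach extension methods mentioned above might look like. This is an illustration only, not the implementation that ships with the downloadable source; it assumes LINQ to SQL's Table<TEntity>.GetOriginalEntityState returns null for an entity the DataContext is not yet tracking, which is how it behaves for untracked instances.

    using System.Data.Linq;

    public static class TableExtensions
    {
        // Attach only if the DataContext is not already tracking this entity.
        public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity)
            where TEntity : class
        {
            if (table.GetOriginalEntityState(entity) == null)
            {
                table.Attach(entity);
            }
        }

        // Attach as modified, using the supplied original values when available.
        public static void TryAttach<TEntity>(this Table<TEntity> table, TEntity entity, TEntity original)
            where TEntity : class
        {
            if (table.GetOriginalEntityState(entity) != null)
            {
                return; // already tracked, nothing to do
            }

            if (original == null)
            {
                // Attaching as modified without original values requires a row version
                // (timestamp) column or UpdateCheck.Never on the mapped members.
                table.Attach(entity, true);
            }
            else
            {
                table.Attach(entity, original);
            }
        }
    }

That constraint on Attach(entity, true) is presumably why the repository's Update method goes out of its way to fetch an original entity when one isn't supplied.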

    Read the article

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year’s Oracle OpenWorld conference is nearly here, and we’re all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the “Product Lifecycle Management and Product Value Chain” sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.  Monday, October 1 There are great networking activities on Sunday, September 30, but PLM-specific sessions start after the general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions. This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled “Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions,” featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries. In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, which will cover our complete enterprise PLM strategy, product updates, and roadmaps. It’s our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components. After a lunch break and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental’s Ballroom C, and an Agile PLM for Process session located back in the InterContinental’s Telegraph Room. Both sessions will have strong content around each product line’s latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D’Agostino of The Coca-Cola Company in the PLM for Process session. Afterwards, hang in there for one last session of the day from 4:45 – 5:45 p.m.; it’s an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it’s sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently. 
On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the 5 day Oracle OpenWorld Music Festival! Tuesday, October 2 Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You’ll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com. After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions. Back in the InterContinental Hotel’s Telegraph room, the session on “Ideation and Requirements Management: Capturing the Voice of the Customer” begins at 11:45 a.m. – 12:45 p.m. This may be the session for you if you’re struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or if you’re unable to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires. Next, from 1:15 – 2:15 p.m. join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-on Lab, have more questions, or simply want to be inspired by the product’s forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application. After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you’re interested - here's the map. There’s always lots to see and do around the exhibit area. But don’t forget the last session of the day from 5:00 p.m. – 6:00 p.m. in Telegraph Hill on Managing Product Innovation and Compliance in Life Science Companies, a “must-see” if you’re in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don’t perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they’ve leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment. Tuesday evening unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.   
Wednesday, October 3 We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room where we have a session on “PLM for Consumer Products: Building an Engine for Quality and Innovation” with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.  If you’re not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: “Eco-Enterprise Innovation Awards and the Business Case for Sustainability” featuring Jeff Henley, Oracle’s Chairman of the Board and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle’s Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices to reduce their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role. Next at 11:45 a.m. – 12:45 p.m. in Telegraph Hill attend our session devoted to exploring Product Lifecycle Management’s role in Software Lifecycle Management. This is a thought leadership session with Oracle experts in the field on the importance of change management, and we’ll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn’t apply to your business or you’d rather engage in some lively one-on-one discussions, we also have a “Supply Chain Meet the Experts” session in Moscone West Room 2001A. Product experts, thought leaders and executives will be on hand to discuss your questions/topics, so come prepared. This session tends to fill up fast so try to get in early. At 1:15 – 2:15 p.m. join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the Product Development Master Schedule to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they’ve leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization. Again in Telegraph Hill from 3:30 – 4:30 p.m. we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey – a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products. Lastly, we’ll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. 
with a session on “Ensuring New Product Success by Achieving Excellence in New Product Introduction.” This is a cross-industry session, guaranteed to deliver insight in the often elusive practice of creating winning products, and we’re very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, “Product Failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions.” We’ll show you how to wire your business processes to enhance decision-making and maximize product potential. Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned – check out what’s in store here. Thursday, October 4 PLM has a light schedule on Thursday this year with just one session, but this again is one of our best sessions on managing the Product Value Chain: at 11:15 a.m – 12:15 p.m.in Telegraph Hill, it’s a customer and partner driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance. Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we’re glad you’re considering joining the Agile PLM team at Oracle OpenWorld!  I hope to see you there! Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

    Read the article

  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea of what can cause this weird behaviour and how I go about fixing it? This is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (that has a pool of 256 connections). There are not that many queries and I think it may be more of a MySQL misconfiguration problem. My server: 3.20 GHz * 6 core / 24 GB RAM / 64 bit Windows Server 2003. My game server: Java server, with 256 MySQL connections pool (MyISAM engine), about 500,000 accounts, and 9 million rows of game items in database and about 3,000 players are connected. After about 15 minutes of the game server reboot, the server resumes its stability and CPU usage drop down to 1% ~ 5% and memory to 6 GB. Here is a copy of my MySQL configuration. Also, any advice about my MySQL configuration will be appreciated. I really set it up almost at random. # Example MySQL config file for very large systems. # # This is for a large system with memory of 1G-2G where the system runs mainly # MySQL. # # You can copy this file to # /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options (in this # installation this directory is C:\mysql\data) or # ~/.my.cnf to set user-specific options. # # In this file, you can use all long options that a program supports. # If you want to know which options a program supports, run the program # with the "--help" option. # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /tmp/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] #log=c:\mysql.log port = 3306 socket = /tmp/mysql.sock skip-locking key_buffer_size = 2572M max_allowed_packet = 64M table_open_cache = 512 sort_buffer_size = 128M read_buffer_size = 128M read_rnd_buffer_size = 128M myisam_sort_buffer_size = 500M thread_cache_size = 32 query_cache_size = 1948M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 12 max_connections = 5000 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # #skip-networking # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 # Replication Slave (comment out master section to use this) # # To configure this host as a replication slave, you can choose between # two methods : # # 1) Use the CHANGE MASTER TO command (fully described in our manual) - # the syntax is: # # CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>, # MASTER_USER=<user>, MASTER_PASSWORD=<password> ; # # where you replace <host>, <user>, <password> by quoted strings and # <port> by the master's port number (3306 by default). # # Example: # # CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306, # MASTER_USER='joe', MASTER_PASSWORD='secret'; # # OR # # 2) Set the variables below. 
However, in case you choose this method, then # start replication for the first time (even unsuccessfully, for example # if you mistyped the password in master-password and the slave fails to # connect), the slave will create a master.info file, and any later # change in this file to the variables' values below will be ignored and # overridden by the content of the master.info file, unless you shutdown # the slave server, delete master.info and restart the slaver server. # For that reason, you may want to leave the lines below untouched # (commented) and instead use CHANGE MASTER TO (see above) # # required unique id between 2 and 2^32 - 1 # (and different from the master) # defaults to 2 if master-host is set # but will not function as a slave if omitted #server-id = 2 # # The replication master for this slave - required #master-host = <hostname> # # The username the slave will use for authentication when connecting # to the master - required #master-user = <username> # # The password the slave will authenticate with when connecting to # the master - required #master-password = <password> # # The port the master is listening on. # optional - defaults to 3306 #master-port = <port> # # binary logging - not required for slaves, but recommended #log-bin=mysql-bin # # binary logging format - mixed recommended #binlog_format=mixed # Point the following paths to different dedicated disks #tmpdir = /tmp/ #log-update = /path-to-dedicated-directory/hostname # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = C:\mysql\data/ #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend #innodb_log_group_home_dir = C:\mysql\data/ # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 384M #innodb_additional_mem_pool_size = 20M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 100M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 [mysqldump] quick max_allowed_packet = 64M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [myisamchk] key_buffer_size = 256M sort_buffer_size = 256M read_buffer = 8M write_buffer = 8M [mysqlhotcopy] interactive-timeout
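    For what it's worth, a rough back-of-the-envelope estimate using the usual rule of thumb (global buffers plus per-connection buffers multiplied by the number of connections) shows why a spike like this is plausible with the settings above: key_buffer_size (2572M) and query_cache_size (1948M) alone account for roughly 4.4 GB of global memory, and each connection may additionally allocate up to sort_buffer_size + read_buffer_size + read_rnd_buffer_size = 384 MB on demand, so a pool of 256 connections could in principle ask for 256 × 384 MB ≈ 96 GB on top of that. These figures are only an illustration (the per-session buffers are allocated lazily and rarely all at once), but per-connection buffers in the hundreds of megabytes combined with hundreds of connections are a common cause of exactly this kind of post-restart thrashing.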

    Read the article

  • Can static methods be called using object/instance in .NET

    The answer is yes and no: yes in C++, Java, and VB.NET; no in C#. This is purely a compiler restriction in C#. You might read on some websites that the restriction can be broken using reflection and delegates, but it can't, according to my little research. I shall try to explain. The following code sample attempts to break the rule using reflection, and at first glance it looks as if a static method really is being called using an object, p1:
using System; namespace T { class Program { static void Main() { var p1 = new Person() { Name = "Smith" }; typeof(Person).GetMethod("TestStatMethod").Invoke(p1, new object[] { }); } class Person { public string Name { get; set; } public static void TestStatMethod() { Console.WriteLine("Hello"); } } } }
However, I do not think this method is being called using p1; it is being called using the type name "Person". I shall try to prove this with another example, where Test2 inherits from Test1. Let's look at a few scenarios.
Scenario 1: using System; namespace T { class Program { static void Main() { Test1 t = new Test1(); typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { }); } } class Test1 { public static void Method1() { Console.WriteLine("At test1::Method1"); } } class Test2 : Test1 { public static void Method1() { Console.WriteLine("At test1::Method2"); } } }
Output: At test1::Method2
Scenario 2: static void Main() { Test2 t = new Test2(); typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { }); }
Output: At test1::Method2
Scenario 3: static void Main() { Test1 t = new Test2(); typeof(Test2).GetMethod("Method1").Invoke(t, new object[] { }); }
Output: At test1::Method2
In all of the above scenarios the output is the same, which means that reflection does not consider the object you pass to the Invoke method when the method is static; it always uses the type you specify in typeof(). So what is the point of passing an instance to Invoke at all? Consider the sample below:
using System; namespace T { class Program { static void Main() { typeof(Test2).GetMethod("Method1").Invoke(null, new object[] { }); } } class Test1 { public static void Method1() { Console.WriteLine("At test1::Method1"); } } class Test2 : Test1 { public static void Method1() { Console.WriteLine("At test1::Method2"); } } }
Output: At test1::Method2
I was able to invoke "Method1" of Test2 without any object at all, which is no surprise, as Method1 is static. So we may conclude that static methods cannot be called using instances (only in C#). Why has Microsoft restricted this in C#? Because there is really no use in calling static methods through objects: static methods are stateless. Still, the latest Java and C++ compilers do allow calling static methods using instances. 
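To see the compiler restriction itself, here is a minimal, purely illustrative snippet (not taken from the article's samples): the instance-qualified call is rejected by the C# compiler with error CS0176, while the type-qualified call compiles and runs.

    using System;

    class Person
    {
        public static void TestStatMethod()
        {
            Console.WriteLine("Hello");
        }
    }

    class Program
    {
        static void Main()
        {
            var p1 = new Person();
            Console.WriteLine(p1.GetType().Name);   // instances are fine for instance members

            // The next line does not compile:
            // error CS0176: Member 'Person.TestStatMethod()' cannot be accessed
            // with an instance reference; qualify it with a type name instead
            //p1.TestStatMethod();

            Person.TestStatMethod();                // the only form C# allows
        }
    }

In Java, by contrast, the equivalent instance-qualified call compiles (at most with a lint warning), as the sample below shows.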
Java sample class Test { public static void main(String str[]) { Person p = new Person(); System.out.println(p.GetCount()); } } class Person { public static int GetCount() { return 100; } } Output 100

    Read the article

  • How I Record Screencasts

    - by Daniel Moth
    I get this asked a lot so here is my brain dump on the topic. What A screencast is just a demo that you present to yourself while recording the screen. As such, my advice for clearing your screen for demo purposes and setting up Visual Studio still applies here (adjusting for the fact I wrote those blog posts when I was running Vista and VS2008, not Windows 8 and VS2012). To see examples of screencasts, watch any of my screencasts on channel9. Why If you are a technical presenter, think of when you get best reactions from a developer audience in your sessions: when you are doing demos, of course. Imagine if you could package those alone and share them with folks to watch over and over? If you have ever gone through a tutorial trying to recreate steps to explore a feature, think how much more helpful it would be if you could watch a video and follow along. Think of how many folks you "touch" with a conference presentation, and how many more you can reach with an online shorter recording of the demo. If you invest so much of your time for the first type of activity, isn't the second type of activity also worth an investment? Fact: If you are able to record a screencast of a demo, you will be much better prepared to deliver it in person. In fact lately I will force myself to make a screencast of any demo I need to present live at an upcoming event. It is also a great backup - if for whatever reason something fails (software, network, etc) during an attempt of a live demo, you can just play the recorded video for the live audience. There are other reasons (e.g. internal sharing of the latest implemented feature) but the context above is the one within which I create most of my screencasts. Software & Hardware I use Camtasia from Tech Smith, version 7.1.1. Microsoft has a variety of options for capturing the screen to video, but I have been using this software for so long now that I have not invested time to explore alternatives… I also use whatever cheapo headset is near me, but sometimes I get some complaints from some folks about the audio so now I try to remember to use "the good headset". I do not use a web camera as I am not a huge fan of PIP. Preparation First you have to know your technology and demo. Once you think you know it, write down the outline and major steps of the demo. Keep it short 5-20 minutes max. I break that rule sometimes but try not to. The longer the video is the more chances that people will not have the patience to sit through it and the larger the download wmv file ends up being. Run your demo a few times, timing yourself each time to ensure that you have the planned timing correct, but also to make sure that you are comfortable with what you are going to demo. Unlike with a live audience, there is no live reaction/feedback to steer you, so it can be a bit unnerving at first. It can also lead you to babble too much, so try extra hard to be succinct when demoing/screencasting on your own. TIP: Before recording, hide your desktop/taskbar clock if it is showing. Recording To record you start the Camtasia Recorder tool Configure the settings thought the menus Capture menu to choose custom size or full screen. I try to use full screen and remember to lower the resolution of your screen to as low as possible, e.g. 1024x768 or 1360x768 or something like that. From the Tools -> Options dialog you can choose to record audio and the volume level. Effects menu I typically leave untouched but you should explore and experiment to your liking, e.g. 
how the mouse pointer is captured, and whether there should be a delay for the recording when you start it. Once you've configured these settings, typically you just launch this tool and hit the F9 key to start recording. TIP: As you record, if you ever start to "lose your way" hit F9 again to pause recording, regroup your thoughts and flow, and then hit F9 again to resume. Finally, hit F10 to stop recording. At that point the video starts playing for you in the recorder. This is where you can preview the video to see that you are happy with it before saving. If you are happy, hit the Save As menu to choose where you want to save the video.     TIP: If you've really lost your way to the extent where you'll need to do some editing, hit F10 to stop recording, save the video and then record some more - you'll be able to stitch the videos together later and this will make it easier for you to delete the parts where you messed up. TIP: Before you commit to recording the whole demo, every time you should record 5 seconds and preview them to ensure that you are capturing the screen the way you want to and that your audio is still correctly configured and at the right level. Trust me, you do not want to be recording 15 minutes only to find out that you messed up on the configuration somewhere. Editing To edit the video you launch another Camtasia app, the Camtasia Studio. File->New Project. File->Save Project and choose location. File->Import Media and choose the video(s) you saved earlier. These adds them to the area at the top/middle but not at the timeline at the bottom. Right click on the video and choose Add to timeline. It will prompt you for the Editing dimensions and I always choose Recording Dimensions. Do whatever edits you want to do for this video, then add the next video if you have one to stitch and repeat. In terms of edits there are many options. The simplest is to do nothing, which is the option I did when I first starting doing these in 2006. Nowadays, I typically cut out pieces that I don't like and also lower/mute the audio in other areas and also speed up the video in some areas. A full tutorial on how to do this is beyond the scope of this blog post, but your starting point is to select portions on the timeline and then open the Edit menu at the very top (tip: the context menu doesn't have all options). You can spend hours editing a recording, so don’t lose track of time! When you are done editing, save again, and you are now ready to Produce. Producing Production is specific to where you will publish. I've only ever published on channel9, so for that I do the following File -> Produce and share. This opens a wizard dialog In the dropdown choose Custom production settings Hit Next and then choose WMV Hit Next and keep the default of Camtasia Studio Best Quality and File Size (recommended) Hit Next and choose Editing dimensions video size Hit Next, hit Options and you get a dialog. Enter a Title for the project tab and then on the author tab enter the Creator and Homepage. Hit OK Hit Next. Hit Next again. Enter a video file name in the Production name textbox and then hit Finish. Now do other stuff while you wait for the video to be produced and you hear it playing. After the video is produced watch it to ensure it was produced correctly (e.g. sometimes you get mouse issues) and then you are ready for publishing it. Publishing Follow the instructions of the place where you are going to publish. 
If you are MSFT internal and want to choose channel9 then contact those folks so they can share their instructions (if you don't know who they are ping me and I'll connect you but they are easy to find in the GAL). For me this involves using a tool to point to the video, choosing a file name (again), choosing an image from the video to display when it is not playing, choosing what output formats I want, and then later on a webpage adding tags, adding a description, and adding a title. That’s all folks, have fun! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • CodePlex Daily Summary for Saturday, June 09, 2012

    CodePlex Daily Summary for Saturday, June 09, 2012Popular ReleasesARCBots API Program: ARCBots API Program v3 STABLE: Stable Release, you can find the update notes in the source code.Microsoft SQL Server Product Samples: Database: AdventureWorks Sample Reports 2008 R2: AdventureWorks Sample Reports 2008 R2.zip contains several reports include Sales Reason Comparisons SQL2008R2.rdl which uses Adventure Works DW 2008R2 as a data source reference. For more information, go to Sales Reason Comparisons report.RULI Chain Code Image Generator: RULI Chain Code BW Image Generator v. 0.4: added features: - 3x3 bitmask support - 7x7 bitmask support - app icon added some refactoring for later library-creationThe Chronicles of Asku: Alpha Test v1.1: Welcome to the Chronicles of Asku alpha test. The current state of the game is 2 tomb floors, level 10 cap, and almost all core systems in the game, excluding a store that you buy armor in. This is just a test for downloading the game, and severe bug hunts... INSTRUCTIONS: When you download the folder, right click within the folder and select EXTRACT ALL Run Setup.exe Follow any on screen instructions DO NOT TRY TO LOAD A CHARACTER IF YOU HAVE NEVER SAVED ONE BEFORE Patch 1.1 -Added a...Json.NET: Json.NET 4.5 Release 7: Fix - Fixed Metro build to pass Windows Application Certification Kit on Windows 8 Release Preview Fix - Fixed Metro build error caused by an anonymous type Fix - Fixed ItemConverter not being used when serializing dictionaries Fix - Fixed an incorrect object being passed to the Error event when serializing dictionaries Fix - Fixed decimal properties not being correctly ignored with DefaultValueHandlingLINQ Extensions Library: 1.0.3.0: New to release 1.0.3.0:Combinatronics: Combinations (unique) Combinations (with repetition) Permutations (unique) Permutations (with repetition) Convert jagged arrays to fixed multidimensional arrays Convert fixed multidimensional arrays to jagged arrays ElementAtMax ElementAtMin ElementAtAverage New set of array extension (1.0.2.8):Rotate Flip Resize (maintaing data) Split Fuse Replace Append and Prepend extensions (1.0.2.7) IndexOf extensions (1.0.2.7) Ne...Audio Pitch & Shift: Audio Pitch And Shift 4.5.0: Added Instruments tab for modules Open folder content feature Some bug fixesPython Tools for Visual Studio: 1.5 Beta 1: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. 
PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing Delete components using keyboard delete key Resize document Document properties window Print document Recent files list Confirm when exiting with unsaved changes Thumbnail previews in Windows Explorer for CDDX files Show shortcut keys in toolbox Highlight selected item in toolbox Zoom using mouse scroll wheel while holding down ctrl key Plugin support for: Custom export formats Custom import formats Open...Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbracov5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshellPackage Builder Starter Kits Dynamic Extension Methods Querying / IsHelpers Friendly alt template URLs Localization Various bug fixes / performance enhancements Gett...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video New features in JayData 1.0.5http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only)This module can be used to bind data retrieved by JayData to Sencha Touch 2 generated user interface. (exam...32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4. See the 3.4 release for more information. Changes due to compiling for .NET 3.5Applications should be changed to use NET/NETCF v3.5. Removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF. We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...DotNetNuke® Links: 06.02.01: Added new DNN 6.2.0 beta social feature "friends" BugfixesApplication Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. 
Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...myManga: myManga v1.0.0.4: ChangeLogUpdating from Previous Version: Extract contents of Release - myManga v1.0.0.4.zip to previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dllMVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibsPreviewExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 -?????????BUG,??????RadioButtonList?,AJAX????????BUG(swtseaman、????)。 +?Grid?BoundField、HyperLinkField、LinkButtonField、WindowField??HtmlEncode?HtmlEncodeFormatString(TiDi)。 -HtmlEncode?HtmlEncodeFormatString??????true,??????HTML????????。 -??????Asp.Net??GridView?BoundField?????????。 -http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode -?Grid?HyperLinkField、WindowField??UrlEncode??,????URL??(???true)。 -?????????????,?????????????...LiveChat Starter Kit: LCSK v1.5.2: New features: Visitor location (City - Country) from geo-location Pass configuration via javascript for the chat box New visitor identification (no more using the IP address as visitor identification) To update from 1.5.1 Run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation The easiest way to add LCSK to your app is by...New ProjectsAdvanceWars: Advance Wars For NetARCBots API Program: This is a simple API program designed for quick use for the ARCBots API.AutoUpdaterdotNET: AutoUpdater.NET is a class library that allows .net developers to easily add auto update functionality to their project.C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite contains a set of tests to aid in verifying compiler, library, and runtime behaviors as specified in the C++ AMP open specification. DynamicObjectProxy: DOP (Dynamic Object Proxy) é uma biblioteca que permite que qualquer método de qualquer objeto possa ser interceptado através de um proxy dinâmico. Ao interceptar um método, pode-se decorar o objeto, alterando ou recuperando informações sobre o seu comportamento. Extremamente útil para logs, verificações de transações, etc. DOP (Dynamic Object Proxy) is a library that contains classes that makes it possible that any method of any object can be intercepted by a dynamic proxy. By interceptin...EAIP: ???????? 
Flatland: Artificial life, or A-Life, is a broad and ever emerging field that has found its applications in almost any field: economics, medicine, traffic planning, shopping habits, patterns in music. And it is evident that work with modelling logical life in artificial environments is a necessity for the future software developer. Abbots novella "Flatland: A Romance of Many Dimensions" describes the two dimensional world "Flatland", where life is geometrical shapes that have human-like emotions, but...GameAudioSystem: Simple Audio Game System written with OpenAL to be used with Ogre3D.HyperionSS2P - Simple scan to pdf solution: Simple scan to pdf solution.Issue Impala: Issue Impala is a powerful and elegant issue tracker.KoekyGL Wrapper: A .net wrapper for OpenGL aimed at making it easy to use OpenGL.Marik Sample Project: This is a sample by KitMongoDB Managment Client: Development MongoDB Web client. HTML 5 jQuery. One page web application. When Windows 8 metro is released then Metro style application To.MongoDB.Dynamic: MongoDB.Dynamic is a personal project that I’ve started when I was developing my first application targeting MongoDB as DBMS, using the “official driver” MongoDB.Driver, supported by 10gen. The objective is to provide a lightweight library with some interesting features that speed up development for desktop/web applications that accesses MongoDB databases. MongoDB.Dynamic is oriented to interfaces. You don’t need to create concrete classes of your entities, all you need to do is setup your...My Google Workspace: An easy way to create documents search at Google and read your emails and Much moreqzgfjava: git????,?android??“??”???SP Sticky Notes: SP Sticky Notes allows your users to add sticky notes to a page on your SharePoint site.Weather3: It's Metro Style AppXNA Scumm: This is a rewrite of the ScummVM engine using XNA. ScummVM is an engine that runs old school LucasArts graphical adventure games. It is written completely in C# and the first version will run on PC and Xbox 360. A Windows phone version will probably follow. Of course, you will need to own the original games in order to use it. I will start my work by The Secret of Monkey Island, more specifically the VGA CD version. Monkey Island 2: LeChuck's Revenge and Indiana Jones 4 and Fate of Atl...YoG Community Game: This project is a game, being developed by several members of the Yogscast Community Forums. It is a top-down shooter, based around the protagonist 'Joe', and his adventures through TV shows every day.

    Read the article

  • unexpected EOF and end of document

    - by WASasquatch
    I have been fiddling with this code for a few days. Mind you I am a beginner. I just want to get my script to be able to download a remote file, and scan MineCraft plugins. I got the scan plugins to work, but I'm having two other issues. One, I can't get the mc_addplugin to work correctly, and I get a Unexpected EOF and unexpected end of document when running any other command besides mc_scanplugins or mc_start bash: -c: line 0: unexpected EOF while looking for matching `"' bash: -c: line 1: syntax error: unexpected end of file Help would be so much appreciated! Thanks in advance. #!/bin/bash # /etc/init.d/craftbukkit # version 0.9.1 2012-07-06 (YYYY-MM-DD) ### BEGIN INIT INFO # Provides: craftbukkit # Required-Start: $local_fs $remote_fs # Required-Stop: $local_fs $remote_fs # Should-Start: $network # Should-Stop: $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Starts craftbukkit server # Description: Starts and controls the craftbukkit server ### END INIT INFO # SETTINGS SERVICE='craftbukkit-1.2.5-R1.0.jar' OPTIONS='nogui' USERNAME='smith' # LIST ALL THE WORLDS IN YOUR CRAFTBUKKIT SERVER FOLDER WORLDS[1]='world' WORLDS[2]='world_nether' WORLDS[3]='world_the_end' WORLDS[4]='flat_world' MCPATH='/var/www/servers/Foundation' PLUGINSPATH='/var/www/servers/Foundation/plugins' TEMPPLUGINS='/var/www/servers/Foundationplugins/temp_plugins' BACKUPPATH='/var/www/servers/Foundation/backup' CPU_COUNT=2 INVOCATION="java -Xmx2024M -Xms2024M -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalPacing -XX:ParallelGCThreads=$CPU_COUNT -XX:+AggressiveOpts -jar $SERVICE $OPTIONS" ME=`whoami` as_user() { if [ $ME == $USERNAME ] ; then bash -c "$1" else su - $USERNAME -c "$1" fi } mc_start() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is already running!" else echo "Starting $SERVICE..." cd $MCPATH as_user "cd $MCPATH && screen -dmS craftbukkit $INVOCATION" sleep 7 if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is now running." else echo "Error! Could not start $SERVICE!" fi fi } mc_saveoff() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... suspending saves" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say The server is preforming a backup. Server going to read-only mode. Do not build...\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-off\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sync sleep 10 else echo "$SERVICE is not running. Not suspending saves." fi } mc_save() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... Saving worlds..." as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sync sleep 10 echo "Save complete!" else echo "$SERVICE is not running. Cannot save!" fi } mc_saveon() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running... re-enabling saves" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-on\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say Server backup has completed. Server going to read-write mode. You can now continue building...\"\015'" else echo "$SERVICE is not running. Not resuming saves." fi } mc_stop() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "Stopping $SERVICE" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"say $SERVERNAME is shutting down in 30 seconds! Please stop what you are doing. 
Check back later, we'll be back!\"\015'" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"save-all\"\015'" sleep 30 as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"stop\"\015'" sleep 7 else echo "$SERVICE was not running." fi if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "Error! $SERVICE could not be stopped." else echo "$SERVICE is stopped." fi } mc_update() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Will not start update." else MC_SERVER_URL=http://dl.bukkit.org/latest-rb/craftbukkit.jar as_user "cd $MCPATH && wget -q -O $MCPATH/craftbukkit_server.jar.update $MC_SERVER_URL" if [ -f $MCPATH/craftbukkit_server.jar.update ] then if `diff $MCPATH/$SERVICE $MCPATH/craftbukkit_server.jar.update >/dev/null` then echo "You are already running the latest version of $SERVICE. Update anyway? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $MCPATH/$SERVICE $MCPATH/${SERVICE}_old.jar" as_user "mv $MCPATH/craftbukkit_server.jar.update $MCPATH/$SERVICE" echo "$SERVICE updated successfully!"; break;; No ) echo "The update was not installed! Removing temporary files and exiting..." as_user "rm $MCPATH/craftbukkit_server.jar.update" exit;; esac done else as_user "mv $MCPATH/$SERVICE $MCPATH/${SERVICE}_old.jar" as_user "mv $MCPATH/craftbukkit_server.jar.update $MCPATH/$SERVICE" echo "$SERVICE updated successfully!" fi else echo "$SERVICE update could not be downloaded." fi fi } mc_addplugin() { if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running! Please stop the service before adding a plugin." else echo "Paste the URL to the .JAR Plugin..." read JARURL JARNAME=$(basename "$JARURL") if [ -d "$TEMPPLUGINS" ] then as_user "cd $PLUGINSPATH && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" else as_user "cd $PLUGINSPATH && mkdir $TEMPPLUGINS && wget -r -A.jar $JARURL -o temp_plugins/$JARNAME" fi if [ -f "$TMPDIR/$JARNAME" ] then if [ -f "$PLUGINSPATH/$JARNAME" ] then if `diff $PLUGINSPATH/$JARNAME $TMPDIR/$JARNAME >/dev/null` then echo "You are already running the latest version of $JARNAME." else NOW=`date "+%Y-%m-%d_%Hh%M"` echo "Are you sure you want to overwrite this plugin? [Y/n]" echo "Note: Your old plugin will be moved to the "$TEMPPLUGINS" folder with todays date." select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Are you sure you want to add this new plugin? [Y/n]" select yn in "Yes" "No"; do case $yn in Yes ) as_user "mv $PLUGINSPATH/$JARNAME $TEMPPLUGINS/${JARNAME}_${NOW} && mv $TEMPPLUGINS/$JARNAME $PLUGINSPATH/$JARNAME"; break;; No ) echo "The plugin has not been installed! Removing temporary plugin and exiting..." as_user "rm $TEMPPLUGINS/$JARNAME"; exit;; esac done echo "Would you like to start the $SERVICE now? [Y/n]?" select yn in "Yes" "No"; do case $yn in Yes ) mc_start; break;; No ) "$SERVICE not running! To start the service run: /etc/init.d/craftbukkit start"; exit;; esac done fi else echo "Failed to download the plugin from the URL you specified!" 
exit; fi fi } mc_scanplugins() { if [ "$(ls -A $PLUGINSPATH)" ] then shopt -s nullglob PLUGINS=($PLUGINSPATH/*.jar) i=1 for f in "${PLUGINS[@]}" do echo "${i}: $f" PLUGIN[$i]=$f i=$(( $i + 1 )) done echo "Enter the ID of a plugin you want removed, or any other key to cancel." read INPUT if [ ! -z "${INPUT##*[!0-9]*}" ] then if [ -f "${PLUGIN[INPUT]}" ] then echo "Removing plugin..." JAR=$(basename ${PLUGIN[INPUT]}) JARNAME=${JAR%.jar} as_user "rm -f ${PLUGIN[INPUT]}" sleep 2 as_user "cd $PLUGINSPATH && rm -rf ./${JARNAME}" if [ -f "${PLUGINSPATH}/${JARNAME}" ] then echo "Plugin folder could not be removed..." fi echo "Plugin removed." else echo "${PLUGIN[INPUT]}" echo "Invalid plugin! Does not exist! Canceling..." exit; fi else echo "Canceling..." exit; fi else echo "You have no plugins installed." exit; fi } mc_backup() { mc_saveoff for i in "${WORLDS[@]}"; do NOW=`date "+%Y-%m-%d_%Hh%M"` BACKUP_FILE="$BACKUPPATH/${i}_${NOW}.tar" echo "Backing up world: $i..." #as_user "cd $MCPATH && cp -r $i $BACKUPPATH/${i}_`date "+%Y.%m.%d_%H.%M""` as_user "tar -C \"$MCPATH\" -cf \"$BACKUP_FILE\" $i" done echo "Backing up $SERVICE" as_user "tar -C \"$MCPATH\" -rf \"$BACKUP_FILE\" $SERVICE" #as_user "cp \"$MCPATH/$SERVICE\" \"$BACKUPPATH/craftbukkit_server_${NOW}.jar\"" mc_saveon echo "Compressing backup..." as_user "tar -cvzf $BACKUPPATH/server_backup_${NOW}.tar.gz $MCPATH" echo "Backup has completed successfully." } mc_command() { command="$1"; if pgrep -u $USERNAME -f $SERVICE > /dev/null then pre_log_len=`wc -l "$MCPATH/server.log" | awk '{print $1}'` echo "$SERVICE is running... executing command" as_user "screen -p 0 -S craftbukkit -X eval 'stuff \"$command\"\015'" sleep .1 # assumes that the command will run and print to the log file in less than .1 seconds # print output tail -n $[`wc -l "$MCPATH/server.log" | awk '{print $1}'`-$pre_log_len] "$MCPATH/server.log" fi } #Start-Stop here case "$1" in start) mc_start ;; stop) mc_stop ;; restart) mc_stop mc_start ;; save) mc_save ;; update) mc_stop mc_backup mc_update mc_start ;; scanplugins) mc_scanplugins ;; addplugin) mc_addplugin ;; backup) mc_backup ;; status) if pgrep -u $USERNAME -f $SERVICE > /dev/null then echo "$SERVICE is running." else echo "$SERVICE is not running." fi ;; command) if [ $# -gt 1 ]; then shift mc_command "$*" else echo "Must specify server command (try 'help'?)" fi ;; *) echo "Usage: $0 {start|stop|update|backup|status|restart|command \"server command\"}" exit 1 ;; esac exit 0
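    One observation, offered as an educated guess rather than a confirmed diagnosis: errors of the form "unexpected EOF while looking for matching `"'" from bash -c usually mean that the string being re-parsed contains an unbalanced quote, and the apostrophes in messages like "we'll be back!" inside the single-quoted screen commands are prime suspects, because the apostrophe ends the single-quoted section early. A stripped-down, hypothetical reproduction (not the original script):

    #!/bin/bash
    as_user() { bash -c "$1"; }

    # Works: no apostrophe inside the single-quoted payload.
    as_user "echo 'stuff \"say we will be back!\"'"

    # Fails: the apostrophe in we'll closes the single-quoted section early,
    # leaving an unbalanced double quote for bash -c to trip over:
    #   bash: -c: line 0: unexpected EOF while looking for matching `"'
    as_user "echo 'stuff \"say we'll be back!\"'"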

    Read the article

  • CodePlex Daily Summary for Wednesday, June 05, 2013

    CodePlex Daily Summary for Wednesday, June 05, 2013Popular ReleasesQlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0 including Source Code qar File Example QlikView application Tested With: Browser Firefox 20 (x64) Google Chrome 27 (x64) Internet Explorer 9 QlikView QlikView Desktop 11 - SR2 (x64) QlikView Desktop 11.2 - SR1 (x64) QlikView Ajax Client 11.2 - SR2 (based on x64)BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior, HTTP packet format has been changed. Check Version History for more information about this release.Web Pages CMS: 0.5: First public releaseHarvester - Debug Viewer for Trace, NLog & Log4Net: v2.0.1 (.NET 4.0): Minor Updates Fixed incorrect process naming being displayed if process ID reassigned before cache invalidated. Fixed incorrect event type/source for TraceListener.TraceData methods. Updated NLog package references. Official Documentation Moved to GitHub http://cbaxter.github.com/Harvester Official Source Moved to GitHub https://github.com/cbaxter/HarvesterSuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes below: Upgrade SuperSocket to 1.5.3 which is much more stable Added handshake request validating api (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)) Fixed a bug that the m_Filters in the SubCommandBase can be null if the command's method LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) is not invoked Fixed the compatibility issue on Origin getting in the different version protocols Marked ISub...Impulse Media Player: Impulse Media Player 3.3.1.0: EchoNest Analyzer introduced Similar track feature (social tab) Last played tracks can be removed permanently pitch / tempo bar can be hiddenBlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (1) ?????????????????????????????????($Remote.ini Tmp.ini) (2) ThreadBaseTest?? (3) ????POP3??????SMTP???????????????? (4) Web???????、?????????URL??????????????? (5) Ftp???????、LIST?????????????? (6) ?????????????????????Media Companion: Media Companion MC3.569b: New* Movies - Autoscrape/Batch Rescrape extra fanart and or extra thumbs. * Movies - Alternative editor can add manually actors. * TV - Batch Rescraper, AutoScrape extrafanart, if option enabled. Fixed* Movies - Slow performance switching to movie tab by adding option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General. * Movies - Fixed only actors with images were scraped and added to nfo * Movies - Fixed filter reset if selected tab was above Home Movies. * Updated Medi...Nearforums - ASP.NET MVC forum engine: Nearforums v9.0: Version 9.0 of Nearforums with great new features for users and developers: SQL Azure support Admin UI for Forum Categories Avoid html validation for certain roles Improve profile picture moderation and support Warn, suspend, and ban users Web administration of site settings Extensions support Visit the Roadmap for more details. Webdeploy package sha1 checksum: 9.0.0.0: e687ee0438cd2b1df1d3e95ecb9d66e7c538293b eReading: eReading: ????,??CPU???????。 ??????????。Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.93: Added -esc:BOOL switch (CodeSettings.AlwaysEscapeNonAscii property) to always force non-ASCII character (ch > 0x7f) to be escaped as the JavaScript \uXXXX sequence. This switch should be used if creating a Symbol Map and outputting the result to the a text encoding other than UTF-8 or UTF-16 (ASCII, for instance). 
Fixed a bug where a complex comma operation is the operand of a return statement, and it was looking at the wrong variable for possible optimization of = to just .VG-Ripper & PG-Ripper: VG-Ripper 2.9.42: changes NEW: Added Support for "GatASexyCity.com" links NEW: Added Support for "ImgCloud.co" links NEW: Added Support for "ImGirl.info" links NEW: Added Support for "SexyImg.com" links FIXED: "ImageBam.com" linksDocument.Editor: 2013.22: What's new for Document.Editor 2013.22: Improved Bullet List support Improved Number List support Minor Bug Fix's, improvements and speed upsCarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130528): Features include a content management system and a robust featured blogging engine. This includes configurable date based blog post URLs, blog post content association with categories and tags, assignment/customization of category and tag URL patterns, simple blog post feedback collection and review, blog post pagination/indexes, designation of default blog page (required to make search, category links, or tag links function), URL date formatting patterns, RSS feed support for posts and pages...PHPExcel: PHPExcel 1.7.9: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated.Droid Explorer: Droid Explorer 0.8.8.10 Beta: Fixed issue with some people having a folder called "android-4.2.2" in their build-tools path. - 16223 patterns & practices: Data Access Guidance: Data Access Guidance Drop3 2013.05.31: Drop 3DotNet.Highcharts: DotNet.Highcharts 2.0 with Examples: DotNet.Highcharts 2.0 Tested and adapted to the latest version of Highcharts 3.0.1 Added new chart types: Arearange, Areasplinerange, Columnrange, Gauge, Boxplot, Waterfall, Funnel and Bubble Added new type PercentageOrPixel which represents value of number or number with percentage. Used for sizes, width, height, length, etc. Removed inheritances in YAxis option classes. Closed issues: 682: Missing property - XAxisPlotLinesLabel.Text 688: backgroundColor and plotBackgroundColor are...Umbraco CMS: Umbraco 6.1.1: Source codeLooking for the source code? We're not uploading that as a zip file any more because you can already get it from CodePlex, click this link and hit the "Download" link. BlogRead the release blog post for 6.1.0. Read the release blog post for 6.1.1. Getting Started Read the installation documentation: http://our.umbraco.org/documentation/Installation/ Check the free foundation videos on how to get started building Umbraco sites. They're available from: Introduction for webmasters:...Composite C1 CMS - Open Source on .NET: Composite C1 4.0 (release candidate): Composite C1 4.0 (4.0.4897.31550) (release candidate) Write a review for this release Getting started If you are new to Composite C1 and want to install it: http://docs.composite.net/Getting-started What's new in Composite C1 4.0 The following are highlights of major changes since Composite C1 3.2: General user features: Uploads up to 512MB accepted in the media archive New “Block Selector” in Visual Editor – enable users to create styled div, blockquote etc. 
elements (not yet availabl...New ProjectsApiDoc: ApiDoc is a library for creating your own API documentation similar to the MSDN directly from your assembly and /// Xml comments without source code.Associativy Internal Link Graph Builder: Orchard module for automatically creating Associativy graphs (http://associativy.com/) from internal links.Azure Business App Scale Proof of Concepts: This is actually a series of proof of concept demo applications built to demonstrate scale of particular architecture or application designs. Badr: .Net Web Framework: Simple, Database-driven, Multiplatform, .Net web frameworkCalcolo di Integrali con approssimazioni: Integrali Metodo dei Trapezi Metodo dei Rettangoli Metodo delle Parabole Metodo di Montecarlo Integralsconfiguration: a full function configuration system based on .netCron Expression Descriptor: "Translate" a Cron Expression in a human readable format. Support databinding, and creation of the expression and Quartz.NET jobs schedulerDaphne Web Edition: The Daphne Web Edition of software for professional checkers players running in a browser.Entity framework T4 NHibernate mapping generator: This project contains T4 templates to create POCOs + NHibernate mappings from an Entity Framework Model (.edmx).EWS Streaming Notification Sample Application: Sample application showing how to handle multiple subscriptions using streaming notifications, specifically for Exchange 2013 (or Wave 15 of Office 365).EXACT_EXTENSION: This is project to support Account System in International SchoolsIPS Training: Playing around with Joe to teach some programming.jean0604wordpressmercurial: dfdafdaLenic.DI: Lenic.DI -- Another IOC Container Library Using DelegateManage Azure Srevice: This project is a windows form application to manage azure service and deployments.Microsoft dot Net Lab: This project concentrates into a Lab a large web project based on Microsoft Best Practices and info on “what works” and more what should be avoided in Prod env.Miris Human Milk Analyser: This is a DEMO project ONLYMultilingual call each other: Multilingual call each othernBlade: A Dependency Injection Container.nChart: A JavaScript Chart Library Base on D3.jsnCMS: A Content Management System.nReport: A JavaScript Report Library.nTemplate: A Template Engine.searchLocal: search localT4 Unit Test Constructor: T4 Unit Test Constructor is a Text Transform file that generates complete Unit Testing project based on siblings projects inside a solution.v3r137: m0l3cUL4r dyN4MiC2 5iMul47i0N u5IN9 V3Rl37 iN739r47I0N. vi5U4li23 p4R7ICL32 8y 0P3n9l P0iN7 5prI73. U23 0P3NMp 4nd 0p3NCl f0r 5p33D.Virtual Sport for Sport Team: This is the website to manage a sports team. Through this website you can manage the members of the team, from players to staff, schedules, and more. and for suWindows Azure MultiSite Role: This web role allows you to host multiple websites on the same VM instances, and synchronising the files automatically.WuvOverlay: An overlay for the game Guild Wars 2 to display useful information obtained from the public API.XYZ: XYZ

    Read the article

  • WPF TabControl - how to preserve control state within tab items (MVVM pattern)

    - by Tim Coulter
    I am a newcomer to WPF, attempting to build a project that follows the recommendations of Josh Smith's excellent article describing The Model-View-ViewModel Design Pattern. Using Josh's sample code as a base, I have created a simple application that contains a number of "workspaces", each represented by a tab in a TabControl. In my application, a workspace is a document editor that allows a hierarchical document to be manipulated via a TreeView control.

    Although I have succeeded in opening multiple workspaces and viewing their document content in the bound TreeView control, I find that the TreeView "forgets" its state when switching between tabs. For example, if the TreeView in Tab1 is partially expanded, it will be shown as fully collapsed after switching to Tab2 and returning to Tab1. This behaviour appears to apply to all aspects of control state for all controls.

    After some experimentation, I have realized that I can preserve state within a TabItem by explicitly binding each control state property to a dedicated property on the underlying ViewModel. However, this seems like a lot of additional work, when I simply want all my controls to remember their state when switching between workspaces. I assume I am missing something simple, but I am not sure where to look for the answer. Any guidance would be much appreciated. Thanks, Tim

    Update: As requested, I will attempt to post some code that demonstrates this problem. However, since the data that underlies the TreeView is complex, I will post a simplified example that exhibits the same symptoms. Here is the XAML from the main window:

        <TabControl IsSynchronizedWithCurrentItem="True" ItemsSource="{Binding Path=Docs}">
          <TabControl.ItemTemplate>
            <DataTemplate>
              <ContentPresenter Content="{Binding Path=Name}" />
            </DataTemplate>
          </TabControl.ItemTemplate>
          <TabControl.ContentTemplate>
            <DataTemplate>
              <view:DocumentView />
            </DataTemplate>
          </TabControl.ContentTemplate>
        </TabControl>

    The above XAML correctly binds to an ObservableCollection of DocumentViewModel, whereby each member is presented via a DocumentView. For the simplicity of this example, I have removed the TreeView (mentioned above) from the DocumentView and replaced it with a TabControl containing 3 fixed tabs:

        <TabControl>
          <TabItem Header="A" />
          <TabItem Header="B" />
          <TabItem Header="C" />
        </TabControl>

    In this scenario, there is no binding between the DocumentView and the DocumentViewModel. When the code is run, the inner TabControl is unable to remember its selection when the outer TabControl is switched. However, if I explicitly bind the inner TabControl's SelectedIndex property ...

        <TabControl SelectedIndex="{Binding Path=SelectedDocumentIndex}">
          <TabItem Header="A" />
          <TabItem Header="B" />
          <TabItem Header="C" />
        </TabControl>

    ... to a corresponding dummy property on the DocumentViewModel ...

        public int SelectedDocumentIndex { get; set; }

    ... the inner tab is able to remember its selection. I understand that I can effectively solve my problem by applying this technique to every visual property of every control, but I am hoping there is a more elegant solution.
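
    A minimal sketch of the workaround described above - keeping the piece of view state on the ViewModel so it survives the tab switch - might look like the code below. This is an illustration only: the class and member names are assumptions, not taken from the original project. The underlying reason for the behaviour is that the tab content is regenerated from the DataTemplate each time the outer selection changes, so state held only in the visual tree is discarded.

        using System.ComponentModel;

        // Illustrative sketch (not from the original project): the ViewModel owns the
        // selected tab index, so it survives when the DataTemplate's visuals are rebuilt.
        public class DocumentViewModel : INotifyPropertyChanged
        {
            private int _selectedDocumentIndex;

            // Bound from XAML: SelectedIndex="{Binding SelectedDocumentIndex, Mode=TwoWay}"
            public int SelectedDocumentIndex
            {
                get { return _selectedDocumentIndex; }
                set
                {
                    if (_selectedDocumentIndex == value) return;
                    _selectedDocumentIndex = value;
                    OnPropertyChanged("SelectedDocumentIndex");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            protected void OnPropertyChanged(string propertyName)
            {
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(propertyName));
            }
        }

    The same pattern extends to other state (for example an IsExpanded flag per tree node, bound via an ItemContainerStyle), at the cost of one ViewModel property per piece of state - which is exactly the overhead the question is trying to avoid.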

    Read the article

  • How to read from path in wpf comboboxitem and write into path of binding

    - by Chrik
    Hi, I tried to make up an example to show my problem. My combobox has a list of objects as its ItemsSource. In my example it's a list of Persons. In the combobox I want to show the first name and the last name of the person, but I want to save the last name of the person in the "Owner" property of the house object. My guess was that I bind the SelectedValue to my property and the SelectedValuePath to the name of the property in the combobox item. I already googled and tried a few other versions but nothing worked. If I use SelectedItem instead of SelectedValue with the same binding, at least the value of the "ToString" function gets written into the property. Sadly that solution doesn't fit in the rest of my program because I don't want to override "ToString".

    The XAML:

        <Window x:Class="MultiColumnCombobox.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Window1" Height="300" Width="300" x:Name="window">
          <Grid>
            <ComboBox Height="23" Margin="72,12,86,0" Name="ComboBox1" VerticalAlignment="Top"
                SelectedValue="{Binding CurrentHouse.Owner, ElementName=window, Mode=TwoWay}"
                SelectedValuePath="LastName"
                ItemsSource="{Binding PersonList, ElementName=window, Mode=Default}">
              <ComboBox.ItemTemplate>
                <DataTemplate>
                  <WrapPanel Orientation="Horizontal">
                    <TextBlock Text="{Binding Path=FirstName}" Padding="10,0,0,0" />
                    <TextBlock Text="{Binding Path=LastName}" Padding="10,0,0,0" />
                  </WrapPanel>
                </DataTemplate>
              </ComboBox.ItemTemplate>
            </ComboBox>
            <Button Height="23" Click="PrintButton_Click" HorizontalAlignment="Left" Margin="12,0,0,9"
                Name="PrintButton" VerticalAlignment="Bottom" Width="75">Print</Button>
          </Grid>
        </Window>

    The C#:

        using System.Collections.Generic;
        using System.Windows;
        using System;

        namespace MultiColumnCombobox
        {
            public partial class Window1 : Window
            {
                private List<Person> _PersonList = new List<Person>();
                public List<Person> PersonList
                {
                    get { return _PersonList; }
                    set { _PersonList = value; }
                }

                private House _CurrentHouse = new House { Owner = "Green", Number = "11" };
                public House CurrentHouse
                {
                    get { return _CurrentHouse; }
                }

                public Window1()
                {
                    InitializeComponent();
                    PersonList.Add(new Person { FirstName = "Peter", LastName = "Smith" });
                    PersonList.Add(new Person { FirstName = "John", LastName = "Meyer" });
                    PersonList.Add(new Person { FirstName = "Fritz", LastName = "Green" });
                }

                private void PrintButton_Click(object sender, RoutedEventArgs e)
                {
                    MessageBox.Show(CurrentHouse.Owner + ":" + CurrentHouse.Number);
                }
            }

            public class House
            {
                public string Owner { get; set; }
                public string Number { get; set; }
            }

            public class Person
            {
                public string FirstName { get; set; }
                public string LastName { get; set; }
            }
        }

    Maybe someone has an idea, Christian
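
    One commonly suggested alternative, sketched here only as an illustration (the SelectedPerson property is hypothetical and not part of the question's code), is to bind SelectedItem to a Person-typed property on the window and copy the last name across yourself, which avoids both SelectedValuePath and overriding ToString:

        // In Window1, with the ComboBox bound as:
        //   SelectedItem="{Binding SelectedPerson, ElementName=window, Mode=TwoWay}"
        private Person _selectedPerson;

        public Person SelectedPerson
        {
            get { return _selectedPerson; }
            set
            {
                _selectedPerson = value;
                if (value != null)
                {
                    // Store only the piece of data the house actually needs.
                    CurrentHouse.Owner = value.LastName;
                }
            }
        }

    Because the ItemTemplate already controls what is displayed, ToString never enters into it; the setter simply runs whenever the selection changes.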

    Read the article

  • RSpec test failing looking for a new set of eyes

    - by TheDelChop
    Guys, Here my issuse: I've got two models: class User < ActiveRecord::Base # Setup accessible (or protected) attributes for your model attr_accessible :email, :username has_many :tasks end class Task < ActiveRecord::Base belongs_to :user end with this simple routes.rb file TestProj::Application.routes.draw do |map| resources :users do resources :tasks end end this schema: ActiveRecord::Schema.define(:version => 20100525021007) do create_table "tasks", :force => true do |t| t.string "name" t.integer "estimated_time" t.datetime "created_at" t.datetime "updated_at" t.integer "user_id" end create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "password_confirmation" t.datetime "created_at" t.datetime "updated_at" t.string "username" end add_index "users", ["email"], :name => "index_users_on_email", :unique => true add_index "users", ["username"], :name => "index_users_on_username", :unique => true end and this controller for my tasks: class TasksController < ApplicationController before_filter :load_user def new @task = @user.tasks.new end private def load_user @user = User.find(params[:user_id]) end end Finally here is my test: require 'spec_helper' describe TasksController do before(:each) do @user = Factory(:user) @task = Factory(:task) end #GET New describe "GET New" do before(:each) do User.stub!(:find).with(@user.id.to_s).and_return(@user) @user.stub_chain(:tasks, :new).and_return(@task) end it "should return a new Task" do @user.tasks.should_receive(:new).and_return(@task) get :new, :user_id => @user.id end end end This test fails with the following output: 1) TasksController GET New should return a new Task Failure/Error: get :new, :user_id => @user.id undefined method `abstract_class?' for Object:Class # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:1234:in `class_of_active_record_descendant' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:900:in `base_class' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:655:in `reset_table_name' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:647:in `table_name' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:932:in `arel_table' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:927:in `unscoped' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/named_scope.rb:30:in `scoped' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activerecord/lib/active_record/base.rb:405:in `find' # ./app/controllers/tasks_controller.rb:15:in `load_user' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:431:in `_run__1954900289__process_action__943997142__callbacks' # 
/home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:405:in `send' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:405:in `_run_process_action_callbacks' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:88:in `send' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/callbacks.rb:88:in `run_callbacks' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/callbacks.rb:17:in `process_action' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/metal/rescue.rb:8:in `process_action' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/base.rb:113:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/abstract_controller/rendering.rb:39:in `sass_old_process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/gems/haml-3.0.0.beta.3/lib/sass/plugin/rails.rb:26:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/metal/testing.rb:12:in `process_with_new_base_test' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/test_case.rb:390:in `process' # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/actionpack/lib/action_controller/test_case.rb:328:in `get' # ./spec/controllers/tasks_controller_spec.rb:20 # /home/chopper/.rvm/gems/ruby-1.8.7-p249@rails3/bundler/gems/rails-16a5e918a06649ffac24fd5873b875daf66212ad-master/activesupport/lib/active_support/dependencies.rb:209:in `inject' Can anybody help me understand what's going on here? It seems to be an RSpec problem since the controller action actually works, but I could be wrong. Thanks, Joe

    Read the article

  • Mule xml to soap problem

    - by kevfuzz
    Hi, I'm not sure if there are many Mule users on here but I'm hoping someone might be able to help me! I'm having a problem calling a webservice from Mule using Axis. I've created a fairly simple example where I have xml in a file being read by Mule; it's then transformed into a Document and sent to the webservice. The relevant code in the mule config looks like this:

        <inbound>
            <file:inbound-endpoint path="./files/initial" transformer-refs="FileToString xmlToDom" connector-ref="fileConnector" />
        </inbound>
        <outbound>
            <pass-through-router>
                <axis:outbound-endpoint address="http://localhost:8081/holidayService?method=echoXXXX" synchronous="true" style="DOCUMENT" use="LITERAL" />
            </pass-through-router>
        </outbound>

    However the call to the webservice fails, as the above config is generating a SOAP message with a <value0> tag just after the <soapenv:Body> tag, closed again just before the </soapenv:Body> tag. The generated SOAP message looks like this:

        POST /holidayService?method=echoXXXX HTTP/1.1
        Content-Type: text/xml
        X-MULE_ENDPOINT: http://localhost:8081/holidayService?method=echoXXXX
        SOAPAction: http://localhost:8081/holidayService?method=echoXXXX
        directory: D:\bea\weblogic92\samples\domains\wl_server\files\processed
        filename: HolidayRequest.xml
        method: echoXXXX
        originalFilename: HolidayRequest.xml
        style: document
        use: literal
        User-Agent: Jakarta Commons-HttpClient/3.1
        Host: 127.0.0.1:8081
        Content-Length: 1183

        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <soapenv:Header>
                <mule:header soapenv:actor="http://www.muleumo.org/providers/soap/1.0" soapenv:mustUnderstand="0" xmlns:mule="http://www.muleumo.org/providers/soap/1.0">
                    <mule:MULE_CORRELATION_ID>D:\bea\weblogic92\samples\domains\wl_server\files\processed\HolidayRequest.xml</mule:MULE_CORRELATION_ID>
                    <mule:MULE_CORRELATION_GROUP_SIZE>-1</mule:MULE_CORRELATION_GROUP_SIZE>
                    <mule:MULE_CORRELATION_SEQUENCE>-1</mule:MULE_CORRELATION_SEQUENCE>
                </mule:header>
            </soapenv:Header>
            <soapenv:Body>
                <value0 xsi:type="ns1:DocumentImpl" xmlns="" xmlns:ns1="http://dom.internal.xerces.apache.org.sun.com">
                    <sch:HolidayRequest xmlns:sch="http://mycompany.com/hr/schemas">
                        <sch:Holiday>
                            <sch:StartDate>2009-08-13</sch:StartDate>
                            <sch:EndDate>1988-12-12</sch:EndDate>
                        </sch:Holiday>
                        <sch:Employee>
                            <sch:Number>3434</sch:Number>
                            <sch:FirstName>John</sch:FirstName>
                            <sch:LastName>Smith</sch:LastName>
                        </sch:Employee>
                    </sch:HolidayRequest>
                </value0>
            </soapenv:Body>
        </soapenv:Envelope>

    The webservice works fine in SOAPUI without the <value0> tag, and from what I've read on the Mule website I don't know why it's being inserted. Any help would be much appreciated, Kevin.

    Read the article

  • RegEx to ignore / skip everything in html tags

    - by Scott Sumpter
    Looking for a way to combine two Regular Expressions: one to catch the URLs and the other to ensure it skips text within HTML tags. See the sample text below the functions. I need to pass a block of news text and format it by wrapping URLs and email addresses in HTML tags so users don't have to. The code below works great until there are already HTML tags within the text; in that case it doubles the HTML tags. There are plenty of examples to strip HTML, but I want to just ignore it since the URL is already linkified. Also - if there is an easier way to accomplish this, with or without Regex, please let me know. None of my attempts to combine Regexes have worked. I am coding in ASP.NET VB but will take any workable example/direction. Thanks!

    ===== Functions =============

        Public Shared Function InsertHyperlinks(ByVal inText As String) As String
            Dim strBuf As String
            Dim objMatches As Object
            Dim iStart, iEnd As Integer
            strBuf = ""
            iStart = 1
            iEnd = 1
            Dim strRegUrlEmail As String = "\b(www|http|\S+@)\S+\b" 'RegEx to find urls and email addresses
            Dim objRegExp As New Regex(strRegUrlEmail, RegexOptions.IgnoreCase)
            'Match URLs and emails
            Dim MatchList As MatchCollection = objRegExp.Matches(inText)
            If MatchList.Count <> 0 Then
                objMatches = objRegExp.Matches(inText)
                For Each Match In MatchList
                    iEnd = Match.Index
                    strBuf = strBuf & Mid(inText, iStart, iEnd - iStart + 1)
                    If InStr(1, Match.Value, "@") Then
                        strBuf = strBuf & HrefGet(Match.Value, "EMAIL", "_BLANK")
                    Else
                        strBuf = strBuf & HrefGet(Match.Value, "WEB", "_BLANK")
                    End If
                    iStart = iEnd + Match.Length + 1
                Next
                strBuf = strBuf & Mid(inText, iStart)
                InsertHyperlinks = strBuf
            Else
                'No hyperlinks to replace
                InsertHyperlinks = inText
            End If
        End Function

        Shared Function HrefGet(ByVal url As String, ByVal urlType As String, ByVal Target As String) As String
            Dim strBuf As String
            strBuf = "<a href="""
            If UCase(urlType) = "WEB" Then
                If LCase(Left(url, 3)) = "www" Then
                    strBuf = "<a href=""http://" & url & """ Target=""" & _
                        Target & """>" & url & "</a>"
                Else
                    strBuf = "<a href=""" & url & """ Target=""" & _
                        Target & """>" & url & "</a>"
                End If
            ElseIf UCase(urlType) = "EMAIL" Then
                strBuf = "<a href=""mailto:" & url & """ Target=""" & _
                    Target & """>" & url & "</a>"
            End If
            HrefGet = strBuf
        End Function

    ===== Sample Text =============

    This would be the inText parameter. Midway through the ride, we see a Skip this too. But sometimes we go here [insert normal www dot link dot com]. If you'd like to join us contact Bill Smith at [email protected]. Thanks! sorry stack overflow won't allow multiple hyperlinks to be added.

    ===== End Sample Text =============
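
    One way to fold the "skip what is already linked" requirement into the pattern itself is to let a single expression match either an existing anchor element or a bare URL/email, and decide in a MatchEvaluator which kind was hit. The C# sketch below is an illustration only - the pattern and class name are assumptions, not the poster's code - but it shows the shape of the approach:

        using System;
        using System.Text.RegularExpressions;

        static class Linkifier
        {
            // First alternative: an existing <a>...</a> element (returned untouched).
            // Second alternative: a bare URL or email address (gets wrapped).
            private static readonly Regex UrlOrAnchor = new Regex(
                @"(?<anchor><a\b[^>]*>.*?</a>)|(?<link>\b(?:https?://|www\.|\S+@)\S+\b)",
                RegexOptions.IgnoreCase | RegexOptions.Singleline);

            public static string InsertHyperlinks(string inText)
            {
                return UrlOrAnchor.Replace(inText, m =>
                {
                    if (m.Groups["anchor"].Success)
                        return m.Value;                    // already linkified - leave it alone

                    string link = m.Groups["link"].Value;
                    if (link.Contains("@"))
                        return "<a href=\"mailto:" + link + "\">" + link + "</a>";

                    string href = link.StartsWith("www", StringComparison.OrdinalIgnoreCase)
                        ? "http://" + link
                        : link;
                    return "<a href=\"" + href + "\" target=\"_blank\">" + link + "</a>";
                });
            }
        }

    Because the anchor alternative is tried first as the scanner reaches each "<a", any URL inside an existing link is consumed along with the anchor and never re-wrapped.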

    Read the article

  • Dynamically specify the type in C#

    - by Lirik
    I'm creating a custom DataSet and I'm under some constraints:

    - I want the user to specify the type of the data which they want to store.
    - I want to reduce type-casting because I think it will be VERY expensive.
    - I will use the data VERY frequently in my application.

    I don't know what type of data will be stored in the DataSet, so my initial idea was to make it a List of objects, but I suspect that the frequent use of the data and the need to type-cast will be very expensive. The basic idea is this:

        class DataSet : IDataSet
        {
            private Dictionary<string, List<Object>> _data;

            /// <summary>
            /// Constructs the data set given the user-specified labels.
            /// </summary>
            /// <param name="labels">
            /// The labels of each column in the data set.
            /// </param>
            public DataSet(List<string> labels)
            {
                _data = new Dictionary<string, List<object>>();
                foreach (string label in labels)
                {
                    _data.Add(label, new List<object>());
                }
            }

            #region IDataSet Members

            public List<string> DataLabels
            {
                get { return _data.Keys.ToList(); }
            }

            public int Count
            {
                get { return _data[_data.Keys.First()].Count; }
            }

            public List<object> GetValues(string label)
            {
                return _data[label];
            }

            public object GetValue(string label, int index)
            {
                return _data[label][index];
            }

            public void InsertValue(string label, object value)
            {
                _data[label].Insert(0, value);
            }

            public void AddValue(string label, object value)
            {
                _data[label].Add(value);
            }

            #endregion
        }

    A concrete example where the DataSet will be used is to store data obtained from a CSV file where the first column contains the labels. When the data is being loaded from the CSV file I'd like to specify the type rather than casting to object. The data could contain columns such as dates, numbers, strings, etc. Here is what it could look like:

        "Date","Song","Rating","AvgRating","User"
        "02/03/2010","Code Monkey",4.6,4.1,"joe"
        "05/27/2009","Code Monkey",1.2,4.5,"jill"

    The data will be used in a Machine Learning/Artificial Intelligence algorithm, so it is essential that I make the reading of data very fast. I want to eliminate type-casting as much as possible, since I can't afford to cast from 'object' to whatever data type is needed on every read. I've seen applications that allow the user to pick the specific data type for each item in the csv file, so I'm trying to make a similar solution where a different type can be specified for each column. I want to create a generic solution so I don't have to return a List<object> but a List<DateTime> (if it's a DateTime column) or List<double> (if it's a column of doubles). Is there any way that this can be achieved? Perhaps my approach is wrong, is there a better approach to this problem?
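
    One generic shape for this (illustrative only; TypedDataSet and its members are invented for the example) is to keep each column as a strongly typed List<T> stored behind the non-generic IList interface. Values are then never boxed, and the only cast is a single reference cast per column access rather than one per value:

        using System;
        using System.Collections;
        using System.Collections.Generic;

        // Sketch: each column is a List<T> held as IList, so GetValues<double>("Rating")
        // hands back the live List<double> with one cast.
        public class TypedDataSet
        {
            private readonly Dictionary<string, IList> _columns = new Dictionary<string, IList>();

            // The caller declares the column's element type once, up front.
            public void AddColumn<T>(string label)
            {
                _columns.Add(label, new List<T>());
            }

            public void AddValue<T>(string label, T value)
            {
                ((List<T>)_columns[label]).Add(value);
            }

            public List<T> GetValues<T>(string label)
            {
                return (List<T>)_columns[label];
            }

            public T GetValue<T>(string label, int index)
            {
                return ((List<T>)_columns[label])[index];
            }
        }

        // Usage sketch:
        //   var data = new TypedDataSet();
        //   data.AddColumn<DateTime>("Date");
        //   data.AddColumn<double>("Rating");
        //   data.AddValue("Date", DateTime.Parse("02/03/2010"));
        //   data.AddValue("Rating", 4.6);
        //   List<double> ratings = data.GetValues<double>("Rating");

    The trade-off is that the caller must know (or look up) each column's type at the point of access, but the columns themselves stay strongly typed end to end.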

    Read the article

  • Mouseover triggered on absolute positioned div

    - by Tauren
    Objective

    Have a small magnifying glass icon that appears in the top right corner of a table cell when the table cell is hovered over. Mousing over the magnifying glass icon and clicking it will open a dialog window to show detailed information about the item in that particular table cell. I want to reuse the same icon for hundreds of table cells without recreating it each time.

    Partial Solution

    Have a single <span> that is absolutely positioned and hidden. When a _previewable table cell is hovered, the <span> is moved to the correct location and shown. This <span> is also moved in the DOM to be a child of the _previewable table cell. This enables a click handler attached to the <span> to find the _previewable parent, and get information from its jQuery data() object that is used to populate the contents of the dialog.

    Here is a very simplified version of my HTML:

        <body>
          <span id="options">
            <a class="ui-state-default ui-corner-all">
              <span class="ui-icon ui-icon-search"></span>
              Preview
            </a>
          </span>
          <table>
            <tr>
              <td class="_previewable">
                <img scr="user_1.png"/>
                <span>Bob Smith</span>
              </td>
            </tr>
          </table>
        </body>

    And this CSS:

        #options {
          position: absolute;
          display: none;
        }

    With this jQuery code:

        var $options = $('#options');

        $options.click(function() {
          $item = $(this).parents("._previewable");
          // Show popup based on data in $item.data("id");
          Layout.renderPopup($item.data("id"), $item.data("popup"));
        });

        $('._previewable').live('mouseover mouseout', function(event) {
          if (event.type == 'mouseover') {
            var $target = $(this);
            var $parent = $target.offsetParent()[0];
            var left = $parent.scrollLeft + $target.position().left + $target.outerWidth() - $options.outerWidth() + 1;
            var top = $parent.scrollTop + $target.position().top + 2;
            $options.appendTo($target);
            $options.css({
              "left": left + "px",
              "top": top + "px"
            }).show();
          } else {
            // On mouseout, $options continues to be a child of $(this)
            $options.hide();
          }
        });

    Problem

    This solution works perfectly until the contents of my table are reloaded or changed via AJAX. Because the <span> was moved from the <body> to be a child of the cell, it gets thrown out and replaced during the AJAX call. So my first thought is to move the <span> back to the body on mouseout of the table cell, like this:

        else {
          // On mouseout, $options is moved back to be a child of body
          $options.appendTo("body");
          $options.hide();
        }

    However, with this, the <span> disappears as soon as it is moused over. The mouseout event seems to be called on _previewable when the mouse moves into the <span>, even though the <span> is a child of _previewable and fully displayed within the boundaries of the _previewable table cell. At this point, I've only tested this in Chrome.

    Questions

    Why would mouseout be called on _previewable, when the mouse is still within the boundaries of _previewable? Is it because the <span> is absolutely positioned? How can I make this work, without recreating the <span> and its click handler on each AJAX table refresh?

    Read the article

  • innerHTML not working after submit button is clicked

    - by user1781453
    I am trying to get the innerHTML to change to what is in the end of the function "calculate" but nothing happens once I hit submit. Here is my code: Pizza Order Form .outp {border-style:solid;background-color:white; border-color:red;padding:1em; border-width: .5em;} .notes {font-size:smaller;font-style:italic;} p {margin-left: 15%; width: 65%;} textarea {resize : none;} </style> function calculate(){ var type; var newline=""; var sum=0; var toppings=""; if( document.getElementById("small").checked==true){ type="Small Pizza"; sum+=4; } if( document.getElementById("medium").checked==true){ type="Medium Pizza"; sum+=6; } if( document.getElementById("large").checked==true){ type="Large Pizza"; sum+=8; } if( document.getElementById("pepperoni").checked==true){ toppings=toppings+"pepperoni, "; sum+=0.75; } if( document.getElementById("olives").checked==true){ toppings=toppings+"olives, "; sum+=0.6; } if( document.getElementById("sausage").checked==true){ toppings=toppings+"sausage, "; sum+=0.75; } if( document.getElementById("peppers").checked==true){ toppings=toppings+"peppers, "; sum+=0.5; } if( document.getElementById("onions").checked==true){ toppings=toppings+"onions, "; sum+=0.5; } if( document.getElementById("cheese").checked==true){ toppings=toppings+"Cheese Only, "; } var length = toppings.length; toppings = toppings.slice(0,length-2); document.getElementById("opta").innerHTML = type+newline+"Toppings:"+newline+toppings+newline+"Price - $"+sum; } Joe's Pizza Palace On-line Order Form <p id = "op" class = "outp" > <b /> Select the size Pizza you want: &nbsp;&nbsp; <input type="radio" name = "size" id="small" value = "small"> Small - $4.00 <b /> <input type="radio" name = "size" id="medium" value = "medium"> Medium - $6.00 <b /> <input type="radio" name = "size" id="large" value = "large"> Large - $8.00 <b /> </p> <p id = "op1" class = "outp" > <b /> Select the toppings: &nbsp;&nbsp; <input type="checkbox" name = "size" id="pepperoni" value = "pepperoni"> Pepperoni ($0.75) <b /> <input type="checkbox" name = "size" id="olives" value = "olives"> Olives ($0.60) <b /> <input type="checkbox" name = "size" id="sausage" value = "sausage"> Sausage ($0.75) <b /> <br /> <input type="checkbox" name = "size" id="peppers" value = "peppers"> Peppers ($0.50) <b /> <input type="checkbox" name = "size" id="onions" value = "onions"> Onions ($0.50) <b /> <input type="checkbox" name = "size" id="cheese" value = "cheese"> Cheese Only <b /> To obtain the price of your order click on the price button below: <br /><br /> <input type="button" align = "left" onclick="calculate();" value="Price (Submit Button)"/> <input type="reset" align = "left" value="Clear Form"/> <br /><br /> <textarea class="outp3" id="opta" style="border-color:black;" rows="6" cols="40" > </textarea>

    Read the article

  • Best way to add an extra (nested) form in the middle of a tabbed form

    - by Scharrels
    I've got a web application, consisting mainly of a big form with information. The form is split into multiple tabs, to make it more readable for the user:

        <form>
          <div id="tabs">
            <ul>
              <li><a href="#tab1">Tab1</a></li>
              <li><a href="#tab2">Tab2</a></li>
            </ul>
            <div id="tab1">A big table with a lot of input rows</div>
            <div id="tab2">A big table with a lot of input rows</div>
          </div>
        </form>

    The form is dynamically extended (extra rows are added to the tables). Every 10 seconds the form is serialized and synchronized with the server. I now want to add an interactive form on one of the tabs: when a user enters a name in a field, this information is sent to the server and an id associated with that name is returned. This id is used as an identifier for some dynamically added form fields. A quick sketchup of such a page would look like this:

        <form action="bigform.php">
          <div id="tabs">
            <ul>
              <li><a href="#tab1">Tab1</a></li>
              <li><a href="#tab2">Tab2</a></li>
            </ul>
            <div id="tab1">A big table with a lot of input rows</div>
            <div id="tab2">
              <div class="associatedinfo">
                <p>Information for Joe</p>
                <ul>
                  <li><input name="associated[26][]" /></li>
                  <li><input name="associated[26][]" /></li>
                </ul>
              </div>
              <div class="associatedinfo">
                <p>Information for Jill</p>
                <ul>
                  <li><input name="associated[12][]" /></li>
                  <li><input name="associated[12][]" /></li>
                </ul>
              </div>
              <div id="newperson">
                <form action="newform.php">
                  <p>Add another person:</p>
                  <input name="extra" /><input type="submit" value="Add" />
                </form>
              </div>
            </div>
          </div>
        </form>

    The above will not work: nested forms are not allowed in HTML. However, I really need to display the form on that tab: it's part of the functionality of that page. I also want the behaviour of a separate form: when the user hits return in the form field, the "Add" submit button is pressed and a submit action is triggered on the partial form. What is the best way to solve this problem?

    Read the article

  • Survey Data Model - How to avoid EAV and excessive denormalization?

    - by AlexDPC
    Hi everyone,

    My database skills are mediocre at best and I have to design a data model for survey data. I have spent some time thinking about this, and right now I feel that I am stuck between some kind of EAV model and a design involving hundreds of tables, each with hundreds of columns (and thousands of records). There must be a better way to do this and I hope that the wise folks on this forum can help me. I have already searched various forums, but I couldn't really find a solution. If it has already been given elsewhere, please excuse me and provide me with a link so I can read it up.

    Some assumptions about the data I have to deal with:

    - Each survey consists of 1 to n questionnaires
    - Each questionnaire consists of 100-2,000 questions (please ignore that 2,000 questions really sound like a lot to answer...)
    - Questions can be of various types: multiple-choice, free text, a number (like age, income, percentages, ...)
    - Each survey involves 10-200 countries (these are not the respondents; the respondents are actually people in the countries)
    - Depending on the type of questionnaire, each questionnaire is answered by 100-20,000 respondents per country
    - A country can adapt the questionnaires for a survey, i.e. add, remove or edit questions
    - The data for one country is gathered in a separate database in that country; there is no possibility for online integration from the start
    - The data for all countries has to be integrated later. This means, for example, that if a country has deleted a question, that data must somehow be derived from what they sent in order to achieve a uniform design across all countries
    - I will have to write the integration and cleaning software, which will need to work with every country's data
    - In the end the data needs to be exported to flat files, one rectangular grid per country and questionnaire

    I have already discussed this topic with people from various backgrounds and have not come to a good solution yet. I mainly got two kinds of opinions.

    The domain experts, who are used to working with flat files (spreadsheet-style) for data processing and analysis, vote for a denormalized structure with loads of tables and columns as I described above (1 table per country and questionnaire). This sounds terrible to me, because I learned that wide tables are to be avoided, it will be annoying to determine which columns are actually in a table when working with it, the database will become cluttered with hundreds of tables (or I would even need to set up multiple databases, each with a similar yet slightly different design), etc.

    O-O programmers vote for a strongly "normalized" design, which would effectively lead to a central table containing all the answers from all respondents to all questions. This table would either need to contain a column of the sql_variant type or multiple answer columns of different types to store answers of different types (multiple choice, free text, ...). The former would essentially be an EAV model. I tend to follow Joe Celko here, who strongly discourages its use (he calls it OTLT or "One True Lookup Table"). The latter would imply that each row would contain null cells for the not applicable types by design.

    Another alternative I could think of would be to create one table per answer type, i.e., one for multiple-choice questions, one for free text questions, etc. That's not so generic, it would lead to a lot of union joins, I think, and I would have to add a table if a new answer type is invented.

    Sorry for boring you with all this text and thank you for your input!

    Cheers, Alex

    PS: I asked the same question here: http://www.eggheadcafe.com/community/aspnet/13/10242616/survey-data-model--how-to-avoid-eav-and-excessive-denormalization.aspx

    Read the article

  • Div width: auto and IE

    - by Andrew Heath
    I'm using the jQuery qTip to show individual users and their votes when an average rating is mousedover. qTip calls a PHP file which grabs all the users and votes for the item from the MySQL database and builds a 3 column table, which appears as the tooltip.

    In Firefox, the tooltip displays properly. In IE7 (haven't tested on IE8 yet), the tooltip is the proper height, but the width is only 2 or 3 characters - not the entire table. If I set the width of the div to a fixed number, say width: 300px; I can coax IE into displaying it properly. However, the length of my users' names varies considerably, and I'd rather not nail down the div to its maximum possible width and then have a crapload of whitespace when you look at an item voted on only by "Joe". Using width: auto; has no effect in IE7. Are there alternatives? Sorry if I've overlooked a similar question. I searched for a bit before posting but didn't find anything suitable.

    EDIT TO ADD CODE:

        <div style="-moz-border-radius: 0pt 0pt 0pt 0pt; position: absolute; width: 358px; display: none; top: 384.617px; left: 463.5px; z-index: 6000;" class="qtip qtip-defaults" qtip="0">
          <div style="position: relative; overflow: hidden; text-align: left;" class="qtip-wrapper">
            <div style="overflow: hidden; background: none repeat scroll 0% 0% white; border: 1px solid rgb(211, 211, 211);" class="qtip-contentWrapper">
              <div class="qtip-content qtip-content" style="background: none repeat scroll 0% 0% white; color: rgb(17, 17, 17); overflow: hidden; text-align: left; padding: 5px 9px;">
                <div id="WhoResults">
                  <table>
                    <tbody>
                      <tr>
                        <td>guy1</td>
                        <td>guy2</td>
                        <td>guy3</td>
                      </tr>
                      <tr>
                        <td>guy4</td>
                        <td>guy5</td>
                        <td>guy6</td>
                      </tr>
                    </tbody>
                  </table>
                </div>
              </div>
            </div>
          </div>
        </div>

    I have applied no CSS styling. That's all been handled by qTip. I tried to format it as best I could. Thanks for any help you can provide.

    Read the article

< Previous Page | 95 96 97 98 99 100 101  | Next Page >