Search Results

Search found 10838 results on 434 pages for 'adf task flow'.


  • BPM 11g and Human Workflow Shadow Rows by Adam Desjardin

    - by JuergenKress
    During the OFM Forum last week, there were a few discussions around the relationship between the Human Workflow (WF_TASK*) tables in the SOA_INFRA schema and BPMN processes. It is important to know how these are related because it can have a performance impact. We have seen this performance issue several times when BPMN processes are used to model high-volume system integrations without knowing all of the implications of using BPMN in this pattern. Most people assume that BPMN instances and their related data are stored in the CUBE_*, DLV_*, and AUDIT_* tables in the same way that BPEL instances are stored, with additional data in the BPM_* tables as well. The group of tables that is not usually considered, though, is the WF* tables that are used for Human Workflow. The WFTASK table is used by all BPMN processes in order to support features such as process-level comments and attachments, whether those features are currently used in the process or not. For a standard human task that is created from a BPMN process, the following data is stored in the WFTASK table: one row per human task that is created, with COMPONENTTYPE = "Workflow", TASKDEFINITIONID = Human Task ID (partition/CompositeName!Version/TaskName), and ACCESSKEY = NULL. Read the complete article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
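    A quick way to see how much of the WFTASK volume in your own environment comes from BPMN components is to count rows per COMPONENTTYPE, as described above. The sketch below is not from the article: it assumes the Oracle managed ADO.NET driver (Oracle.ManagedDataAccess) is available and uses a placeholder connection string for the SOA_INFRA schema.

        using System;
        using Oracle.ManagedDataAccess.Client;

        class WfTaskStats
        {
            static void Main()
            {
                // Placeholder connection string - point it at your own SOA_INFRA schema.
                const string connStr = "User Id=SOA_INFRA;Password=secret;Data Source=//soahost:1521/soadb";

                using (var conn = new OracleConnection(connStr))
                using (var cmd = conn.CreateCommand())
                {
                    conn.Open();

                    // BPMN-generated human workflow rows show up with COMPONENTTYPE = 'Workflow'.
                    cmd.CommandText =
                        "SELECT COMPONENTTYPE, COUNT(*) AS CNT " +
                        "FROM WFTASK GROUP BY COMPONENTTYPE ORDER BY CNT DESC";

                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0,-20} {1}",
                                Convert.ToString(reader["COMPONENTTYPE"]),
                                Convert.ToInt64(reader["CNT"]));
                        }
                    }
                }
            }
        }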

    Read the article

  • Web workflow solution - how should I approach the design?

    - by Tom Pickles
    We've been tasked with creating a web-based workflow tool to track change management. It has a single workflow with, for the most part, multiple synchronous tasks, but it branches out at one point into tasks running in parallel which meet up again later on. There will be all sorts of people using the application, and all of them will need to see their outstanding tasks for each change, but only theirs, not other people's. There will also be a high-level group of people who oversee all changes, so they need to see everything. They will need to see which tasks have not been done in the specified time, who is responsible, etc. The data will be persisted to a SQL database. It'll all be put together using .NET. I've been trying to learn and implement OOP in my designs of late, but I'm wondering if this is moot in this instance, as it may be better to have the business logic for this in stored procedures in the DB. I could use POCOs, a front-end layer and a data-access layer for the web application and just use it as a mechanism for CRUD actions on the DB, then use SPs fired in the DB to apply the business rules. On the other hand, I could use an object-oriented design within the web app, but as the data in the app is stateless, is this a bad idea? I could try to model the whole application out into a class structure, implementing interfaces, base classes and all that good stuff. So I would create a Change class which contains a list of task classes/types defining each task, and implement an ITask interface, etc. Put end-user types into the tasks to identify who should be doing which task, then apply all the business logic in the respective class methods. What approach do you think I should use for this solution?
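    For what it is worth, a minimal sketch of the class structure described in the question might look like the following. The names (Change, ITask, UserRole, and so on) are illustrative only, and whether the rules live here or in stored procedures is exactly the trade-off being asked about.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Role of the person expected to complete a task.
        public enum UserRole { Requester, Implementer, Approver, ChangeManager }

        public enum TaskStatus { NotStarted, InProgress, Complete }

        // Contract every task type implements, so sequential and parallel tasks are treated uniformly.
        public interface ITask
        {
            string Name { get; }
            UserRole ResponsibleRole { get; }
            DateTime DueDate { get; }
            TaskStatus Status { get; }
            void Complete();
        }

        // A change is essentially an ordered collection of tasks plus some business rules.
        public class Change
        {
            private readonly List<ITask> _tasks = new List<ITask>();

            public string Reference { get; private set; }

            public Change(string reference) { Reference = reference; }

            public void AddTask(ITask task) { _tasks.Add(task); }

            // Only the tasks belonging to the caller's role - supports the "only see my tasks" requirement.
            public IEnumerable<ITask> OutstandingTasksFor(UserRole role)
            {
                return _tasks.Where(t => t.ResponsibleRole == role && t.Status != TaskStatus.Complete);
            }

            // Overview for the high-level group: anything past its due date and not finished.
            public IEnumerable<ITask> OverdueTasks()
            {
                return _tasks.Where(t => t.Status != TaskStatus.Complete && t.DueDate < DateTime.UtcNow);
            }
        }

    Either way, the shape of the data (a change owning a collection of tasks, each with a responsible role and a due date) stays the same; the question is mainly where the filtering and overdue rules are enforced.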

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
    Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year which was almost too long for mortal man to finish in the 1-hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology - and also Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox running Solaris 11 with 6 virtual 3 GB disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use.
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared. Task: See how to implement deduplication and its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore the changing of a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system.
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate: root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the file system to be migrated to read-only: root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration Manual. That concludes this lab session.

    Read the article

  • CodePlex Daily Summary for Friday, August 01, 2014

    CodePlex Daily Summary for Friday, August 01, 2014Popular ReleasesQuickMon: Version 3.20: Added a 'Directory Services Query' collector agent. This collector allows for querying Active Directory using simple DirectorySearcher queries.Grunndatakvalitet: Initial working: Show Altinn metadata in Excel. To get a live list you need to run the sql script on a server and update the connection string in ExcelMultiple Threads TCP Server: Project: this Project is based on VS 2013, .net freamwork 4.0, you can open it by vs 2010 or laterEssence#: Philemon-2 (Alpha Build 18): The Philemon-2 release fixes bugs, most of which have to do with the binding of .Net assemblies. Configuration profiles have been added in order to help solve some of the .Net assembly binding issues. A configuration profile is persistently stored as a folder in the filesystem. It must be located in the folder %EssenceSharpPath%\Config. The folder name must use ".profile" as its file extension. A configuration profile folder may (optionally) contain one or more files--each with its own form...DataBind: DataBind 1.0: - Efficiency ImprovedConsole Progress Bar: ConsoleProgressBar-0.3: - Validate ProgressInPercentageSpace Engineers Server Manager: SESM V1.10: V1.10 - New auto update system ! You won't miss any SE Update now ^^ To learn more : https://sesm.codeplex.com/wikipage?title=Activating%20the%20auto%20update%20system - Moved the serve-wide configuration options to an other file than web.config. It should solve the "my parametters are errased by each SESM update" ! Warning ! If you upgrade from any previous version, you will need to copy-pase your parameters from your old web.config into the new SESM.config (this file is generated by acc...Accesorios de sitios Torrent en Español para Synology Download Station: Pack de Torrents en Español 6.0.0: Agregado los módulos de DivXTotal, el módulo de búsqueda depende del de alojamiento para bajar las series Utiliza el rss: http://www.divxtotal.com/rss.php DbEntry.Net (Leafing Framework): DbEntry.Net 4.2: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access compnent for .Net 4.0+. It has clearly and easily programing interface for ORM and sql directly, and supoorted Access, Sql Server, MySql, SQLite, Firebird, PostgreSQL and Oracle. It also provide a Ruby On Rails style MVC framework. Asp.Net DataSource and a simple IoC. DbEntry.Net.v4.2.Setup.zip include the setup package. DbEntry.Net.v4.2.Src.zip include source files and unit tests. DbEntry.Net.v4.2.Samples.zip ...Recaptcha for .NET: Recaptcha for .NET v1.5: What's NewMinor bug fixes Support for legacy .NET framework 4.0 and ASP.NET MVC 4. Support for .NET Framework 4.5.1.Azure Storage Explorer: Azure Storage Explorer 6 Preview 1: Welcome to Azure Storage Explorer 6 Preview 1 This is the first release of the latest Azure Storage Explorer, code-named Phoenix. What's New?Here are some important things to know about version 6: Open Source Now being run as a full open source project. Full source code on CodePlex. Collaboration encouraged! Updated Code Base Brand-new code base (WPF/C#/.NET 4.5) Visual Studio 2013 solution (previously VS2010) Uses the Task Parallel Library (TPL) for asynchronous background operat...Wsus Package Publisher: release v1.3.1407.29: Updated WPP to recognize the very latest console version. Some files was missing into the latest release of WPP which lead to crash when trying to make a custom update. 
Add a workaround to avoid clipboard modification when double-clicking on a label when creating a custom update. Add the ability to publish detectoids. (This feature is still in a BETA phase. Packages relying on these detectoids to determine which computers need to be updated, may apply to all computers).SSIS Connector for Sharepoint Online: Sharepoint_Online_ConnectorsV2.3.5: V2.3.5 Removed validation for user name where it was accepting user name with only ".com" V2.3.1 Added support for column with string array type. (For source only) V2.3 Added support for delete operation. just map ID columns and choose operation type as "Delete" Any other Operation Type will be taken as Insert\Update MultiUser type will be taken as User field and single user will be added, still figuring out how to add multi user field (whatever available on internet not helping :( ) V2.2 ...Artezio SharePoint 2013 Workflow Activities for VS & Actions for Designer: SharePoint Designer 2013 custom workflow actions 1.0: SharePoint Designer 2013 custom workflow actions that work with permissions. Add Role Assignment Add Role Assignments Delete Role Assignments Get Role Definition Id Get Role Definition Id By Role Type Id Reset Role Inheritance Artezio SharePoint Consulting & DevelopmentVG-Ripper & PG-Ripper: PG-Ripper 1.4.32: changes NEW: Added Support for 'ImgMega.com' links NEW: Added Support for 'ImgCandy.net' links NEW: Added Support for 'ImgPit.com' links NEW: Added Support for 'Img.yt' links FIXED: 'Radikal.ru' links FIXED: 'ImageTeam.org' links FIXED: 'ImgSee.com' links FIXED: 'Img.yt' linksAsp.Net MVC-4,Entity Framework and JQGrid Demo with Todo List WebApplication: Asp.Net MVC-4,Entity Framework and JQGrid Demo: Asp.Net MVC-4,Entity Framework and JQGrid Demo with simple Todo List WebApplication, Overview TodoList is a simple web application to create, store and modify Todo tasks to be maintained by the users, which comprises of following fields to the user (Task Name, Task Description, Severity, Target Date, Task Status). 
TodoList web application is created using MVC - 4 architecture, code-first Entity Framework (ORM) and Jqgrid for displaying the data.Waterfox: Waterfox 31.0 Portable: New features in Waterfox 31.0: Added support for Unicode 7.0 Experimental support for WebCL New features in Firefox 31.0:New Add the search field to the new tab page Support of Prefer:Safe http header for parental control mozilla::pkix as default certificate verifier Block malware from downloaded files Block malware from downloaded files audio/video .ogg and .pdf files handled by Firefox if no application specified Changed Removal of the CAPS infrastructure for specifying site-sp...SuperSocket, an extensible socket server framework: SuperSocket 1.6.3: The changes below are included in this release: fixed an exception when collect a server's status but it has been stopped fixed a bug that can cause an exception in case of sending data when the connection dropped already fixed the log4net missing issue for a QuickStart project fixed a warning in a QuickStart projectYnote Classic: Ynote Classic 2.8.5 Beta: Several Changes - Multiple Carets and Multiple Selections - Improved Startup Time - Improved Syntax Highlighting - Search Improvements - Shell Command - Improved StabilityTEBookConverter: 1.2: Fixed: Could not start convertion in some cases Fixed: Progress show during convertion was truncated Fixed: Stopping convertion didn't reset program titleNew Projects.NET to C/C++ Interop Sample: Reference projects that demonstrate how to invoke C/C++ code from .NET using C#.Aircraft Guidance Systems: Software for RC model control and navigationDNN Upgrade Wizard: Provides a simple method for upgrading a DNN siteDrop and Create indexes SSIS Task: This is a simple SSIS Control Flow Task which can be used to drop and re-create all non-primary key indexes associated with specified table.Dynamics CRM 2013 Global Advanced Find: Dynamics CRM 2013 Global Advanced Find adds a button to the global navigation allowing users to launch an advanced find window from anywhere in the system.itsanewproject: sdlfjsldjfJNMSJW: hahaOmni Kits: A library tend to help in projects for any purpose while keeping small.Phase1Week1: Learning how to post, publish, and patch my commits to CodePlex with homework using bootsrap and font awesome.PL_Last1: Pll1Tabletoparmy: Tabletoparmy ist ein Tool um Tabletop Miniaturen zu Verwalten. Aktuell gibt es nur die Möglichkeit ein X-Wing Turnier zu verwalten.TalesAndTiles: Tile based game developed with Microsoft XNA Framework 4.0??????: ?????????

    Read the article

  • Deploying Fusion Order Demo on 11.1.1.6 by Antony Reynolds

    - by JuergenKress
    Do you need to build a demo for a customer? Why not use Fusion Order Demo (FOD) and modify it to do some extra things? "Great idea, let me install it on one of my Linux servers," I said. It turns out there are a few gotchas, so here is how I installed it on a Linux server with JDeveloper on my Windows desktop. Task 1: Install Oracle JDeveloper Studio. I already had JDeveloper 11.1.1.6 with SOA extensions installed, so this was easy. Task 2: Install the Fusion Order Demo Application. The first thing to do is to obtain the latest version of the demo from OTN; I obtained the R1 PS5 release. Gotcha #1 – my WinZip wouldn't unzip the file; I had to use 7-Zip. Task 3: Install Oracle SOA Suite. On the domain, modify the setDomainEnv script by adding "-Djps.app.credential.overwrite.allowed=true" to JAVA_PROPERTIES and restarting the Admin Server. Also set the JAVA_HOME variable and add Ant to the path. I created a domain with separate SOA and BAM servers and also set up the Node Manager to make it easier to stop and start components. Read the full blog post by Antony. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Fusion Order Demo on 11.1.1.6,Antony Reynolds,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • SSIS ForEachLoop Container

    - by Leonard Mwangi
    I recently had a client request to create an SSIS package that would loop through a set of data in SQL tables to allow them to complete their data transformation processes. Knowing that Integration Services has a ForEach Loop Container, I thought the task would be easy, but the moment I jumped into it I realized there was no straightforward way to accomplish it, since the ForEach Loop doesn't really have a "loop through a table" enumerator. With the capabilities of Integration Services, I was still confident it was possible; it was just a matter of creativity to get it done. I set out to discover what the different ForEach Loop Editor enumerators did and settled on the Variable Enumerator. Here is how I accomplished the task. 1. Drop your ForEach Loop Container in your work area. 2. Create a few SSIS variables that will contain the data. Notice I have assigned the MyID_ID variable a value of "TEST", which is not evaluated either. This variable will be assigned data from the database, hence allowing us to loop. 3. In the ForEach Loop Editor's Collection, select Variable Enumerator. 4. Once this is all set, we need a mechanism to grab the data from the SQL table and assign it to the variable. Fig: Select top 1 record Fig: Assign top 1 record to the variable 5. Now all that's required is a housekeeping step that updates the table you are looping over so that you can grab the next record. Fig: A look at the complete package
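    The screenshots referenced above are not reproduced here, but steps 4 and 5 boil down to "claim the next unprocessed row into the variable, then flag it so the loop moves on". As a rough illustration only (the table, column and connection names are invented, and the article accomplishes this with SSIS components configured in the designer rather than code), the equivalent logic in plain ADO.NET looks like this:

        using System;
        using System.Data.SqlClient;

        static class NextRowFetcher
        {
            // Returns the next unprocessed ID, or null when the loop should stop.
            // Mirrors steps 4 and 5: claim one row, flag it as processed, hand back its key.
            public static int? FetchAndFlagNext(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    using (var cmd = conn.CreateCommand())
                    {
                        // TOP (1) without ORDER BY picks an arbitrary unprocessed row,
                        // which is fine when every row must be visited exactly once.
                        cmd.CommandText =
                            @"UPDATE TOP (1) dbo.WorkQueue
                              SET Processed = 1
                              OUTPUT inserted.Id
                              WHERE Processed = 0;";

                        object result = cmd.ExecuteScalar();
                        return result == null ? (int?)null : Convert.ToInt32(result);
                    }
                }
            }
        }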

    Read the article

  • Strict Pomodoro and other time management Chrome extensions

    - by kerry
    I have recently begun using the Pomodoro Technique to increase my productivity. However, I still find myself getting sucked into the vortex of useless information that is the internet. With that in mind, I began searching for a useful Chrome extension to replace the Android Pomodoro app I have been using to manage my ‘doros. I even considered writing it myself. Luckily, I stumbled on one that had a similar feature set to what I was looking for. Strict Pomodoro is an excellent Chrome extension for practicing Pomodoro. Though lacking a few key features, such as the ability to set the duration of your pomodoros and breaks, it still has a key feature that helps me stay on task: it blocks time-sucking websites. You can set filter lists and it will keep you from accessing them during a Pomodoro, effectively reminding you to stay on task. Also, the author readily admits that it was quickly put together and new features may be added down the road. For now, it is still an excellent option. For those of you who do not practice Pomodoro but are trying to stay on task, the StayFocusd extension will effectively manage the amount of time you spend on useless (non-productive) sites. It also has a rich feature set that may be better for your work habits. OK, break's over. Time to get back to work. 25 minutes at a time.

    Read the article

  • Displaying the Saved Pictures in the Windows Phone 8 emulator

    - by Laurent Bugnion
    One cool feature of the Windows Phone emulator is that it allows you to select pictures from your app (using the PhotoChooserTask) without having to try your app on a physical device. For example, this code (which I used in some of my recent presentations) will trigger the Photo Chooser UI to be displayed on the emulator too:

        private Action<IEnumerable<IImageFileInfo>> _callback;

        public void SelectFiles(Action<IEnumerable<IImageFileInfo>> callback)
        {
            var task = new PhotoChooserTask
            {
                ShowCamera = true
            };

            task.Completed += TaskCompleted;
            _callback = callback;
            task.Show();
        }

        void TaskCompleted(object sender, PhotoResult e)
        {
            if (e.Error == null
                && e.ChosenPhoto != null
                && _callback != null)
            {
                var fileName = e.OriginalFileName
                    .Substring(e.OriginalFileName.LastIndexOf("\\") + 1);
                var info = new FileViewModel(e.ChosenPhoto, fileName);
                var infos = new List<IImageFileInfo> { info };
                _callback(infos);
            }
        }

    In Windows Phone 8 however, when you execute this code, you will be shown an almost empty Photo Chooser UI: notice that the "Saved Pictures" album is missing. At first I thought it was just not there at all, but you can actually restore it with the following steps: 1. Press the Windows button. 2. On the main screen, press Photos. 3. Press Albums. 4. Open the so-called "8" photo album. 5. Press Back until you are back in your app and try again. This time you will see the saved pictures, and can perform your tests in more realistic conditions! Happy coding! Laurent Bugnion (GalaSoft)

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial in enabling many companies to adhere to the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes-Oxley). From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information and more is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting. Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results. Figure: The only thing missing is Build In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play in knitting this together. Figure: The relationships required to make this work can get a little confusing If Build is added to this to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release. Figure: How are things progressing Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other… Requirements – The root of all knowledge Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs) but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model: Figure: If the centre of the world was a requirement We can track which releases Requirements were scheduled in, but this can change over time as more details come to light. Figure: Who edited the Requirement and when There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release. Figure: Some magic required, but result still achieved As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an "Asof" clause at the end to query historical data in the operational store for TFS. select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>] Figure: Work Item Query Language (WIQL) format In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
Figure: Saving Queries locally can be useful All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS. Tasks – Where the real work gets done Tasks are the workhorse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts. Figure: The Task Work Item Type has its own relationships Requirements should be broken down into Tasks that the development team works from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type. Figure: Tasks are related to the Requirement Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth. Figure: Task Work Item Type has a narrower purpose Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset. Figure: Tasks are associated with Changesets Changesets – Who wrote this crap Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task. Figure: Changesets are linked by Tasks and Builds Figure: Changesets tell us what happened to the files in Version Control Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level. Figure: On Check-in you can resolve a Task which automatically associates it Because of this we can view the history of any file within the system and see how many changes have been made and what Changesets they belong to. Figure: Changes are tracked at the File level What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system. Figure: Annotate shows the lines The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use? Branches – And why we need them Branches are a really powerful tool for development and release management, but they are most important for audits. Figure: One way to Audit releases The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. Another, better method is to explicitly link the Build output to the Build.
Builds – Let's tie some more of this together Builds are the glue that helps us enable the next level of traceability by tying everything together. Figure: The dashed pieces are not out of the box but can be enabled When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build. Figure: The folder identifies what changes are included in the build The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build. Figure: What changes have been made since the last successful Build It will then use that information to identify the Work Items that are associated with all of those Changesets. The Changesets are associated with the Build, and the Build changes the "Integrated In" field of those Work Items. Figure: Find all of the Work Items to associate with The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change. Figure: Now we know which Work Items were completed in a build Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements? Test Cases – How we know we are done The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios. The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it not only makes it difficult for them to reject a Requirement when it passes all of the tests, but also provides a level of traceability and validation for audit that a feature has been built and tested to order. Figure: You can have many Acceptance Criteria for a single Requirement It is crucial for this to work that someone from the Business signs off on the Test Case moving from the "Design" to "Ready" states. The second scenario is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to prove the existence, and then the continued absence, of a Bug. Figure: Associating a Test Case with an automated Test Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way. Bug – Let's not have regressions In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked. Figure: Bugs are the centre of their own world If the fix to a Bug is big enough to require that it is broken down into Tasks then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build.
You would also have one or more Test Cases to prove the fix for the Bug. Figure: Bugs have many associations This allows you to track Bugs / Defects in your system effectively and report on them. Change Request – I am not a feature In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement. Figure: Make sure your Change Requests only Affect Requirements and not rewrite them Conclusion Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member: Business Analysts manage Requirements and Change Requests; Programmers manage Tasks and check in against Change Requests and Bugs; Testers manage Bugs and Test Cases; Build Masters manage Builds. Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?
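    As a small footnote to the "Asof" discussion above, the same historical WIQL can also be run from code through the TFS 2010 client object model. This is a rough sketch, not from the article; the collection URL, project path and date are placeholders:

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class HistoricalRequirementsQuery
        {
            static void Main()
            {
                // Placeholder collection URL - substitute your own project collection.
                var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                // Which Requirements sat under Release 1 as of a past date,
                // regardless of where they have been moved since.
                const string wiql =
                    @"SELECT [System.Id], [System.Title], [System.State]
                      FROM WorkItems
                      WHERE [System.WorkItemType] = 'Requirement'
                        AND [System.IterationPath] UNDER 'MyProject\Release 1'
                      ORDER BY [System.Id]
                      ASOF '6/1/2010'";

                foreach (WorkItem wi in store.Query(wiql))
                {
                    Console.WriteLine("{0} {1} ({2})", wi.Id, wi.Title, wi.State);
                }
            }
        }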

    Read the article

  • PostSharp deployment to build machine- use Setup installation, not NuGet package.

    - by Michael Freidgeim
    PostSharp has well-documented different methods of installation. I chose to install the NuGet packages because, according to Deploying PostSharp into a Source Repository, NuGet is the easiest way to add PostSharp to a project without installing the product on every machine. However, it didn't work well for me. I added the PostSharp NuGet package to one project in the solution. When I wanted to use PostSharp in another project, the Visual Studio tab showed that PostSharp was not enabled for that project. I added the NuGet package to the new project, which installed a new version of the package in a new Packages subfolder. When I wanted to reference PostSharp from a third project, I ended up with yet another version of PostSharp installed. Additionally, multiple versions of Diagnostics were created. It definitely causes confusion and errors. We experienced more problems on the build server. According to Using PostSharp on a Build Server, "If you chose to deploy PostSharp in the source repository, it does not need to be installed specifically on the build server." It didn't work on our build server. I kept getting the errors "The "AddIns" parameter is not supported by the "PostSharp21" task." and "The "DisableSystemBindingPolicies" parameter is not supported by the "PostSharp21" task." From my experience, the only way to have the latest version of PostSharp working on the build server is to install it using Setup as described in Deploying PostSharp with the Setup Program. Gael acknowledged the issues with possible version conflicts; see http://support.sharpcrafters.com/discussions/problems/388-the-postsharp21-task-failed-unexpectedly

    Read the article

  • Top Fusion Apps User Experience Guidelines & Patterns That Every Apps Developer Should Know About

    - by ultan o'broin
    We've announced the availability of the Oracle Fusion Applications user experience design patterns. Developers can get going on these using the Design Filter Tool (or DeFT) to select the best pattern for their context of use. As you drill into the patterns you will discover more guidelines from the Applications User Experience team and some from the Rich Client User Interface team too that are also leveraged in Fusion Apps. All are based on the Oracle Application Development Framework components. To accelerate your Fusion apps development and tailoring, here's some insider insight into the really important patterns and guidelines that every apps developer needs to know about. They start at a broad Fusion Apps information architecture level and then become more granular at the page and task level. Information Architecture: These guidelines explain how the UI of an Oracle Fusion application is constructed. This enables you to understand where the changes that you want to make fit into the overall application's information architecture. Begin with the UI Shell and Navigation guidelines, and then move on to page-level design using the Work Areas and Dashboards guidelines. UI Shell Guideline Navigation Guideline Introduction to Work Areas Guideline Dashboards Guideline Page Content: These patterns and guidelines cover the most common interactions used to complete tasks productively, beginning with the core interactions common across all pages, and then moving on to task-specific ones. Core Across All Pages Icons Guideline Page Actions Guideline Save Model Guideline Messages Pattern Set Embedded Help Pattern Set Task Dependent Add Existing Object Pattern Set Browse Pattern Set Create Pattern Set Detail on Demand Pattern Set Editing Objects Pattern Set Guided Processes Pattern Set Hierarchies Pattern Set Information Entry Forms Pattern Set Record Navigation Pattern Set Transactional Search and Results Pattern Group Now, armed with all this great insider information, get developing some great-looking, highly usable apps! Let me know in the comments how things go!

    Read the article

  • Clients with multiple proxy and multithreading callbacks

    - by enzom83
    I created a sessionful web service using WCF, and in particular I used the NetTcpBinding binding. In addition to methods to initiate and terminate a session, other methods allow the client to send one or more tasks to be performed (the results are returned via callback, so the service is duplex), but they also allow you to know the status of the service. Assuming you activate the same service on multiple endpoints, and assuming that the client knows these endpoints (for example, it could maintain a List of endpoints), the client should connect with one or more replicas of the same service. The client periodically updates the status of the service, so when it needs to perform a new task (the task is submitted by the user via the UI), it selects the service that is currently least loaded and sends the task to it. Periodically, the client also initiates a maintenance procedure in order to disconnect from one or more overloaded services and to connect with new services. I created a client proxy using the svcutil tool. I wish each proxy could be used simultaneously by different threads; for example, in addition to the thread that submits the tasks using a proxy, there are also the following two threads which act periodically: a thread that periodically sends a request to the service in order to obtain the updated state, and a thread that periodically selects a proxy to close and instantiates a new proxy to replace the closed one. To achieve these objectives, is it sufficient to create an array of proxies and manage their opening and closing in separate threads? I think I read that the proxy method calls are thread-safe, so I would not need to perform a lock before requesting updates from the service. However, when the maintenance procedure (which is activated on its own thread) decides to close a proxy, should I perform a lock? Finally, each proxy is also associated with an object that implements the callback interface for the service: are the callbacks (invoked on the client) executed on different threads on the client? I would like to wrap the management of the proxies in one or more classes so that it can then be easily managed within a WPF application.
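    One possible shape for the proxy management described in the question is sketched below. It is only an illustration: IWorkService stands in for the svcutil-generated duplex proxy, and the lock is there so the maintenance thread never swaps out a channel while another thread is picking one up.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.ServiceModel;

        // Stand-in for the generated duplex proxy; the real type name and members will differ.
        public interface IWorkService
        {
            void SubmitTask(string taskPayload);
            int GetCurrentLoad();
        }

        public class ProxyPool
        {
            private readonly object _sync = new object();
            private readonly Dictionary<string, IWorkService> _proxies =
                new Dictionary<string, IWorkService>();

            // Called by the maintenance thread: swap the proxy behind the lock so a
            // worker thread never grabs a channel that is about to be closed.
            public void Replace(string endpoint, Func<string, IWorkService> factory)
            {
                IWorkService old = null;
                lock (_sync)
                {
                    _proxies.TryGetValue(endpoint, out old);
                    _proxies[endpoint] = factory(endpoint);
                }

                // Close the retired channel outside the lock; Abort() if it is faulted.
                var channel = old as ICommunicationObject;
                if (channel != null)
                {
                    try { channel.Close(); }
                    catch (CommunicationException) { channel.Abort(); }
                    catch (TimeoutException) { channel.Abort(); }
                }
            }

            // Called by the submitting thread: snapshot under the lock,
            // then pick the least-loaded service outside it.
            public IWorkService LeastLoaded()
            {
                List<IWorkService> snapshot;
                lock (_sync)
                {
                    snapshot = _proxies.Values.ToList();
                }
                // GetCurrentLoad() would normally return a value cached by the
                // periodic status thread rather than make a live service call here.
                return snapshot.OrderBy(p => p.GetCurrentLoad()).FirstOrDefault();
            }
        }

    Whether individual proxy calls also need the lock depends on the thread-safety guarantees of the generated channel; the conservative route is to take the snapshot under the lock and make the actual service calls outside it, as above.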

    Read the article

  • Mobile App Notifications in the Enterprise Space: UX Considerations

    - by ultan o'broin
    Here is a really super website of UX patterns for Android: Android Patterns. I was particularly interested in the event-driven notification patterns (aka status bar notifications to developers). Android - unlike iOS (i.e., the iPhone) - offers a superior centralized notifications system for users. (Figure copyright Android Patterns) Research in the enterprise applications space shows that users on the go prefer this approach, as: Users can manage their notification alerts centrally, across all media, apps and device activity, and decide the order in which to deal with them, and when. Notifications, unlike messages in a dialog or an information message in the UI, do not block a task flow (and we need to keep task completion to under three minutes). See the Anti-Patterns slideshare presentation on this blocking point too. These notifications must never interrupt a task flow by launching an activity from the background. Instead, the user can launch an activity from the notification. What users do need is the ability to filter this centralized approach, and to personalize the experience: which notifications are added, what the reminder is, the ability to turn them off, and so on. A related point concerning notifications: when they are used to provide users with a record of actions, you can lighten up on the lengthy confirmation messages that pop up (toasts in the Android world) when transactions or actions are sent for processing or into a workflow. Pretty much all the confirmation needs to say is that the action is successful, along with key data such as dollar amount, customer name, or whatever. I am a user of Android (Nexus S), BlackBerry (Curve), and iOS devices (iPhone 3GS and 4). In my opinion, the best notifications user experience for the enterprise user is offered by Android. BlackBerry's is good, but not as polished, and way clunkier than Android's. What you get on the iPhone, out of the box, is useless in the enterprise. Technorati Tags: Android,iPhone,Blackerry,messages,usablility,user assistance,userexperience,Oracle,patterns,notifications,alerts

    Read the article

  • How to learn to deliver quality software designs when working on a tight deadline?

    - by chester89
    I read many books about how to design great software, but I kind of struggle to come up with good design decisions when it comes to business apps, especially when the timeframe is tough. In the company I currently work for, the following situation happens all the time: my team lead tells me that there's a task to do, I call some guy or girl from the business who tells me exactly what it is they want, and then I start coding. The task always fits in some existing application (we do only web apps or web services); usually its purpose is to pull data from one data source and put it into another, with some business logic attached in the process. I start coding and then, after spending some time on a problem, my code doesn't work as expected - either because of a technical mistake or my lack of knowledge of the domain. The business is ringing me 2-3 times a day to hurry me up. I ask my team lead to help; he comes up, sees my code and goes like 'What's this?'. Then he throws away about half of my code, including all the design decisions I made, writes 2-3 methods that do the job (each of them usually 200-300 lines long or more, by the way), and the task is complete; the code works as it should have. The guy is smarter than me, obviously, and I'm aware of that. My goal is to be a better software developer; that means writing better code, not finishing the job quicker with some crappy code. And the thing is, when I have enough time to tackle a problem, I can come up with a design that is good (in my opinion, of course), but I fall short of doing so when I'm on a tight deadline. What should I do? I am fully aware that this is a rather vague explanation, but please bear with me.

    Read the article

  • Architecture for dashboard showing aggregated stats [on hold]

    - by soulnafein
    I'd like to know what common architectural patterns exist for the following problem. Web application A has information on sales, users, responsiveness score, etc. Some of this information is computationally intensive and/or has complex business logic (e.g. the responsiveness score). I'm building a separate application (B) for internal admin tasks that modifies data in web application A and reports on data from web application A. For writing I'm planning to use a RESTful API, e.g. create a new entity, update an entity, etc. In application B I'd like to show some graphs and other aggregate data for the previous 12 months. I'm planning to store the aggregate data for each month in Redis. Some data should update more often, e.g. every 10 minutes. I can think of 3 ways of doing this: 1. A scheduled task in app B that connects to an API of app A that provides some aggregated data; app B then stores it in Redis and uses that to render pages. Cons: it performs complex calculations within a web request and requires lots of work (API server and client, storage, etc.). Pros: business logic still lives in app A. 2. A scheduled task in app A that aggregates data in a non-web process and stores it directly in Redis to be accessed by app B. 3. A scheduled task in app A that aggregates data in a non-web process and uses an API in app B to save it. I'd like to know if there is a well-known architectural solution to this type of problem and, if not, what the other pros/cons are for the solutions I've suggested.
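    As a rough sketch of option 2 (a scheduled, non-web process in app A pushing the monthly aggregates straight into Redis), using the StackExchange.Redis client and invented key names:

        using System;
        using System.Collections.Generic;
        using StackExchange.Redis;

        public class MonthlyStatsPublisher
        {
            private readonly IDatabase _redis;

            public MonthlyStatsPublisher(string redisConnectionString)
            {
                // One multiplexer per process is the usual pattern.
                _redis = ConnectionMultiplexer.Connect(redisConnectionString).GetDatabase();
            }

            // Called from the scheduled job, e.g. every 10 minutes for the current month
            // and once a day for the trailing 12 months.
            public void Publish(DateTime month, IDictionary<string, double> aggregates)
            {
                // e.g. "stats:2014-07" -> { "sales": 1234.5, "responsiveness": 0.87, ... }
                string key = "stats:" + month.ToString("yyyy-MM");

                var entries = new List<HashEntry>();
                foreach (var pair in aggregates)
                {
                    entries.Add(new HashEntry(pair.Key, pair.Value));
                }

                _redis.HashSet(key, entries.ToArray());

                // Let stale months fall out on their own if the schedule ever stops.
                _redis.KeyExpire(key, TimeSpan.FromDays(400));
            }
        }

    App B then only reads hashes such as stats:2014-07 when rendering its graphs, so the heavy computation never happens inside a web request.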

    Read the article

  • Allowing connections initiated from outside

    - by Mark S. Rasmussen
    I've got an old Juniper SSG5 running ScreenOS 5.4.0r6.0. Once a day, more or less, it'll start randomly dropping packets at a rate of ~5-10%. We currently solve this issue by simply rebooting the unit, after which it resumes working in perfect condition. As this error has started appearing randomly, without any configuration or hardware changes, I'm assuming I've got an aging unit about to fail. As such, I've got a replacement SSG5 running ScreenOS 6.0. I've dumped the config on the 5.4 and imported it into a clean 6.0, and it seems to gladly accept it, and all my configuration seems to be A-OK. However, upon connecting the new unit, all outside-initiated connections seem to be blocked. If I browse our external IP from the inside, everything works perfectly, and it's not just port 80, SSH, Crashplan - all of our policies route correctly. All normal networking, initiated from the inside, work perfectly as well. If on the other hand I browse our external IP from the outside, everything is blocked. Barring differences between ScreenOS 5.4 and 6.0, the config is identical. Is there a setting somewhere that defines whether outside/inside initiated connections are allowed? unset key protection enable set clock timezone 1 set vrouter trust-vr sharable set vrouter "untrust-vr" exit set vrouter "trust-vr" unset auto-route-export exit set service "MyVOIP_UDP4569" protocol udp src-port 0-65535 dst-port 4569-4569 set service "MyVOIP_TCP22" protocol tcp src-port 0-65535 dst-port 22-22 set service "MyRDP" protocol tcp src-port 0-65535 dst-port 3389-3389 set service "MyRsync" protocol tcp src-port 0-65535 dst-port 873-873 set service "NZ_FTP" protocol tcp src-port 0-65535 dst-port 40000-41000 set service "NZ_FTP" + tcp src-port 0-65535 dst-port 21-21 set service "PPTP-VPN" protocol 47 src-port 2048-2048 dst-port 2048-2048 set service "PPTP-VPN" + tcp src-port 1024-65535 dst-port 1723-1723 set service "NZ_FMS_1935" protocol tcp src-port 0-65535 dst-port 1935-1935 set service "NZ_FMS_1935" + udp src-port 0-65535 dst-port 1935-1935 set service "NZ_FMS_8080" protocol tcp src-port 0-65535 dst-port 8080-8080 set service "CrashPlan Server" protocol tcp src-port 0-65535 dst-port 4280-4280 set service "CrashPlan Console" protocol tcp src-port 0-65535 dst-port 4282-4282 unset alg sip enable set alg appleichat enable unset alg appleichat re-assembly enable set alg sctp enable set auth-server "Local" id 0 set auth-server "Local" server-name "Local" set auth default auth server "Local" set auth radius accounting port 1646 set admin name "netscreen" set admin password "XXX" set admin auth web timeout 10 set admin auth dial-in timeout 3 set admin auth server "Local" set admin format dos set vip multi-port set zone "Trust" vrouter "trust-vr" set zone "Untrust" vrouter "trust-vr" set zone "DMZ" vrouter "trust-vr" set zone "VLAN" vrouter "trust-vr" set zone "Untrust-Tun" vrouter "trust-vr" set zone "Trust" tcp-rst set zone "Untrust" block unset zone "Untrust" tcp-rst set zone "MGT" block unset zone "V1-Trust" tcp-rst unset zone "V1-Untrust" tcp-rst set zone "DMZ" tcp-rst unset zone "V1-DMZ" tcp-rst unset zone "VLAN" tcp-rst set zone "Untrust" screen tear-drop set zone "Untrust" screen syn-flood set zone "Untrust" screen ping-death set zone "Untrust" screen ip-filter-src set zone "Untrust" screen land set zone "V1-Untrust" screen tear-drop set zone "V1-Untrust" screen syn-flood set zone "V1-Untrust" screen ping-death set zone "V1-Untrust" screen ip-filter-src set zone "V1-Untrust" screen land set interface ethernet0/0 
phy full 100mb set interface ethernet0/3 phy full 100mb set interface ethernet0/4 phy full 100mb set interface ethernet0/5 phy full 100mb set interface ethernet0/6 phy full 100mb set interface "ethernet0/0" zone "Untrust" set interface "ethernet0/1" zone "Null" set interface "bgroup0" zone "Trust" set interface "bgroup1" zone "Trust" set interface "bgroup2" zone "Trust" set interface bgroup2 port ethernet0/2 set interface bgroup0 port ethernet0/3 set interface bgroup0 port ethernet0/4 set interface bgroup1 port ethernet0/5 set interface bgroup1 port ethernet0/6 unset interface vlan1 ip set interface ethernet0/0 ip 215.173.182.18/29 set interface ethernet0/0 route set interface bgroup0 ip 192.168.1.1/24 set interface bgroup0 nat set interface bgroup1 ip 192.168.2.1/24 set interface bgroup1 nat set interface bgroup2 ip 192.168.3.1/24 set interface bgroup2 nat set interface ethernet0/0 gateway 215.173.182.17 unset interface vlan1 bypass-others-ipsec unset interface vlan1 bypass-non-ip set interface ethernet0/0 ip manageable set interface bgroup0 ip manageable set interface bgroup1 ip manageable set interface bgroup2 ip manageable set interface bgroup0 manage mtrace unset interface bgroup1 manage ssh unset interface bgroup1 manage telnet unset interface bgroup1 manage snmp unset interface bgroup1 manage ssl unset interface bgroup1 manage web unset interface bgroup2 manage ssh unset interface bgroup2 manage telnet unset interface bgroup2 manage snmp unset interface bgroup2 manage ssl unset interface bgroup2 manage web set interface ethernet0/0 vip 215.173.182.19 2048 "PPTP-VPN" 192.168.1.131 set interface ethernet0/0 vip 215.173.182.19 + 4280 "CrashPlan Server" 192.168.1.131 set interface ethernet0/0 vip 215.173.182.19 + 4282 "CrashPlan Console" 192.168.1.131 set interface ethernet0/0 vip 215.173.182.22 22 "MyVOIP_TCP22" 192.168.2.127 set interface ethernet0/0 vip 215.173.182.22 + 4569 "MyVOIP_UDP4569" 192.168.2.127 set interface ethernet0/0 vip 215.173.182.22 + 3389 "MyRDP" 192.168.2.202 set interface ethernet0/0 vip 215.173.182.22 + 873 "MyRsync" 192.168.2.201 set interface ethernet0/0 vip 215.173.182.22 + 80 "HTTP" 192.168.2.202 set interface ethernet0/0 vip 215.173.182.22 + 2048 "PPTP-VPN" 192.168.2.201 set interface ethernet0/0 vip 215.173.182.22 + 8080 "NZ_FMS_8080" 192.168.2.216 set interface ethernet0/0 vip 215.173.182.22 + 1935 "NZ_FMS_1935" 192.168.2.216 set interface bgroup0 dhcp server service set interface bgroup1 dhcp server service set interface bgroup2 dhcp server service set interface bgroup0 dhcp server auto set interface bgroup1 dhcp server auto set interface bgroup2 dhcp server auto set interface bgroup0 dhcp server option domainname companyalan set interface bgroup0 dhcp server option dns1 192.168.1.131 set interface bgroup1 dhcp server option domainname companyblan set interface bgroup1 dhcp server option dns1 192.168.2.202 set interface bgroup2 dhcp server option dns1 8.8.8.8 set interface bgroup2 dhcp server option wins1 8.8.4.4 set interface bgroup0 dhcp server ip 192.168.1.2 to 192.168.1.116 set interface bgroup1 dhcp server ip 192.168.2.2 to 192.168.2.116 set interface bgroup2 dhcp server ip 192.168.3.2 to 192.168.3.126 unset interface bgroup0 dhcp server config next-server-ip unset interface bgroup1 dhcp server config next-server-ip unset interface bgroup2 dhcp server config next-server-ip set interface "ethernet0/0" mip 215.173.182.21 host 192.168.2.202 netmask 255.255.255.255 vr "trust-vr" set interface "serial0/0" modem settings "USR" init "AT&F" set interface 
"serial0/0" modem settings "USR" active set interface "serial0/0" modem speed 115200 set interface "serial0/0" modem retry 3 set interface "serial0/0" modem interval 10 set interface "serial0/0" modem idle-time 10 set flow tcp-mss unset flow tcp-syn-check unset flow tcp-syn-bit-check set flow reverse-route clear-text prefer set flow reverse-route tunnel always set pki authority default scep mode "auto" set pki x509 default cert-path partial set pki x509 dn name "[email protected]" set dns host dns1 0.0.0.0 set dns host dns2 0.0.0.0 set dns host dns3 0.0.0.0 set address "Trust" "192.168.1.0/24" 192.168.1.0 255.255.255.0 set address "Trust" "192.168.2.0/24" 192.168.2.0 255.255.255.0 set address "Trust" "192.168.3.0/24" 192.168.3.0 255.255.255.0 set crypto-policy exit set ike respond-bad-spi 1 set ike ikev2 ike-sa-soft-lifetime 60 unset ike ikeid-enumeration unset ike dos-protection unset ipsec access-session enable set ipsec access-session maximum 5000 set ipsec access-session upper-threshold 0 set ipsec access-session lower-threshold 0 set ipsec access-session dead-p2-sa-timeout 0 unset ipsec access-session log-error unset ipsec access-session info-exch-connected unset ipsec access-session use-error-log set vrouter "untrust-vr" exit set vrouter "trust-vr" exit set l2tp default ppp-auth chap set url protocol websense exit set policy id 1 from "Trust" to "Untrust" "Any" "Any" "ANY" permit set policy id 1 exit set policy id 2 from "Untrust" to "Trust" "Any" "VIP(215.173.182.19)" "PPTP-VPN" permit traffic set policy id 2 exit set policy id 3 from "Untrust" to "Trust" "Any" "VIP(215.173.182.22)" "HTTP" permit log set policy id 3 set service "MyRDP" set service "MyRsync" set service "MyVOIP_TCP22" set service "MyVOIP_UDP4569" exit set policy id 6 from "Trust" to "Trust" "192.168.1.0/24" "192.168.2.0/24" "ANY" deny set policy id 6 exit set policy id 7 from "Trust" to "Trust" "192.168.2.0/24" "192.168.1.0/24" "ANY" deny set policy id 7 exit set policy id 8 from "Trust" to "Trust" "192.168.3.0/24" "192.168.1.0/24" "ANY" deny set policy id 8 exit set policy id 9 from "Trust" to "Trust" "192.168.3.0/24" "192.168.2.0/24" "ANY" deny set policy id 9 exit set policy id 10 from "Untrust" to "Trust" "Any" "MIP(215.173.182.21)" "NZ_FTP" permit set policy id 10 exit set policy id 11 from "Untrust" to "Trust" "Any" "VIP(215.173.182.22)" "PPTP-VPN" permit set policy id 11 exit set policy id 12 from "Untrust" to "Trust" "Any" "VIP(215.173.182.22)" "NZ_FMS_1935" permit set policy id 12 set service "NZ_FMS_8080" exit set policy id 13 from "Untrust" to "Trust" "Any" "VIP(215.173.182.19)" "CrashPlan Console" permit set policy id 13 set service "CrashPlan Server" exit set nsmgmt bulkcli reboot-timeout 60 set ssh version v2 set config lock timeout 5 unset license-key auto-update set telnet client enable set snmp port listen 161 set snmp port trap 162 set vrouter "untrust-vr" exit set vrouter "trust-vr" unset add-default-route exit set vrouter "untrust-vr" exit set vrouter "trust-vr" exit Note that I've previously posted a similar question (pertaining to the same device & replacement, but ultimately caused by a malfunctioning switch, and thus clouding the current issue): Outbound traffic being blocked for MIP/VIPped servers (Juniper SSG5)

    Read the article

  • Cannot integrate Gallio MBUnit with Team City

    - by Bernard Larouche
    I have been trying to get my MBUnit test suite to work on Team City for many days now without any success. My solution builds no problem. The problem is with my tests. After googling for Gallio integration with Team City I tried many ways to make this thing work and I think I am close but need help. I have included the Gallio bin directory in my repository and also on my TC server. Here is my build runner set up in Team City : Build runner : MSBuild Build file path : Myproject.msbuild Targets : RebuildSolution RunTests Here is the Myproject.msbuild file I created and included in the Source control trunk directory : <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- This is needed by MSBuild to locate the Gallio task --> <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" /> <!-- Specify the tests assemblies --> <ItemGroup> <TestAssemblies Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" /> </ItemGroup> <Target Name="RunTests"> <Gallio IgnoreFailures="false" Assemblies="@(TestAssemblies)" RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration"> <!-- This tells MSBuild to store the output value of the task's ExitCode property into the project's ExitCode property --> <Output TaskParameter="ExitCode" PropertyName="ExitCode"/> </Gallio> <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" /> </Target> <Target Name="RebuildSolution"> <Message Text="Starting to Build"/> <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" /> </Target> </Project> Here are the errors displayed by Team City : error MSB4064: The "Assemblies" parameter is not supported by the "Gallio" task. Verify the parameter exists on the task, and it is a settable public instance property error MSB4063: The "Gallio" task could not be initialized with its input parameters. Thanks for your help
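    A likely cause, given the MSB4064 message, is a mismatch between the Gallio.MSBuildTasks.dll being loaded and the parameter names the build file uses: in later Gallio releases the MSBuild task takes its test assemblies through a Files parameter rather than Assemblies. This is a hedged sketch rather than a confirmed fix - verify the property name against the Gallio version actually sitting in C:\Gallio\bin before committing it:

        <Target Name="RunTests">
          <!-- Assumption: the referenced Gallio build exposes "Files" instead of "Assemblies" -->
          <Gallio IgnoreFailures="false"
                  Files="@(TestAssemblies)"
                  RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration">
            <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
          </Gallio>
          <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
        </Target>

    If both parameter names fail, inspecting the task assembly's public properties (for example in a decompiler) will show exactly what the loaded version expects.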

    Read the article

  • How do I copy all referenced jars of an eclipse project using ant4eclipse?

    - by interstellar
    I've tried like this (link): <taskdef resource="net/sf/antcontrib/antlib.xml" /> <taskdef resource="net/sf/ant4eclipse/antlib.xml" /> <target name="copy_jars"> <getEclipseClasspath workspace="${basedir}/.." projectname="MyProject" property="classpath" relative="false" runtime="true" pathseparator="#" /> <!-- iterate over all classpath entries --> <foreach list="${classpath}" delimiter="#" target="copy_jar_file" param="classpath.entry" /> </target> <target name="copy_jar_file"> <!-- check if current is a .jar-file ... --> <if> <isfileselected file="${classpath.entry}"> <filename name="**/*.jar" /> </isfileselected> <then> <!-- copy the jar file to a destination directory --> <copy file="${classpath.entry}" tofile="${dest.dir}"/> </then> </if> </target> But I get the exception: [getEclipseClasspath] net.sf.ant4eclipse.model.FileParserException: Could not parse plugin project 'E:\...\MyProject' since it contains neither a Bundle-Manifest nor a plugin.xml! [getEclipseClasspath] at net.sf.ant4eclipse.model.pdesupport.plugin.PluginDescriptorParser.parseEclipseProject(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.model.pdesupport.plugin.PluginProjectRoleIdentifier.applyRole(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.model.roles.RoleIdentifierRegistry.applyRoles(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.ProjectFactory.readProjectFromWorkspace(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.AbstractClasspathResolver.resolveEclipseClasspathEntry(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.AbstractClasspathResolver.resolveProjectClasspath(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.tools.resolver.ProjectClasspathResolver.resolveProjectClasspath(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.ant.task.project.GetEclipseClassPathTask.resolvePath(Unknown Source) [getEclipseClasspath] at net.sf.ant4eclipse.ant.task.project.AbstractGetProjectPathTask.execute(Unknown Source) [getEclipseClasspath] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) [getEclipseClasspath] at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) [getEclipseClasspath] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) [getEclipseClasspath] at java.lang.reflect.Method.invoke(Method.java:597) [getEclipseClasspath] at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) [getEclipseClasspath] at org.apache.tools.ant.Task.perform(Task.java:348) [getEclipseClasspath] at org.apache.tools.ant.Target.execute(Target.java:357) [getEclipseClasspath] at org.apache.tools.ant.Target.performTasks(Target.java:385) [getEclipseClasspath] at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1329) [getEclipseClasspath] at org.apache.tools.ant.Project.executeTarget(Project.java:1298) [getEclipseClasspath] at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41) [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.EclipseDefaultExecutor.executeTargets(EclipseDefaultExecutor.java:32) [getEclipseClasspath] at org.apache.tools.ant.Project.executeTargets(Project.java:1181) [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:423) [getEclipseClasspath] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) BUILD FAILED E:\...\build.xml:132: Exception whilst resolving the classpath 
of project MyProject! Reason: Could not parse plugin project 'E:\...\MyProject' since it contains neither a Bundle-Manifest nor a plugin.xml! I want to copy just the jars, not the referenced projects. Is there a way to parameterize the getEclipseClasspath task so it only gets the jars, not the projects?
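    For what it's worth, the stack trace shows the exception is thrown while getEclipseClasspath resolves the workspace, before the per-entry copy target ever runs, so filtering for *.jar in copy_jar_file cannot prevent it - a referenced project is being treated as a PDE plug-in project and fails to parse. Independent of that, the copy target has a small bug: tofile expects a file name, so copying into a directory should use todir. A minimal sketch of the corrected target (assuming ${dest.dir} points at an existing directory):

        <target name="copy_jar_file">
            <if>
                <isfileselected file="${classpath.entry}">
                    <filename name="**/*.jar" />
                </isfileselected>
                <then>
                    <!-- todir, not tofile: keep the original jar file name -->
                    <copy file="${classpath.entry}" todir="${dest.dir}" />
                </then>
            </if>
        </target>

    Whether getEclipseClasspath itself can be told to skip workspace projects depends on the ant4eclipse version in use, so that part is best checked against its own documentation rather than guessed at here.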

    Read the article

  • Converting to async,await using async targeting package

    - by e4rthdog
    I have this: private void BtnCheckClick(object sender, EventArgs e) { var a = txtLot.Text; var b = cmbMcu.SelectedItem.ToString(); var c = cmbLocn.SelectedItem.ToString(); btnCheck.BackColor = Color.Red; var task = Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c)); task.ContinueWith(t => { btnCheck.BackColor = Color.Transparent; lblDescriptionValue.Text = t.Result.Description; lblItemCodeValue.Text = t.Result.Code; lblQuantityValue.Text = t.Result.AvailableQuantity.ToString(); },TaskScheduler .FromCurrentSynchronizationContext() ); LotFocus(true); } and i followed J. Skeet's advice to move into async,await in my .NET 4.0 app. I converted into this: private async void BtnCheckClick(object sender, EventArgs e) { var a = txtLot.Text; var b = cmbMcu.SelectedItem.ToString(); var c = cmbLocn.SelectedItem.ToString(); btnCheck.BackColor = Color.Red; JDEItemLotAvailability itm = await Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c)); btnCheck.BackColor = Color.Transparent; lblDescriptionValue.Text = itm.Description; lblItemCodeValue.Text = itm.Code; lblQuantityValue.Text = itm.AvailableQuantity.ToString(); LotFocus(true); } It works fine. What confuses me is that i could do it without using Task but just the method of my Dal. But that means that i must have modified my Dal method, which is something i dont want? I would appreciate if someone would explain to me in "plain" words if what i did is optimal or not and why. Thanks P.s. My dal method public bool CheckLotExistF41021(string _lot, string _mcu, string _locn) { using (OleDbConnection con = new OleDbConnection(this.conString)) { OleDbCommand cmd = new OleDbCommand(); cmd.CommandText = "select lilotn from proddta.f41021 " + "where lilotn = ? and trim(limcu) = ? and lilocn= ?"; cmd.Parameters.AddWithValue("@lotn", _lot); cmd.Parameters.AddWithValue("@mcu", _mcu); cmd.Parameters.AddWithValue("@locn", _locn); cmd.Connection = con; con.Open(); OleDbDataReader rdr = cmd.ExecuteReader(); bool _retval = rdr.HasRows; rdr.Close(); con.Close(); return _retval; } }
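    For the record, the rewritten version above does not require the Dal to change at all: Dal.GetLotAvailabilityF41021 is still a plain synchronous method, and await Task.Factory.StartNew(...) simply runs that blocking call on a thread-pool thread while the await puts the UI updates back on the UI thread - the same effect as the old ContinueWith(..., TaskScheduler.FromCurrentSynchronizationContext()). The Dal would only need modifying if the goal were a genuinely asynchronous data access call (no thread blocked at all) rather than a responsive UI. Below is a sketch of the same handler using the .NET 4.5 spelling Task.Run, with a try/finally added so the button colour is restored even if the query throws; on .NET 4.0 with the async targeting pack, keep Task.Factory.StartNew as in the original, since Task.Run here is an assumption about the available framework:

        // Sketch: Dal stays synchronous; the blocking call is pushed to the thread pool.
        private async void BtnCheckClick(object sender, EventArgs e)
        {
            var a = txtLot.Text;
            var b = cmbMcu.SelectedItem.ToString();
            var c = cmbLocn.SelectedItem.ToString();

            btnCheck.BackColor = Color.Red;
            try
            {
                // runs on a thread-pool thread; the UI thread stays responsive
                var itm = await Task.Run(() => Dal.GetLotAvailabilityF41021(a, b, c));

                // back on the UI thread here, so touching controls is safe
                lblDescriptionValue.Text = itm.Description;
                lblItemCodeValue.Text = itm.Code;
                lblQuantityValue.Text = itm.AvailableQuantity.ToString();
            }
            finally
            {
                btnCheck.BackColor = Color.Transparent;
                LotFocus(true);
            }
        }

    So what was done is reasonable; it trades a thread-pool thread for simplicity, which is usually fine for a WinForms lookup like this.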

    Read the article

  • drag and drop working funny when using variable draggables and droppables

    - by Lina
    Hi, i have some containers that contain some divs like: <div id="container1"> <div id="task1" onMouseOver="DragDrop("+1+");">&nbsp;</div> <div id="task2" onMouseOver="DragDrop("+2+");">&nbsp;</div> <div id="task3" onMouseOver="DragDrop("+3+");">&nbsp;</div> <div id="task4" onMouseOver="DragDrop("+4+");">&nbsp;</div> </div> <div id="container2"> <div id="task5" onMouseOver="DragDrop("+5+");">&nbsp;</div> <div id="task6" onMouseOver="DragDrop("+6+");">&nbsp;</div> </div> <div id="container3"> <div id="task7" onMouseOver="DragDrop("+7+");">&nbsp;</div> <div id="task8" onMouseOver="DragDrop("+8+");">&nbsp;</div> <div id="task9" onMouseOver="DragDrop("+9+");">&nbsp;</div> <div id="task10" onMouseOver="DragDrop("+10+");">&nbsp;</div> </div> i'm trying to drag tasks and drop them in one of the container divs, then reposition the dropped task so that it doesn't affect the other divs nor fall outside one of them and to do that i'm using the event onMouseOver to call the following function: function DragDrop(id) { $("#task" + id).draggable({ revert: 'invalid' }); for (var i = 0; i < nameList.length; i++) { $("#" + nameList[i]).droppable({ drop: function (ev, ui) { var pos = $("#task" + id).position(); if (pos.left <= 0) { $("#task" + id).css("left", "5px"); } else { var day = parseInt(parseInt(pos.left) / 42); var leftPos = (day * 42) + 5; $("#task" + id).css("left", "" + leftPos + "px"); } } }); } } where: nameList = [container1, container2, container3]; the drag is working fine, but the drop is not really, it's just a mess! any help please?? when i hardcode the id and the container, then it works beautifully, but as soon as i use id in drop then it begins to work funny! any suggestions??? thanks a million in advance Lina
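    A plausible reason it only misbehaves with variable ids is that draggable() and droppable() are re-initialised on every mouseover, so each container accumulates several drop handlers, each closing over a different id and repositioning a task that was not necessarily the one dropped. A sketch of the usual pattern - initialise once on page load and read the dropped element from ui.draggable inside the handler - assuming jQuery UI and hypothetical task / container CSS classes added to the divs in place of the inline onMouseOver attributes:

        // Sketch: initialise once; ui.draggable is whichever task was actually dropped.
        $(function () {
            $(".task").draggable({ revert: "invalid" });

            $(".container").droppable({
                drop: function (ev, ui) {
                    var $task = ui.draggable;
                    var pos = $task.position();
                    if (pos.left <= 0) {
                        $task.css("left", "5px");
                    } else {
                        var day = Math.floor(pos.left / 42);
                        $task.css("left", (day * 42 + 5) + "px");
                    }
                }
            });
        });

    The hardcoded version working correctly is consistent with this explanation, since it only ever binds one handler per element.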

    Read the article

  • Simple reminder for Android

    - by anta40
    I'm trying to make a simple timer. package com.anta40.reminder; import java.util.Timer; import java.util.TimerTask; import android.app.Activity; import android.os.Bundle; import android.widget.RadioGroup; import android.widget.TabHost; import android.widget.TextView; import android.widget.RadioGroup.OnCheckedChangeListener; import android.widget.TabHost.TabSpec; public class Reminder extends Activity{ public final int TIMER_DELAY = 1000; public final int TIMER_ONE_MINUTE = 60000; public final int TIMER_ONE_SECOND = 1000; Timer timer; TimerTask task; TextView tv; @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.main); timer = new Timer(); task = new TimerTask() { @Override public void run() { tv = (TextView) findViewById(R.id.textview1); tv.setText("BOOM!!!!"); tv.setVisibility(TextView.VISIBLE); try { this.wait(TIMER_DELAY); } catch (InterruptedException e){ } tv.setVisibility(TextView.INVISIBLE); } }; TabHost tabs=(TabHost)findViewById(R.id.tabhost); tabs.setup(); TabSpec spec = tabs.newTabSpec("tag1"); spec.setContent(R.id.tab1); spec.setIndicator("Clock"); tabs.addTab(spec); spec=tabs.newTabSpec("tag2"); spec.setContent(R.id.tab2); spec.setIndicator("Settings"); tabs.addTab(spec); tabs.setCurrentTab(0); RadioGroup rgroup = (RadioGroup) findViewById(R.id.rgroup); rgroup.setOnCheckedChangeListener(new OnCheckedChangeListener() { @Override public void onCheckedChanged(RadioGroup group, int checkedId) { if (checkedId == R.id.om){ timer.schedule(task, TIMER_DELAY, 3*TIMER_ONE_SECOND); } else if (checkedId == R.id.twm){ timer.schedule(task, TIMER_DELAY, 6*TIMER_ONE_SECOND); } else if (checkedId == R.id.thm){ timer.schedule(task, TIMER_DELAY, 9*TIMER_ONE_SECOND); } } }); } } Each time I click a radio button, the timer should start, right? But why it doesn't start?
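    Two things in this code are worth checking. First, a single TimerTask instance can only be scheduled once, so selecting a second radio button will throw an IllegalStateException instead of restarting the timer. Second, the task body touches the TextView and calls this.wait() from the Timer's background thread; UI widgets must be updated on the main thread, and wait() without owning the monitor throws IllegalMonitorStateException. A sketch of one way to restructure it inside the Activity (startReminder is a hypothetical helper; a Handler or AlarmManager would be the more idiomatic Android choice for reminders, but this stays close to the original code):

        // Sketch: fresh TimerTask per schedule(), UI work pushed onto the main thread.
        private void startReminder(long periodMillis) {
            timer.cancel();          // discard any previously scheduled task
            timer = new Timer();
            timer.schedule(new TimerTask() {
                @Override
                public void run() {
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            TextView tv = (TextView) findViewById(R.id.textview1);
                            tv.setText("BOOM!!!!");
                            tv.setVisibility(TextView.VISIBLE);
                        }
                    });
                }
            }, TIMER_DELAY, periodMillis);
        }

    Each branch of onCheckedChanged would then call startReminder(3 * TIMER_ONE_SECOND) and so on; hiding the text again after a delay is easiest with View.postDelayed rather than sleeping inside the task.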

    Read the article

  • ESRI frameworks: java vs javascript

    - by Luke
    I'm about to develop a web mapping application with ESRI products like ArcGIS Server and Image Server. I can't find a good comparison between the Java Web ADF and the JavaScript framework. They're of course different, because one is a full server-side environment and the other is client-side only, but the JavaScript option is much more concise and the barrier to getting started is minimal. Another problem is that the Java Web ADF is not compatible with our current application server (JBoss 4.2.2) and requires an old 4.0.2 version. Does anyone out there have experience that can help me? Many thanks.

    Read the article

  • DTracing TCP congestion control

    - by user12820842
    In a previous post, I showed how we can use DTrace to probe TCP receive and send window events. TCP receive and send windows are in effect both about flow-controlling how much data can be received - the receive window reflects how much data the local TCP is prepared to receive, while the send window simply reflects the size of the receive window of the peer TCP. Both then represent flow control as imposed by the receiver. However, consider that without the sender imposing flow control, and a slow link to a peer, TCP will simply fill up its window with sent segments. Dealing with multiple TCP implementations filling their peer TCP's receive windows in this manner, busy intermediate routers may drop some of these segments, leading to timeout and retransmission, which may again lead to drops. This is termed congestion, and TCP has multiple congestion control strategies. We can see that in this example, we need to have some way of adjusting how much data we send depending on how quickly we receive acknowledgement - if we get ACKs quickly, we can safely send more segments, but if acknowledgements come slowly, we should proceed with more caution. More generally, we need to implement flow control on the send side also. Slow Start and Congestion Avoidance From RFC2581, let's examine the relevant variables: "The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK). Another state variable, the slow start threshold (ssthresh), is used to determine whether the slow start or congestion avoidance algorithm is used to control data transmission" Slow start is used to probe the network's ability to handle transmission bursts both when a connection is first created and when retransmission timers fire. The latter case is important, as the fact that we have effectively lost TCP data acts as a motivator for re-probing how much data the network can handle from the sending TCP. The congestion window (cwnd) is initialized to a relatively small value, generally a low multiple of the sending maximum segment size. When slow start kicks in, we will only send that number of bytes before waiting for acknowledgement. When acknowledgements are received, the congestion window is increased in size until cwnd reaches the slow start threshold ssthresh value. For most congestion control algorithms the window increases exponentially under slow start, assuming we receive acknowledgements. We send 1 segment, receive an ACK, increase the cwnd by 1 MSS to 2*MSS, send 2 segments, receive 2 ACKs, increase the cwnd by 2*MSS to 4*MSS, send 4 segments etc. When the congestion window exceeds the slow start threshold, congestion avoidance is used instead of slow start. During congestion avoidance, the congestion window is generally updated by one MSS for each round-trip-time as opposed to each ACK, and so cwnd growth is linear instead of exponential (we may receive multiple ACKs within a single RTT). This continues until congestion is detected. If a retransmit timer fires, congestion is assumed and the ssthresh value is reset. It is reset to a fraction of the number of bytes outstanding (unacknowledged) in the network. At the same time the congestion window is reset to a single max segment size. Thus, we initiate slow start until we start receiving acknowledgements again, at which point we can eventually flip over to congestion avoidance when cwnd > ssthresh.
Congestion control algorithms differ most in how they handle the other indication of congestion - duplicate ACKs. A duplicate ACK is a strong indication that data has been lost, since they often come from a receiver explicitly asking for a retransmission. In some cases, a duplicate ACK may be generated at the receiver as a result of packets arriving out-of-order, so it is sensible to wait for multiple duplicate ACKs before assuming packet loss rather than out-of-order delivery. This is termed fast retransmit (i.e. retransmit without waiting for the retransmission timer to expire). Note that on Oracle Solaris 11, the congestion control method used can be customized. See here for more details. In general, 3 or more duplicate ACKs indicate packet loss and should trigger fast retransmit. It's best not to revert to slow start in this case, as the fact that the receiver knew it was missing data suggests it has received data with a higher sequence number, so we know traffic is still flowing. Falling back to slow start would be excessive therefore, so fast recovery is used instead. Observing slow start and congestion avoidance The following script counts TCP segments sent under slow start (cwnd < ssthresh) and under congestion avoidance (cwnd >= ssthresh). #!/usr/sbin/dtrace -s #pragma D option quiet tcp:::connect-request / start[args[1]->cs_cid] == 0 / { start[args[1]->cs_cid] = 1; } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd < args[3]->tcps_cwnd_ssthresh / { @c["Slow start", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } tcp:::send / start[args[1]->cs_cid] == 1 && args[3]->tcps_cwnd >= args[3]->tcps_cwnd_ssthresh / { @c["Congestion avoidance", args[2]->ip_daddr, args[4]->tcp_dport] = count(); } As we can see the script only works on connections initiated since it is started (using the start[] associative array with the connection ID as index to mark each new connection with start[cid] = 1). From there we simply differentiate send events where cwnd < ssthresh (slow start) from those where cwnd >= ssthresh (congestion avoidance). Here's the output taken when I accessed a YouTube video (where rport is 80) and from an FTP session where I put a large file onto a remote system. # dtrace -s tcp_slow_start.d ^C ALGORITHM RADDR RPORT #SEG Slow start 10.153.125.222 20 6 Slow start 138.3.237.7 80 14 Slow start 10.153.125.222 21 18 Congestion avoidance 10.153.125.222 20 1164 We see that in the case of the YouTube video, slow start was exclusively used. Most of the segments we sent in that case were likely ACKs. Compare this case - where 14 segments were sent using slow start - to the FTP case, where only 6 segments were sent before we switched to congestion avoidance for 1164 segments. In the case of the FTP session, the FTP data on port 20 was predominantly sent with congestion avoidance in operation, while the FTP control session on port 21 relied exclusively on slow start. For the default congestion control algorithm - "newreno" - on Solaris 11, slow start will increase the cwnd by 1 MSS for every acknowledgement received, and by 1 MSS for each RTT in congestion avoidance mode. Different pluggable congestion control algorithms operate slightly differently. For example "highspeed" will update the slow start cwnd by the number of bytes ACKed rather than the MSS. And to finish, here's a neat oneliner to visually display the distribution of congestion window values for all TCP connections to a given remote port using a quantization. In this example, only port 80 is in use and we see the majority of cwnd values for that port are in the 4096-8191 range.
# dtrace -n 'tcp:::send { @q[args[4]->tcp_dport] = quantize(args[3]->tcps_cwnd); }' dtrace: description 'tcp:::send ' matched 10 probes ^C 80 value ------------- Distribution ------------- count -1 | 0 0 |@@@@@@ 5 1 | 0 2 | 0 4 | 0 8 | 0 16 | 0 32 | 0 64 | 0 128 | 0 256 | 0 512 | 0 1024 | 0 2048 |@@@@@@@@@ 8 4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@ 23 8192 | 0

    Read the article

  • transforming binary data using ssis and sql server 2008

    - by Rick
    Hello All - I have a task to import/transform and extract zipped binary files that contain both text data as well as embedded binary data. Within the data is data that is relational in nature and needs to be processed into a defined database structure. Currently I have a C# single-threaded app that essentially grabs all the files from the directory (currently there are 13K files of varying sizes) and extracts the data on a single thread, line by line, inserting to the database. As you could imagine this is a very slow process and unacceptable. There are several different parsing routines used depending on the header record in the file. There are potentially up to a million rows per file when all the data is extracted to the row level of detail. The follow-on task is to parse those rows into their appropriate tables based on their content, i.e. the textual content has to be parsed further into "buckets" of like data in the database. That about sums up the big picture. Now for the problem task list. How do I iterate through a packet of data using SSIS? In the app the file is decompressed and then parsed using streams and byte arrays, and is routed to the required parsing routine based on the header data of each packet. There is bit swapping involved as well. Should I wrap up the app code into a script task (or tasks) and let it do the custom processing? The data is separated by year and the SQL Server tables are partitioned by year as well. I need to be able to "catch" bad file data as well and process it by hand most likely. Should I simply load the zipped file into SQL Server as a blob and parse the file with T-SQL? Would that be multi-threaded if done that way? Not sure how to do the parsing in T-SQL that is involved here. Which do you think would be faster? Potentially the data that is currently processed via files could come to us via a socket. Can SSIS collect that data in real time? How would I go about setting that up? Processing these new files from the directories will become a daily task. I can manage the data once I get it to SQL Server. Getting it there in a timely fashion seems to be the long pole in the tent for me. I would appreciate any comments or suggestions from the group. Rick
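    Whichever container ends up owning the work - an SSIS script task wrapping the existing C#, or the standalone app itself - the two changes most likely to help with this shape of workload are parsing several files in parallel and replacing line-by-line inserts with bulk loads into a staging table, with the "bucketing" done afterwards as set-based T-SQL. The following is a sketch of that pattern, not a drop-in solution: ParseFileToTable stands in for the existing header-driven parsing routines, and dbo.RawRows and the degree of parallelism are assumptions to tune:

        // Sketch: parallel parse + SqlBulkCopy into a staging table.
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;
        using System.Threading.Tasks;

        static class Loader
        {
            static void LoadDirectory(string sourceDir, string connectionString)
            {
                var files = Directory.EnumerateFiles(sourceDir, "*.zip");

                Parallel.ForEach(files, new ParallelOptions { MaxDegreeOfParallelism = 4 }, file =>
                {
                    DataTable rows = ParseFileToTable(file);   // decompress + parse, per existing routines

                    using (var bulk = new SqlBulkCopy(connectionString))
                    {
                        bulk.DestinationTableName = "dbo.RawRows";  // hypothetical staging table
                        bulk.BatchSize = 10000;
                        bulk.WriteToServer(rows);                   // set-based load, not row-by-row inserts
                    }
                });
            }

            static DataTable ParseFileToTable(string path)
            {
                // placeholder: existing decompression, bit swapping and parsing goes here
                throw new NotImplementedException();
            }
        }

    Files that fail to parse can be caught around ParseFileToTable and moved to a quarantine directory for hand processing, which also covers the "catch bad file data" requirement without stopping the batch.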

    Read the article

  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want is simple, but I cannot find any solution to my problem. I'm using playframework 1.2.3 and it's using Hibernate as JPA. So I think playframework has nothing to do with the problem. I have some classes (I omit the non-relevant fields) public class User { ... } public class Task { public DataContainer dataContainer; } public class DataContainer { public Session session; public User user; } public class Session { ... } So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers can always have the same User, but the Session has to be different for each instance. And the DataContainer of a Task also has to be different for each instance. A DataContainer can have a Session or not (it's optional). I use only unidirectional associations. It should be sufficient. In other words: Every Task must have one DataContainer. Every DataContainer must have one/the same User and can have one Session. To create a DB schema I use JPA annotations: @Entity public class User extends Model { ... } @Entity public class Task extends Model { @OneToOne(optional = false, cascade = CascadeType.ALL) public DataContainer dataContainer; } @Entity public class DataContainer extends Model { @OneToOne(optional = true, cascade = CascadeType.ALL) public Session session; @ManyToOne(optional = false, cascade = CascadeType.ALL) public User user; } @Entity public class Session extends Model { ... } BTW: Model is a Play class and provides the primary id as a long type. When I create an object for each entity and 'connect them' (I mean the associations), it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null, i.e. the foreign key (session_id) should be unset in the database. This would be okay, because it's optional. I don't know, I think I have multiple problems. Am I using the right annotation @OneToOne? I found on the internet some additional annotations and attributes: @JoinColumn and a mappedBy attribute for the inverse relationship. But I don't have them, because it's not bidirectional. Or is a bidirectional association essential? Another try was to use @OnDelete(action = OnDeleteAction.CASCADE) and the constraint changed from NO ACTION on update or delete to: ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id) REFERENCES annotation_session (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE; But in this case, when I delete a session, the DataContainer and User are deleted. That's wrong for me. EDIT: I'm using PostgreSQL 9, the JDBC stuff is included in Play, my only db config is db=postgres://app:app@localhost:5432/app
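    The constraint violation is expected with this mapping: the database cannot delete a Session row while some DataContainer row still points at it, and with a plain unidirectional @OneToOne, JPA will not null that reference out automatically. The cascades also point the wrong way for this model - CascadeType.ALL on DataContainer.user means removing a DataContainer removes the shared User, and the @OnDelete(CASCADE) experiment deletes the DataContainer (and, through its own cascades, more) when the Session goes, which matches what was observed. One way to handle it, sketched against Play 1.x model conventions (the finder call and method name are assumptions, not tested code):

        // Sketch: detach the optional reference first, then delete the Session.
        public static void deleteSession(Session session) {
            List<DataContainer> holders =
                DataContainer.find("session = ?1", session).fetch();
            for (DataContainer dc : holders) {
                dc.session = null;   // the association is optional, so null is fine
                dc.save();
            }
            session.delete();        // nothing references the row any more
        }

    Alongside that, narrowing the cascades (for example dropping CascadeType.ALL on DataContainer.user and DataContainer.session, or limiting them to PERSIST/MERGE) keeps deletes from propagating into shared entities; a bidirectional mapping is not required for this to work, it would just make finding the referring DataContainers cheaper.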

    Read the article

< Previous Page | 92 93 94 95 96 97 98 99 100 101 102 103  | Next Page >