Search Results

Search found 32862 results on 1315 pages for 'ms project'.


  • Best ASP.Net Host for Developers [closed]

    - by Tyler
    I need a host that allows me to set up subdomains, and support for hosting multiple domains is a huge plus. Required: ASP.NET 2.0/3.0/3.5, subdomain hosting, and MS SQL & MySQL databases. Wanted: multiple-domain hosting, ASP.NET 4.0, and the ability to connect directly to MS SQL using SQL Server Management Studio (SSMS).


  • Query performs poorly unless a temp table is used

    - by Paul McLoughlin
    The following query takes about one minute to run, and has the following IO statistics:

      SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
      FROM dbo.TRANS AS T
      JOIN dbo.TRANS AS T2
        ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
      JOIN TASK_REQUESTS AS T3
        ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
      GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

      (4447 row(s) affected)
      Table 'TRANSACTIONS'. Scan count 5977, logical reads 7527408, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'TASK_REQUESTS'. Scan count 1, logical reads 11, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      SQL Server Execution Times: CPU time = 58157 ms, elapsed time = 61437 ms.

    If I instead introduce a temporary table, the query returns quickly and performs far fewer logical reads:

      CREATE TABLE #MyTable(RGN VARCHAR(20) NOT NULL, CD VARCHAR(20) NOT NULL, PRIMARY KEY([RGN],[CD]));

      INSERT INTO #MyTable(RGN, CD)
      SELECT RGN, CD FROM TASK_REQUESTS WHERE TASK='UPDATE_MEM_BAL';

      SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
      FROM dbo.TRANS AS T
      JOIN dbo.TRANS AS T2
        ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
      JOIN #MyTable AS T3 ON T3.CD=T.CD AND T3.RGN=T.RGN
      GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT

      (4447 row(s) affected)
      Table 'Worktable'. Scan count 5974, logical reads 382339, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'TRANSACTIONS'. Scan count 4, logical reads 4547, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table '#MyTable________________________________________________________________000000000013'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      SQL Server Execution Times: CPU time = 1420 ms, elapsed time = 1515 ms.

    The interesting thing for me is that TASK_REQUESTS is a small table (3 rows at present) and its statistics are up to date. Any idea why such different execution plans and execution times would be occurring? And, ideally, how do I change things so that I don't need the temp table to get decent performance? The only real difference between the execution plans is that the temp-table version introduces an index spool (eager spool) operation.
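    A cheap first experiment in cases like this, where a tiny table joined into a large query yields a bad plan, is to have the statement re-optimized at execution time. This is only a sketch against the table names from the question, not a verified fix:

      -- Hypothetical variation of the original query; OPTION (RECOMPILE) asks the
      -- optimizer to build a fresh plan using current row counts, which sometimes
      -- recovers the cardinality estimate the temp table provides implicitly.
      SELECT T.RGN, T.CD, T.FUND_CD, T.TRDT, SUM(T2.UNITS) AS TotalUnits
      FROM dbo.TRANS AS T
      JOIN dbo.TRANS AS T2
        ON T2.RGN=T.RGN AND T2.CD=T.CD AND T2.FUND_CD=T.FUND_CD AND T2.TRDT<=T.TRDT
      JOIN TASK_REQUESTS AS T3
        ON T3.CD=T.CD AND T3.RGN=T.RGN AND T3.TASK = 'UPDATE_MEM_BAL'
      GROUP BY T.RGN, T.CD, T.FUND_CD, T.TRDT
      OPTION (RECOMPILE);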


  • Eclipse Debug Mode disrupting MSSQL Server 2005 Stored Procedure access

    - by Sathish
    We have a strange problem in our team. When a developer is using Eclipse in debug mode, MS SQL Server 2005 blocks other developers from accessing a stored procedure. A debug session typically involves opening a Hibernate session to persist an entity, which can call the stored procedure used for primary key generation. Debugging is done in business logic code and rarely in the JDBC stored procedure call itself. Is there any way to configure MS SQL Server or the stored procedure so that other developers are not blocked?
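    One mitigation often suggested for this kind of blocking is to enable row versioning on the development database, so that readers are not blocked by a suspended debug session's open transaction. This is a hedged sketch only ("DevDb" is a placeholder name), and note that it only helps blocked readers; concurrent writers will still queue behind each other's locks:

      -- Switch the dev database to snapshot-based read committed.
      -- Requires exclusive access to the database while the option is changed.
      ALTER DATABASE DevDb SET READ_COMMITTED_SNAPSHOT ON;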


  • How to optimize queries with fully qualified names in T-SQL?

    - by tomaszs
    When I call:

      SELECT * FROM Database.dbo.Table WHERE NAME = 'cat'

    it takes 200 ms. When I instead change the current database to Database in Management Studio and call it without the fully qualified name, it's much faster:

      SELECT * FROM Table WHERE NAME = 'cat'

    That takes 17 ms. Is there any way to make fully qualified queries faster without changing the current database?
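    One way to check whether the two forms are really compiling to different plans (rather than just benefiting from different cache states) is to run both and then inspect the plan cache. A sketch, assuming the sample names from the question:

      -- Look for separately cached plans for the two query forms.
      SELECT st.text, cp.cacheobjtype, cp.usecounts, cp.size_in_bytes
      FROM sys.dm_exec_cached_plans AS cp
      CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
      WHERE st.text LIKE '%NAME = ''cat''%';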


  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute code which first transfers data from CPU to GPU memory and back again. In spite of increasing the volume of data, the data transfer time remains the same, as if no data transfer is actually taking place. I am posting the code.

      #include <stdio.h>    /* Core input/output operations */
      #include <stdlib.h>   /* Conversions, random numbers, memory allocation, etc. */
      #include <math.h>     /* Common mathematical functions */
      #include <time.h>     /* Converting between various date/time formats */
      #include <cuda.h>     /* CUDA related stuff */
      #include <sys/time.h>

      __global__ void device_volume(float *x_d, float *y_d)
      {
          int index = blockIdx.x * blockDim.x + threadIdx.x;
      }

      int main(void)
      {
          float *x_h, *y_h, *x_d, *y_d, *z_h, *z_d;
          long long size = 9999999;
          long long nbytes = size * sizeof(float);
          timeval t1, t2;
          double et;

          x_h = (float*)malloc(nbytes);
          y_h = (float*)malloc(nbytes);
          z_h = (float*)malloc(nbytes);
          cudaMalloc((void **)&x_d, size * sizeof(float));
          cudaMalloc((void **)&y_d, size * sizeof(float));
          cudaMalloc((void **)&z_d, size * sizeof(float));

          gettimeofday(&t1, NULL);
          cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice);
          cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice);
          cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice);
          gettimeofday(&t2, NULL);
          et  = (t2.tv_sec - t1.tv_sec) * 1000.0;    // sec to ms
          et += (t2.tv_usec - t1.tv_usec) / 1000.0;  // us to ms
          printf("\n %ld\t\t%f\t\t", nbytes, et);
          et = 0.0;

          //printf("%f %d\n",seconds,CLOCKS_PER_SEC);
          // launch a kernel with a single thread to greet from the device
          //device_volume<<<1,1>>>(x_d,y_d);

          gettimeofday(&t1, NULL);
          cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost);
          cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost);
          cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost);
          gettimeofday(&t2, NULL);
          et  = (t2.tv_sec - t1.tv_sec) * 1000.0;    // sec to ms
          et += (t2.tv_usec - t1.tv_usec) / 1000.0;  // us to ms
          printf("%f\n", et);

          cudaFree(x_d);
          cudaFree(y_d);
          cudaFree(z_d);
          return 0;
      }

    Can anybody help me with this issue? Thanks


  • What is the best storage server infrastructure? DAS/NAS/SAN, or installing GlusterFS/Lustre/HDFS/RBDB?

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be a file-sharing/downloading project (like Rapidshare), so I need high storage capacity and good scalability, and I would add new storage nodes as the project grows. I have come up with a few solutions for my project, built on Lustre, GlusterFS, HDFS or RBDB. To start, I would have two servers: one running the GlusterFS client + web server + DB server + a streaming server, and the other acting as a GlusterFS storage node. (After some time I would add more node servers and client servers; I don't yet know how many.) So I am thinking of working with GlusterFS. But I really wonder whether I should use high-performance servers with large storage, or average/slower servers with large storage? Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server specifications (for both clients and nodes). I really don't know if I need a large amount of RAM and good CPUs for the nodes; I am sure I need them for the client servers. The files will be streamed as well, so automatic file replication is important; the system should work like a cloud, copying the most-demanded files to additional storage nodes under high traffic, which would take care of my scalability problems and let my visitors stream/download those files. I am also open to your experiences/thoughts about any good solution: Lustre, HDFS and RBDB are the other options, and I would be happy to hear your thoughts on those as well. I would be very happy to hear back from anyone on any point I have raised here. Thanks


  • Putting indexes in separate filegroup kills our queries

    - by womp
    Can anyone shed some light on this? On our dev boxes, our database resides entirely in the PRIMARY filegroup and everything works fine. On one of our production servers, recently upgraded from 2005 to 2008, we noticed it was performing slower than it should. On this machine there are two filegroups: PRIMARY and INDEXES. Both filegroups contain one file per logical volume, one logical volume per CPU (and each logical volume is a RAID 10 of 4 physical disks). We isolated a few queries that were fast on the dev boxes and slow (up to 40x slower) on the production machine. It turned out these queries were using the non-clustered indexes that resided in the INDEXES filegroup. Tweaking some of the queries to only use clustered indexes, which were in the PRIMARY filegroup, dropped their times back to normal. As a final confirmation, we redeployed the same database on the same machine with everything in PRIMARY, and things went back to normal! Here's the statistics output of one of the queries, run identically on the machine with the two filegroup configurations (table names changed to protect the innocent):

      FAST (everything in PRIMARY filegroup):

      (3 row(s) affected)
      Table '0'. Scan count 2, logical reads 14, ...
      Table '1'. Scan count 0, logical reads 0, ...
      Table '1'. Scan count 0, logical reads 0, ...
      Table '2'. Scan count 2, logical reads 7, ...
      Table '3'. Scan count 2, logical reads 1012, ...
      Table '4'. Scan count 1, logical reads 3, ...
      SQL Server Execution Times: CPU time = 437 ms, elapsed time = 445 ms.

      SLOW (indexes split into their own filegroup):

      (3 row(s) affected)
      Table '0'. Scan count 209, logical reads 428, ...
      Table '1'. Scan count 0, logical reads 0, ...
      Table '2'. Scan count 1021, logical reads 9043, ...
      Table '3'. Scan count 209, logical reads 105754, ...
      Table '4'. Scan count 0, logical reads 0, ...
      Table '5'. Scan count 1, logical reads 695, ...
      **Table '#46DA8CA9'. Scan count 205, logical reads 205, ...**
      Table '6'. Scan count 6, logical reads 436, ...
      Table '7'. Scan count 1, logical reads 12, ...
      SQL Server Execution Times: CPU time = 17581 ms, elapsed time = 17595 ms.

    Notice the weird temp table and the extra tables involved in the slow query. It seems clear that having a second filegroup is making SQL Server batty when choosing an execution plan. What the heck is going on?
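    As a side note, a quick way to confirm which filegroup each index actually landed on is to query the catalog views. A sketch using the standard SQL Server 2008 views (adjust the filtering for your schema):

      -- Map every index on user tables to the filegroup that stores it.
      SELECT OBJECT_NAME(i.object_id) AS table_name,
             i.name                   AS index_name,
             i.type_desc              AS index_type,
             ds.name                  AS filegroup_name
      FROM sys.indexes AS i
      JOIN sys.data_spaces AS ds
        ON i.data_space_id = ds.data_space_id
      WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
      ORDER BY table_name, index_name;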


  • Including email, IMs, configs, etc. in documentation or notes

    - by Jason Antman
    The shop I work in is pretty laid-back. We're on a documentation kick, only because historically we've been very bad at it. We do a lot of our brainstorming in face-to-face meetings, and also communicate heavily via IM in addition to email. While I'm usually pretty good about documentation and keeping copious lab notes, I just finished a build of a host and spent hours searching through IMs, emails, files on my workstation, etc. to pull out anything I had missed in my lab notes, which formed a large part of the basis for the internal documentation. Aside from manually saving things to a project directory, does anyone have any thoughts on managing various data sources (especially email and IM) and tracking them on a per-project basis? Ideally, I'd like an easy way to put copies of emails, IM logs, etc. into a project-specific directory on my workstation and then just have a cron job that syncs it up with a shared folder. This isn't really a candidate for anything more advanced, as the bulk of the data will be copies of configs, code, etc. Here are the big restrictions:

      • Email is via a centralized Zimbra install, so nothing can happen server-side.
      • My workstation is Linux.

    Aside from writing Pidgin and Thunderbird plugins that let me tag chats and emails as belonging to a project, and then copy them to the appropriate place... any thoughts? Suggestions? Thanks, Jason


  • Setup proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have one application (Java) running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, so I used the following configuration in httpd.conf:

      <Directory />
          #Options FollowSymLinks
          Options Indexes FollowSymLinks Includes ExecCGI
          AllowOverride All
          Order deny,allow
          Allow from all
      </Directory>

      Listen 57173

      LoadModule proxy_module modules/mod_proxy.so

      <VirtualHost *:9999>
          ProxyPreserveHost On
          ServerName project.play
          ProxyPass / http://127.0.0.1:9000/Login
          ProxyPassReverse / http://127.0.0.1:9000/Login
          LogLevel debug
      </VirtualHost>

      ServerName localhost:57173

    I changed my /private/etc/hosts to:

      ##
      # Host Database
      #
      # localhost is used to configure the loopback interface
      # when the system is booting. Do not change this entry.
      ##
      127.0.0.1       localhost
      255.255.255.255 broadcasthost
      ::1             localhost
      fe80::1%lo0     localhost
      127.0.0.1:9999  project.play

    and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I try accessing http://project.play:9999, Chrome returns "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much.
    P/S: When accessing localhost:9999 it returns "The server made a boo boo."


  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement of live data viewable to customers. The project's main points are:

      1. Devices in the field send data to the server (devices are also like website users) after login.
      2. A background import process imports the uploaded data into Postgres.
      3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
      4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices between the two servers. But again, we need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?


  • Why does running "$ sudo chmod -R 664 . " cause me to get access denied on all affected directories?

    - by Codemonkey
    I have a project folder with messy permissions on all files. I've had the bad tendency of setting everything to octal permissions 777 because it solved all non-security-related issues. Then FTP uploads, files created by text editors, etc. had their own sets of permissions, making everything a mess. I've decided to pull myself together and start using permissions the way they were meant to be used. I figured 664 was a good default for all my files and folders; I'd just remove permissions for others on private files and add +x for executable files. The second I changed my project folder to 664, however:

      $ sudo chmod -R 664 .
      $ ls
      ls: cannot open directory .: Permission denied

    This makes no sense to me. I have read/write permissions, and I'm the owner of the project folder. The leftmost part of ls -l in my project folder looks like this:

      -rw-rw-r--  1 codemonkey codemonkey ...
      drw-rw-r--  5 codemonkey codemonkey ...
      -rw-rw-r--  1 codemonkey codemonkey ...
      -rw-rw-r--  1 codemonkey codemonkey ...
      drw-rw-r--  3 codemonkey codemonkey ...
      -rw-rw-r--  1 codemonkey codemonkey ...
      -rw-rw-r--  1 codemonkey codemonkey ...
      -rw-rw-r--  1 codemonkey codemonkey ...
      drw-rw-r--  4 codemonkey codemonkey ...
      drw-rw-r--  5 codemonkey codemonkey ...

    I assume this has something to do with the permissions on the directories, but what?


  • Django apache + mod_wsgi with virtualenv

    - by ArgsKwargs
    I have some questions about running multiple Django sites on a VPS. I have a server that uses OpenPanel to automatically create VirtualHosts within apache2. My ideal situation is to have multiple virtualenvs with different dependencies installed, so the Python dist-packages directory isn't contaminated across different Django sites. For example:

      /home/user/virtualenv1
      /home/user/virtualenv2

    My Django applications reside at /var/www, for example:

      /var/www/djangosite1
      /var/www/djangosite2

    Now, I've read the OpenPanel docs and figured the best thing to do is create a django.conf file inside the mydomain.com.inc folder (/etc/apache2/openpanel.d/mydomain.com.inc/django.conf), which looks something like this:

      DocumentRoot /var/www/djangosite1/project
      WSGIScriptAlias / /var/www/djangosite1/project/wsgi.py
      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages
      <Directory /var/www/djangosite1/project>
          Order allow,deny
          Allow from all
      </Directory>
      Alias /static /var/www/djangosite1/project/static-root

    My problem is that this setup seems unable to find the virtualenv site-packages, and thus doesn't recognize any dependencies available in the given virtualenv. Also, commenting out this line doesn't seem to break or change a thing:

      WSGIDaemonProcess mydomain python-path=/home/user/virtualenv1/lib/python2.6/site-packages

    For example:

      > service apache2 start
      ImportError: No module named South

    When I install South outside the virtualenv, everything works.


  • Compile error C++: could not deduce template argument for 'T'

    - by OneShot
    I'm trying to read binary data to load structs back into memory so I can edit them and save them back to the .dat file. readVector() attempts to read the file and return the vectors that were serialized, but I'm getting this compile error when I try to build it. What am I doing wrong with my templates?

    ***** EDIT **************

    Code:

      // Project 5.cpp : main project file.
      #include "stdafx.h"
      #include <iostream>
      #include <fstream>
      #include <string>
      #include <vector>
      #include <algorithm>

      using namespace System;
      using namespace std;

      #pragma hdrstop

      int checkCommand(string line);
      template<typename T> void writeVector(ofstream &out, const vector<T> &vec);
      template<typename T> vector<T> readVector(ifstream &in);

      struct InventoryItem {
          string Item;
          string Description;
          int Quantity;
          int wholesaleCost;
          int retailCost;
          int dateAdded;
      };

      int main(void)
      {
          cout << "Welcome to the Inventory Manager extreme! [Version 1.0]" << endl;
          ifstream in("data.dat");
          vector<InventoryItem> structList;
          readVector<InventoryItem>( in );

          while (1) {
              string line = "";
              cout << endl;
              cout << "Commands: " << endl;
              cout << "1: Add a new record " << endl;
              cout << "2: Display a record " << endl;
              cout << "3: Edit a current record " << endl;
              cout << "4: Exit the program " << endl;
              cout << endl;
              cout << "Enter a command 1-4: ";
              getline(cin, line);

              int rValue = checkCommand(line);
              if (rValue == 1) {
                  cout << "You've entered a invalid command! Try Again." << endl;
              } else if (rValue == 2) {
                  cout << "Error calling command!" << endl;
              } else if (!rValue) {
                  break;
              }
          }

          system("pause");
          return 0;
      }

      int checkCommand(string line)
      {
          int intReturn = atoi(line.c_str());
          int status = 3;

          switch (intReturn) {
              case 1: break;
              case 2: break;
              case 3: break;
              case 4: status = 0; break;
              default: status = 1; break;
          }
          return status;
      }

      template<typename T>
      void writeVector(ofstream &out, const vector<T> &vec)
      {
          out << vec.size();
          for (vector<T>::const_iterator i = vec.begin(); i != vec.end(); i++) {
              out << *i;
          }
      }

      ostream& operator<<(std::ostream &strm, const InventoryItem &i)
      {
          return strm << i.Item << " (" << i.Description << ")";
      }

      template<typename T>
      vector<T> readVector(ifstream &in)
      {
          size_t size;
          in >> size;

          vector<T> vec;
          vec.reserve(size);

          for (int i = 0; i < size; i++) {
              T tmp;
              in >> tmp;
              vec.push_back(tmp);
          }
          return vec;
      }

    Compiler errors:

      1>------ Build started: Project: Project 5, Configuration: Debug Win32 ------
      1>Compiling...
      1>Project 5.cpp
      1>.\Project 5.cpp(124) : warning C4018: '<' : signed/unsigned mismatch
      1>        .\Project 5.cpp(40) : see reference to function template instantiation 'std::vector<_Ty> readVector<InventoryItem>(std::ifstream &)' being compiled
      1>        with [ _Ty=InventoryItem ]
      1>.\Project 5.cpp(127) : error C2679: binary '>>' : no operator found which takes a right-hand operand of type 'InventoryItem' (or there is no acceptable conversion)
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1144): could be 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,signed char *)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1146): or 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,signed char &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1148): or 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,unsigned char *)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1150): or 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,unsigned char &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(155): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::basic_istream<_Elem,_Traits> &(__cdecl *)(std::basic_istream<_Elem,_Traits> &))'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(161): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::basic_ios<_Elem,_Traits> &(__cdecl *)(std::basic_ios<_Elem,_Traits> &))'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(168): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::ios_base &(__cdecl *)(std::ios_base &))'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(175): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::_Bool &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(194): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(short &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(228): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(unsigned short &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(247): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(int &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(273): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(unsigned int &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(291): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(long &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(309): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(__w64 unsigned long &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(329): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(__int64 &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(348): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(unsigned __int64 &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(367): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(float &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(386): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(double &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(404): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(long double &)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(422): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(void *&)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(441): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::basic_streambuf<_Elem,_Traits> *)'
      1>        with [ _Elem=char, _Traits=std::char_traits<char> ]
      1>        while trying to match the argument list '(std::ifstream, InventoryItem)'
      1>Build log was saved at "file://c:\Users\Owner\Documents\Visual Studio 2008\Projects\Project 5\Project 5\Debug\BuildLog.htm"
      1>Project 5 - 1 error(s), 1 warning(s)
      ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Oh my god... I fixed that error I think and now I got another one. Will you PLEASE just help me on this one too! What the heck does this mean??


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and growing too big. Notably, this has been observed when one starts to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data being stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes to correct bugs which were found during this process. Furthermore, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:

      • Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
      • Brian Harry's blog post here describes the problem too.
      • This forum thread describes the problem with some solution hints.
      • Ravi Shanker's blog post here describes best practices on solving this (TBP).
      • Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner both to detect space problems and to rectify them.

    The problem can be divided into the following areas:

      1. Publishing of test results from builds
      2. Publishing of manual test results and their attachments in particular
      3. Publishing of deployment binaries for use during a test run
      4. Bugs in SQL Server preventing total cleanup of data

    (All the published data above is published into the TFS database as attachments.) The test results include all data collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries, which happens for automated test runs (including tests run during a build using code coverage, which will include all the files in the deployment folder), contributes a lot to the size of the attached data. In order to handle this systematically, I have set up a 3-stage process:

      1. Find out if you have a database space issue
      2. Set up your TFS server to minimize potential database issues
      3. If you have the problem, clean up the database and otherwise keep it clean

    Analyze the data

    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases, you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want, but be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple Project Collections, find out which might have problems.

    (Disclaimer: The queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.)

    Open a SQL Management Studio session onto the SQL Server where you have your TFS databases. Use the query below to find the Project Collection databases and their sizes, in descending size order.
      use master
      select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
      FROM sys.master_files
      where type=0
        and substring(db_name(database_id),1,4)='Tfs_'
        and DB_NAME(database_id)<>'Tfs_Configuration'
      order by size desc

    Doing this on one of our SQL servers gave results where it was pretty easy to see which collection to start the work on.

    Find out which tables are possibly too large

    Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case, the result (together with Grant's blog, from which we learnt that tbl_Content is in the version control category) showed that the only really big issue we have here is tbl_AttachmentContent.

    Find out which team projects have possibly too large attachments

    In order to use the TAC to find, and eventually delete, attachment data, we need to find out which team projects have these attachments; the team project is a required parameter to the TAC. Use the following query to find this, replacing the collection database name with whatever applies in your case:

      use Tfs_DefaultCollection
      select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
      from dbo.tbl_Attachment as a
      inner join tbl_testrun as tr on a.testrunid=tr.testrunid
      inner join tbl_project as p on p.projectid=tr.projectid
      group by p.projectname
      order by sum(a.compressedlength) desc

    In our case (some names removed), out of more than 100 team projects accumulated over quite some years, it was pretty obvious that the "Byggtjeneste – Projects" team project was the main one to take care of, with the ones on lines 2-4 as the next candidates.

    Check which attachment types take up the most space

    It can be useful to know which attachment types take up the space, so run the following query:

      use Tfs_DefaultCollection
      select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
      from dbo.tbl_Attachment as a
      inner join tbl_testrun as tr on a.testrunid=tr.testrunid
      inner join tbl_project as p on p.projectid=tr.projectid
      group by a.attachmenttype
      order by sum(a.compressedlength) desc

    Our result made it pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

    Check which file types, by their extension, take up the most space

    Run the following query:

      use Tfs_DefaultCollection
      select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension,
             sum(compressedlength)/1024 as SizeInKB
      from tbl_Attachment
      group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
      order by sum(compressedlength) desc

    Now you should have collected enough information to tell you what to do, if you have to do anything at all, and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

    Get your TFS server and environment properly set up

    Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into it is minimized. To do so, install the following set of updates and components. The assumption is that your TFS server is at SP1 level.

    Install the QFE for KB2608743, which also contains detailed instructions on its use; download it from here. The QFE changes the default settings to not upload deployed binaries, which are used in automated test runs.
    Binaries will still be uploaded if:

      • Code coverage is enabled in the test settings.
      • You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.

    The hotfix should be installed on:

      • The build servers (the build agents)
      • The machine hosting the Test Controller
      • Local development computers (Visual Studio)
      • Local test computers (MTM)

    It is not required on the TFS server, the test agents or the build controller; it has no effect on those programs.

    If you use SQL Server 2008 R2, you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to make sure it doesn't bite you, it is better to have this CU installed. There is no such CU for SQL Server 2008 pre-R2.

    Workaround: if you suspect hanging ghost files, they can be deduced (with some mental effort) from the ghost counters using the following SQL query:

      use master
      SELECT DB_NAME(database_id) as 'database',
             OBJECT_NAME(object_id) as 'objectname',
             index_type_desc, ghost_record_count, version_ghost_record_count,
             record_count, avg_record_size_in_bytes
      FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

    The problem is a stalled ghost cleanup process. The fix is to stop all components that depend on the SQL server, like the TFS Server and SPS services (that is, all applications that connect to the SQL server), then restart the SQL server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this, the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. R2 pre-SP1 and R2 SP1 have separate maintenance cycles and are maintained individually, each with its own set of CUs; when it comes I will add the link here to that CU. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data; the SQL Server can get into this hanging state (without the QFE) in certain cases because of this.

    And of course, install and set up the Test Attachment Cleaner command-line power tool. This should be done following some guidelines from Ravi Shanker:

      "When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) – this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."

    This rule minimizes the risk of the ghosted-hang problem occurring, and further makes it easier for the SQL server ghosting process to work smoothly.

      "Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"

    This is the last step in a 3-step process of removing SQL server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed using the shrink database command.

    Cleaning out the attachments

    The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI, including the team project collection, and also point at a specific team project. So, in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
    When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute-force technique; it works, but you need to take care when cleaning up. Grant has shown what his settings file looks like in his blog post: removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out all items, both in a one-time clean-up operation and in a general maintenance schedule. There are two scenarios we need to consider:

      1. Cleaning up an existing overgrown database
      2. Maintaining a server to avoid an overgrown database, using a scheduled TAC

    1. Cleaning up a database which has grown too big due to these attachments

    This job is a "once" job. We do it once and then move on to make sure it won't happen again, by taking the actions in (2) below. In this scenario you should only consider the large files. Your goal should be simply to reduce the size; don't bother about the smaller stuff, which can be left to a scheduled TAC cleanup (see 2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items; Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

      <!-- Scenario : Remove large files -->
      <DeletionCriteria>
        <TestRun />
        <Attachment>
          <SizeInMB GreaterThan="10" />
        </Attachment>
      </DeletionCriteria>

    Or like this, if you want to remove only dll's and pdb's above that size, by adding an Extensions section (without that section, all extensions will be deleted):

      <!-- Scenario : Remove large files of type dll's and pdb's -->
      <DeletionCriteria>
        <TestRun />
        <Attachment>
          <SizeInMB GreaterThan="10" />
          <Extensions>
            <Include value="dll" />
            <Include value="pdb" />
          </Extensions>
        </Attachment>
      </DeletionCriteria>

    Before you start up your scheduled maintenance, you should clear out all older items.

    2. Scheduled maintenance using the TAC

    Run a schedule every night that removes old items, and removes them in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with a restriction to keep attachments linked to active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted:

      <!-- Scenario : Remove old items except if they have active or resolved bugs -->
      <DeletionCriteria>
        <TestRun>
          <AgeInDays OlderThan="180" />
        </TestRun>
        <Attachment />
        <LinkedBugs>
          <Exclude state="Active" />
          <Exclude state="Resolved"/>
        </LinkedBugs>
      </DeletionCriteria>

    In my experience there are projects which are left with active or resolved work items although no further work is being done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step one: use the schedule above, with no LinkedBugs, as a sweeper task taking away all data older than you could possibly care about, then have another scheduled TAC task to take out, more specifically, attachments that you are not likely to use.
    This second task can be much more specific and, based on your analysis, clean out what you know is troublesome data:

      <!-- Scenario : Remove specific files early -->
      <DeletionCriteria>
        <TestRun >
          <AgeInDays OlderThan="30" />
        </TestRun>
        <Attachment>
          <SizeInMB GreaterThan="10" />
          <Extensions>
            <Include value="iTrace"/>
            <Include value="dll"/>
            <Include value="pdb"/>
            <Include value="wmv"/>
          </Extensions>
        </Attachment>
        <LinkedBugs>
          <Exclude state="Active" />
          <Exclude state="Resolved" />
        </LinkedBugs>
      </DeletionCriteria>

    The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact recognize any extension. To run the tool, use the following command:

      tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

    Shrinking the database

    You can run a shrink database command after the TAC has run, in cases where a lot of data has been deleted. In that case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database, run a DBCC SHRINKDATABASE command and then rebuild the indexes afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when needed.
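    As a rough illustration of that last step, here is a hedged sketch (the database and table names are placeholders, and ALTER INDEX ... REBUILD is used since, as noted above, reorganizing is not sufficient after a shrink):

      -- Reclaim space after a large TAC cleanup run.
      DBCC SHRINKDATABASE (Tfs_DefaultCollection);
      -- Rebuild the indexes the shrink operation fragmented, e.g. on the attachment tables:
      ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD;
      ALTER INDEX ALL ON dbo.tbl_AttachmentContent REBUILD;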


  • Building a database installer with WiX, datadude and Visual Studio 2010

    - by jamiet
    Today I have been using Windows Installer XML (WiX) to build an installer (.msi file) that installs a SQL Server database on a server of my choosing; the source code for that database lives in datadude (a tool which you may know by one of quite a few other names). The basis for this work was a most excellent blog post by Duke Kamstra entitled "Implementing a WIX installer that calls the GDR version of VSDBCMD.EXE", which covers the delicate intricacies of doing this, particularly how to call Vsdbcmd.exe in a CustomAction. Unfortunately there are a couple of things wrong with Duke's post:

      • Searching for "datadude wix" didn't turn it up in the first page of search results, and hence it took me a long time to find it, and I knew that it existed. If someone else were after a post on using WiX with datadude, it's likely that they would never have come across Duke's post, and that would be a great shame because it's the definitive post on the matter.
      • It was written in October 2009 and had not been updated for Visual Studio 2010.

    Well, this blog post is an attempt to solve those problems. Hopefully I've solved the first one just by following a few of my blogging SEO tips while writing this post; in the rest of it I will explain how I took Duke's code and updated it to work in Visual Studio 2010. If you need to build a database installer using WiX, datadude and Visual Studio 2010, then you still need to follow Duke's blog post, so go and do that now. Below are the amendments that I made, which enabled the project to be built in Visual Studio 2010:

      • In VS2010, datadude's output files have changed from being called Database.<suffix> to <ProjectName>_Database.<suffix>. Duke's code was referencing the old file name formats.
      • Duke used $(var.SolutionDir) and relative paths to point to datadude artefacts; I have replaced these with Votive project references (http://wix.sourceforge.net/manual-wix3/votive_project_references.htm).
      • I commented out all references to MicrosoftSqlTypesDbschema in DatabaseArtifacts.wxi. I don't think this is produced in VS2010 (I may be wrong about that, but it wasn't in the output from my project).
      • Similarly, I commented out the component MicrosoftSqlTypesDbschema in VsdbcmdArtifacts.wxi. It wasn't where Duke's code said it should have been, so I am assuming/hoping it isn't needed.
      • Duke's ?define block to work out the appropriate SrcArchPath wasn't actually working for me (i.e. <?if $(var.Platform)=x64 ?> was evaluating to false), so I just took out the conditional stuff and declared the path explicitly to the "Program Files (x86)" path. The old code is still there, though, if you need to put it back.
      • None of the <RegistrySearch> stuff is needed for VS2010, so I commented it all out!
      • Changed to use the /manifest option rather than the /model option on the vsdbcmd.exe command line. Personal preference is all!
      • Added a new component in order to bundle along the vsdbcmd.exe.config file.
      • Made the install of the Custom Action dependent on the relevant feature being selected for install. This one is actually really important: deselecting the database feature for installation does not, by default, stop the CustomAction from executing, and so would cause an error, so that scenario needs to be catered for.

    I have made my amended solution available for download at http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.zip. It contains two projects: the WiX project and the datadude project that is the source to be deployed (for demo purposes it only contains one table).
    I have also made the .msi available although, in order that it gets through file blockers, I changed the name from InstallMyDatabase.msi to InstallMyDatabase.ms_; simply rename the file back once you have downloaded it from http://cid-550f681dad532637.office.live.com/self.aspx/Public/BlogShare/20110210/InstallMyDatabase.ms%5E_. You can try it out for yourself: the only thing it does is dump the files into %Program Files%\MyDatabase and use them to install a database, onto a server of your choosing, with a name of your choosing, and with no damaging side effects. I will caveat this by saying "it works on my machine" and, not having access to a plethora of different machines, I haven't tested it anywhere else. One potential issue that I know of is that Vsdbcmd.exe has a dependency on SQL Server CE, although if you have SQL Server tools or Visual Studio installed you should be fine. Unfortunately it is not possible to bundle the SQL Server CE installer along in the .msi, because Windows will not allow you to call one installer from inside another; the recommended way around this problem is to build a bootstrapper to bundle the whole lot together, but doing that is outside the scope of this blog post. If you discover any other issues then please let me know. Screenshots of the installer, and of the result once installed, are in the original post. Hope this is useful! @jamiet


  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
    There are probably many ongoing debates about whether to use portlets or task flows in a WebCenter custom portal application. Usually the main battle in these debates is centered on which technology enables better performance. The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic, so I will not have to further that discussion. However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you avoid the "portlet skin mismatch" issue. The presence of the mismatch can be seen in the application's log:

      The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.

    Notice that, due to the mismatch, the portlet's CSS cannot be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of this post defines the portlet mismatch and covers some debugging tips that can help you solve it. Following that, I give a complete example of creating, using and sharing a shared skin in both a portlet producer and the consumer application.

    Portlet Mismatch Defined

    In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) tries to use the skin of the consumer page; this is called skin-sharing. When the producer cannot match the consumer skin, the portlet generates its own stylesheet and references it from its markup; this is called a mismatched skin. This can happen because:

      1. The consumer and producer use different versions of ADF Faces, or
      2. The consumer has additional skin-additions that the producer doesn't have, or vice versa, or
      3. The producer does not have the consumer skin.

    For cases (1) and (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer defaults to using the portlet skin. If there is a skin mismatch, there may be a performance hit because:

      • The browser needs to fetch this extra stylesheet (though it should be cached, unless expires caching is turned off)
      • The generated portlet markup uses uncompressed styles, resulting in larger markup

    It is often not obvious when a skin mismatch occurs, unless you look for one of these indicators:

      • Log messages in the producer log, for example:

        The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.

      • In the portlet markup inside the iframe, a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy):

        <link rel="stylesheet" charset="UTF-8" type="text/css" href="http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css...

      • Using an HTTP monitoring tool (e.g. Firebug, HttpWatch), you can see a request being made to the portlet stylesheet resource (see the URL above)

    There are a number of reasons for a mismatched skin. For skins to match, the producer and consumer must match on the following configurations:

      • The ADF Faces version (different versions may have different style selectors)
      • Style compression, which is defined in the web.xml (the default value is false, i.e. compression is ON)
      • Tonal styles or themes, also defined in the web.xml via context-params
      • The same skin additions (jars with skins) are available to both producer and consumer. Skin additions are defined in trinidad-skins.xml, using the <skin-addition> tags. These are aggregated from all the jar files on the classpath. If any jar exists on the producer but not the consumer, or vice versa, you get a mismatch.

    Debugging Tips

    Ensure the style compression and tonal styles/themes match on the consumer and producer by looking at the web.xml documents for the consumer and producer applications. It is a bit more involved to determine whether the jars match. However, you can enable Trinidad logging to show which skin-addition it is processing. To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST. For example, in the case of the WebLogic server used by JDeveloper:

      $JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml

    Add a new entry:

      <logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/>

    Restart WebLogic. Run the consumer page; you should see the following logging in both the consumer and producer log files. Any entries that don't match are the cause of the mismatch. The following is an example of what the log will produce with this setting:

      [SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList]
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml
      Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml

    The Complete Example

    The first step is to create the shared library. The WebCenter documentation covering this is located here in section 15.7. In addition, our ADF guru Frank Nimphius also covers this in his blog. Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and the consumer:

      1. Create a new Generic Application
         • Give an application name (i.e. MySharedSkin)
         • Give a project name (i.e. MySkinProject)
         • Leave Project Technologies blank (none selected), and click Finish
      2. Create the trinidad-skins.xml
         • Right-click on the MySkinProject node in the Application Navigator and select "New"
         • In the New Gallery, click on "General", select "File" from the Items, and click OK
         • In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF
         • In the trinidad-skins.xml, complete the skin entry,
for example: <?xml version="1.0" encoding="windows-1252" ?> <skins xmlns="http://myfaces.apache.org/trinidad/skin">   <skin>     <id>mysharedskin.desktop</id>     <family>mysharedskin</family>     <extends>fusionFx-v1.desktop</extends>     <style-sheet-name>css/mysharedskin.css</style-sheet-name>   </skin> </skins> Create CSS file In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New" In the New Gallery, select Web-Tier-> HTML, CSS File from the the Items and click OK In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css) Ensure that the Directory path is the under the META-INF (i.e. MySkinProject\src\META-INF\css) Once the new CSS opens in the editor, add in a style selector.  For example, this selector will style the background of a particular panelGroupLayout: af|panelGroupLayout.customPGL{     background-color:Fuchsia; } Create the MANIFEST.MF (used for deployment JAR) In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New" In the New Galley, click on "General", select "File" from the Items, and click OK In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is to MySkinProject\src\META-INF Complete the MANIFEST.MF, where the extension name is the shared library name Manifest-Version: 1.1 Created-By: Martin Deh Implementation-Title: mysharedskin Extension-Name: mysharedskin.lib.def Specification-Version: 1.0.1 Implementation-Version: 1.0.1 Implementation-Vendor: MartinDeh Create new Deployment Profile Right click on the MySkinProject node, and select New From the New Gallery, select General->Deployment Profiles, Shared Library JAR File from Items, and click OK In the Create Deployment Profile dialog, give name (i.e.mysharedskinlib) and click OK In the Edit JAR Deployment dialog, un-check Include Manifest File option  Select Project Output->Contributors, and check Project Source Path Select Project Output->Filters, ensure that all items under the META-INF folder are selected Click OK to exit the Project Properties dialog Deploy the shared lib to WebLogic (start server before steps) Right click on MySkin Project and select Deploy For this example, I will deploy to JDeverloper WLS In the Deploy dialog, select Deploy to Weblogic Application Server and click Next Choose IntegratedWebLogicServer and click Next Select Deploy to selected instances in the domain radio, select Default Server (note: server must be already started), and ensure Deploy as a shared Library radio is selected Click Finish Open the WebLogic console to see the deployed shared library The following are the steps to create a simple test Portlet Create a new WebCenter Portal - Portlet Producer Application In the Create Portlet Producer dialog, select default settings and click Finish Right click on the Portlets node and select New IIn the New Gallery, select Web-Tier->Portlets, Standards-based Java Portlet (JSR 286) and click OK In the General Portlet information dialog, give portlet name (i.e. 
MyPortlet) and click Next 2 times, stopping at Step 3 In the Content Types, select the "view" node, in the Implementation Method, select the Generate ADF-Faces JSPX radio and click Finish Once the portlet code is generated, open the view.jspx in the source editor Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code <af:form>         <af:panelGroupLayout id="pgl1" styleClass="customPGL">           <af:outputText value="background from shared lib skin" id="ot1"/>         </af:panelGroupLayout>  </af:form> Since this portlet is to use the shared library skin, in the generated trinidad-config.xml, remove both the skin-family tag and the skin-version tag In the Application Resources view, under Descriptors->META-INF, double-click to open the weblogic-application.xml Add a library reference to the shared skin library (note: the library-name must match the extension-name declared in the MANIFEST.MF):  <library-ref>     <library-name>mysharedskin.lib.def</library-name>  </library-ref> Notice that a reference to oracle.webcenter.skin exists.  This is important if this portlet is going to be consumed by a WebCenter Portal application.  If this tag is not present, the portlet skin mismatch will happen.  Configure the portlet for deployment Create Portlet deployment WAR Right click on the Portlets node and select New In the New Gallery, select Deployment Profiles, WAR file from Items and click OK In the Create Deployment Profile dialog, give name (i.e. myportletwar), click OK Keep all of the defaults, however, remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root, this will be needed to obtain the producer WSDL URL) Click OK, then OK again to exit from the Properties dialog Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR, within an EAR In the Application dropdown, select Deploy->New Deployment Profile... By default EAR File has been selected, click OK Give Deployment Profile (EAR) a name (i.e. MyPortletProducer) and click OK In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked Keep all of the other defaults and click OK For this demo, un-check the Auto Generate ..., and all of the Security Deployment Options, click OK Save All In the Application dropdown, select Deploy->MyPortletProducer In the Deployment Action, select Deploy to Application Server, click Next Choose IntegratedWebLogicServer and click Next Select Deploy to selected instances in the domain radio, select Default Server (note: server must be already started), and ensure Deploy as a standalone Application radio is selected The select deployment type (identifying the deployment as a JSR 286 portlet) dialog appears.  Keep default radio "Yes" selection and click OK Open the WebLogic console to see the deployed Portlet The last step is to create the test portlet consuming application.  This will be done using the OOTB WebCenter Portal - Framework Application.  Create the Portlet Producer Connection In the JDeveloper Deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root Open a browser and paste in the URL.  The Portlet information page should appear.  Click on the WSRP v2 WSDL link Copy the URL from the browser (i.e. 
http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL) In the Application Resources view, right click on the Connections folder and select New Connection->WSRP Connection Give the producer a name or accept the default, click Next Enter (paste in) the WSDL URL, click Next If connection to Portlet is succesful, Step 3 (Specify Additional ...) should appear.  Accept defaults and click Finish Add the portlet to a test page Open the home.jspx.  Note in the visual editor, the orange dashed border, which identifies the panelCustomizable tag. From the Application Resources. select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section.  A Confirm Portlet Type dialog appears, keep default ADF Rich Portlet and click OK Configure the portlet to use the shared skin library Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library.  See create portlet example above for the steps Since by default, the custom portal using a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml will need to be altered Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag, with mysharedskin (this is the skin-family named defined in the trinidad-skins.xml) Remove the skin-version tag Right click on the index.html to test the application   Notice that the JDeveloper log view does not have any reporting of a skin mismatch.  In addition, since I have configured the extra logging outlined in debugging section above, I can see the processed skin jar in both the producer and consumer logs: <SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml 
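For reference, here is a minimal sketch of what the consumer's trinidad-config.xml (under WEB-INF) might look like after these edits. This is an illustrative assumption; the exact file generated by your JDeveloper version may contain additional elements:

<?xml version="1.0" encoding="UTF-8"?>
<!-- trinidad-config.xml: the skin-family now names the shared skin directly,
     replacing the generated preferenceBean EL expression; the skin-version
     tag has been removed -->
<trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
  <skin-family>mysharedskin</skin-family>
</trinidad-config>

With this in place, both the producer and the consumer resolve the mysharedskin family from the same shared library, which is what prevents the skin mismatch.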

    Read the article

  • Enterprise Process Maps: A Process Picture worth a Million Words

    - by raul.goycoolea
Getting Started with Business Transformations

A well-known proverb states that "A picture is worth a thousand words." In relation to Business Process Management (BPM), a credible analyst might have a few questions. What if the picture was taken from some particular angle, like directly overhead? What if it was taken from only an inch away or a mile away? What if the photographer did not focus the camera correctly? Does the value of the picture depend on who is looking at it? Enterprise Process Maps are analogous in this sense of relative value. Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should begin with a clear understanding of the business environment, from the biggest-picture representations down to the lowest level required or desired for the particular project type, scope and objectives. The Enterprise Process Map serves as an entry point for the process architecture and is defined as the single highest level of process mapping for an organization. It is constructed and evaluated during the Strategy Phase of the Business Process Management Lifecycle. (see Figure 1)

Fig. 1: Business Process Management Lifecycle

Many organizations view such maps as visual abstractions, constructed for the single purpose of process categorization. This, in turn, results in a lesser focus on the inherent intricacies of the Enterprise Process view, which are explored in the course of this paper. With the main focus of a large-scale process documentation effort usually underlying an ERP or other system implementation, it is common for the work to be driven by the desire to "get to the details," and by the type of modeling that will derive near-term tangible results. For instance, a project in American Pharmaceutical Company X is driven by the Director of IT. With 120+ systems in place, and a lack of standardized processes across the United States, he and the VP of IT have decided to embark on a long-term ERP implementation. At the forefront of both of their minds are questions such as: How does my application architecture map to the business? What are each application's functionalities, and where do the business processes utilize them? Where can we retire legacy systems? Well-developed BPM methodologies prescribe numerous model types to capture such information and allow for thorough analysis in these areas. Process-to-application maps, Event Driven Process Chains, etc. provide this level of detail and facilitate the completion of such project-specific questions. These models and such analysis are appropriately carried out at a relatively low level of process detail. (see Figure 2)

Fig. 2: The Level Concept, Generic Process Hierarchy

Some of the questions remaining are ones of documentation longevity, the continuation of BPM practice in the organization, process governance and ownership, process transparency and clarity in business process objectives and strategy.

The Level Concept in Brief

Figure 2 shows a generic, four-level process hierarchy depicting the breakdown of a "Process Area" into progressively more detailed process classifications.
The number of levels and the names of these levels are flexible, and can be fitted to the standards of the organization's chosen terminology or any other chosen reference model that makes logical sense for both short- and long-term process description. It is at Level 1 (in this case the Process Area level) that the Enterprise Process Map is created. This map and its contained objects become the foundation for a top-down approach to subsequent mapping, object relationship development, and analysis of the organization's processes and its supporting infrastructure. Additionally, this picture serves as a communication device, at an executive level, describing the design of the business in its service to a customer. It seems, then, imperative that the process development effort, and this map, start off on the right foot. Figuring out just what that right foot is, however, is critical and trend-setting in an evolving organization.

Key Considerations

Enterprise Process Maps are usually not as living and breathing as other process maps. Just as it would be an extremely difficult task to change the foundation of the Sears Tower or a city plan for the entire city of Chicago, the Enterprise Process view of an organization usually remains unchanged once developed (unless, of course, an organization is at a stage where it is capable of true, high-level process innovation). Regardless, the Enterprise Process Map is a key first step, and one that must be taken in a precise way. What makes this groundwork solid depends not only on the materials used to construct it (process areas), but also on the layout plan and the knowledge base of what will be built (the entire process architecture). It seems reasonable that care and consideration are required to create this critical high-level map... but what are the important factors? Does the process modeler need to worry about how many process areas there are? About who is looking at it? Should he only use the color pink because it's his boss' favorite color? Interestingly, and perhaps surprisingly, these are all valid considerations that may just require a bit of structure. Below are three key factors to consider when building an Enterprise Process Map:
1. Company Strategic Focus
2. Process Categorization: Customer is Core
3. End-to-end versus Functional Processes

Company Strategic Focus

As mentioned above, the Enterprise Process Map is created during the Strategy Phase of the Business Process Management Lifecycle. From the Oracle Business Process Management methodology for business transformation, it is apparent that business processes exist for the purpose of achieving the strategic objectives of an organization. In a prescribed, top-down approach to process development, it must be ensured that each process fulfills its objectives and, in an aggregated manner, drives fulfillment of the strategic objectives of the company, whether for particular business segments or in a broader sense. This is a crucial point, as the strategic messages of the company must therefore resound in its process maps, in particular one that spans the processes of the complete business: the Enterprise Process Map. One simple example from Company X is shown below (see Figure 3).

Fig. 3: Company X Enterprise Process Map

In reviewing Company X's Enterprise Process Map, one can immediately begin to understand the general strategic mindset of the organization. It shows that Company X is focused on its customers, defining 10 of its process areas as belonging to customer-focused categories.
Additionally, the organization views these end-customer-oriented process areas as part of customer-fulfilling value chains, while support process areas do not provide as much contiguous value. However, by including both support and strategic process categorizations, it becomes apparent that all processes are considered vital to the success of the customer-oriented focus processes. Below is an example from Company Y (see Figure 4).

Fig. 4: Company Y Enterprise Process Map

Company Y, although also a customer-oriented company, sends a differently focused message with its depiction of the Enterprise Process Map. Along the top of the map is the company's product tree, overarching the process areas, which when executed deliver the products themselves. This indicates one strategic objective of excellence in product quality. Additionally, the view represents a less linear value chain, with strong overlaps of the various process areas. Marketing and quality management are seen as key support processes, as they span the process lifecycle. Often, companies may incorporate graphics, logos and symbols representing customers and suppliers, and other objects to truly send the strategic message to the business. Other times, Enterprise Process Maps may show high-level assignment of responsibility to organizational units, or the application types that support the process areas. It is possible that hundreds of formats and focuses can be applied to an Enterprise Process Map. What is of vital importance, however, is which formats and focuses are chosen to truly represent the direction of the company, and to serve as a driver for focusing the business on the strategic objectives it has set forth.

Process Categorization: Customer is Core

In the previous two examples, processes were grouped using differing categories and techniques. Company X showed one support and three customer process categorizations using encompassing chevron objects; Company Y achieved a less distinct categorization using a gradual color scheme. Either way, and in general, modeling of the process areas becomes even more valuable and easily understood within the context of business categorization, be it strategic or otherwise. But how a company categorizes its processes is typically more complex than simply choosing object shapes and colors. Previously, it was stated that the ideal is a prescribed top-down approach to developing processes, to make certain linkages all the way back up to corporate strategy. But what about external influences? What forces push and pull corporate strategy? Industry maturity, product lifecycle, market profitability, competition, etc. can all drive the critical success factors of a particular business segment, or the company as a whole, in addition to previous corporate strategy. This may seem to be turning into a discussion of theory, but that is far from the case. In fact, in years of recent study and evolution of the way businesses operate, cross-industry and across the globe, one invariable has surfaced with such strength as to make it undeniable in the game plan of any strategy fit for survival. That constant is the customer. Many of a company's critical success factors, in any business segment, relate to the customer: customer retention, satisfaction, loyalty, etc. Businesses serve customers, and so do a business's processes, mapped or unmapped. The most effective way to categorize processes is in a manner that visualizes convergence to what is core for a company.
It is the value chain, beginning with the customer in mind and ending with the fulfillment of that customer, that becomes the core or the centerpiece of the Enterprise Process Map. (See Figure 5)

Fig. 5: Company Z Enterprise Process Map

Company Z has what may be viewed as several different perspectives or "cuts" baked into its Enterprise Process Map. It has divided its processes into three main categories (top, middle, and bottom): Management Processes, the Core Value Chain and Supporting Processes. The Core category begins with Corporate Marketing (which contains the activities of beginning to engage customers) and ends with Customer Service Management. Within the value chain, this company has divided its processes into the focus areas of its two primary business lines, Foods and Beverages. Does this mean that areas such as Strategy, Information Management or Project Management are not as important as those in the Core category? No! In some cases, though, depending on the organization's understanding of high-level BPM concepts, the use of category names such as "Core," "Management" or "Support" can be a touchy subject. What is important to understand is that no matter the nomenclature chosen, the Core processes are those that drive directly to customer value, Support processes are those which make the Core processes possible to execute, and Management processes are those which steer and influence the Core. Some common terms for these three basic categorizations are Core, Customer Fulfillment, Customer Relationship Management, Governing, Controlling, Enabling, Support, etc.

End-to-end versus Functional Processes

Every high and low level of process: function, task, activity, process/work step (whatever an organization calls it), should add value to the flow of business in an organization. Suppose that within the process "Deliver package," there is a documented task titled "Stop for ice cream." It doesn't take a process expert to deduce the room for improvement. Though stopping for ice cream may create gain for the one person performing it, it likely benefits neither the organization nor, more importantly, the customer. In most cases, "Stop for ice cream" wouldn't make it past the first pass of To-Be process development. What would make the cut, however, would be a flow of tasks that, each having its own value add, build up to greater and greater levels of process objective. In this case, those tasks would combine to achieve a status of "package delivered." Figure 6 shows a simple example: just as the package cannot be delivered (the outcome of the process) without first being retrieved, loaded, and the travel destination reached (outcomes of the process steps), some higher level of process "Play Practical Joke" (e.g., main process or process area) cannot be completed until a package is delivered. It seems that isolated or functionally separated processes, such as "Deliver Package" (shown in Figure 6), are necessary, but are always part of a bigger value chain. Each of these individual processes must be analyzed within the context of that value chain in order to ensure successful end-to-end process performance. For example, this company's "Create Joke Package" process could be operating flawlessly and efficiently, but if a joke is never developed, it cannot be created, so the end-to-end process breaks.

Fig. 6: End to End Process Construction

That being recognized, it is clear that processes must be viewed as end-to-end, customer-to-customer, and in the context of company strategy.
But as can also be seen from the previous example, these vital end-to-end processes cannot be built without the functionally oriented building blocks. Without one, the other cannot be had, or at least not in a complete and organized fashion. As it turns out, though not discussed in depth here, the process modeling effort, BPM organizational development, and comprehensive coverage cannot be fully realized without a semi-functional, process-oriented approach. Thus, an Enterprise Process Map should be concerned with both views: the building blocks, and access points to the business-critical end-to-end processes which they construct. Without the functional building blocks, all streams of work needed for any business transformation would be lost in a mess of process disorganization. End-to-end views are essential for optimization in context, understanding customer impacts, baselining all project phases and aligning objectives. Including both views on an Enterprise Process Map allows management to understand the functional orientation of the company's processes, while still providing access to the end-to-end processes, which are most valuable to them. (See Figures 7 and 8)

Fig. 7: Simplified Enterprise Process Map with end-to-end Access Point

The above examples show two unique ways to achieve a successful Enterprise Process Map. The first example is a simple map that shows a high-level set of process areas and a separate section with the end-to-end processes of concern for the organization. This particular map is filtered to show just one vital end-to-end process for a project-specific focus.

Fig. 8: Detailed Enterprise Process Map showing connected Functional Processes

The second example shows a more complex arrangement and categorization of functional processes (the names of each process area have been removed). The end-to-end perspective is achieved at this level through the connections (interfaces at lower levels) between these functional process areas. An important point to note is that the organization of these two views of the Enterprise Process Map depends, in large part, on the orientation of its audience and the complexity of the landscape at the highest level. If both are not apparent, the Enterprise Process Map is missing an opportunity to serve as a holistic, high-level view.

Conclusion

In the world of BPM, and specifically regarding Enterprise Process Maps, a picture can be worth as many words as the thought and effort that is put into it. Enterprise Process Maps alone cannot change an organization, but they serve more purposes than initially meet the eye, and therefore must be designed in a way that enables a BPM mindset, business process understanding and business transformation efforts. Every Enterprise Process Map will and should be different when looking across organizations. Its design will be driven by company strategy, a level of customer focus, and functional versus end-to-end orientations. This high-level description of the considerations of Enterprise Process Maps is not a prescriptive "how to" guide. However, a company attempting to create one may not have the practical BPM experience to truly explore its options or the impacts on the coming work of business process transformation. The biggest takeaway is that process modeling, at all levels, is a science and an art, and art is open to interpretation. It is critical that the modeler of the highest level of process mapping be a cognoscente of the message he is delivering and the factors at hand.
Without sufficient focus on the design of the Enterprise Process Map, an entire BPM effort may suffer. For additional information, please see Oracle Business Process Management.

    Read the article

  • CodePlex Daily Summary for Tuesday, December 07, 2010

CodePlex Daily Summary for Tuesday, December 07, 2010Popular ReleasesMy Web Pages Starter Kit: 1.3.1 Production Release (Security HOTFIX): Due to a critical security issue, it's strongly advised to update the My Web Pages Starter Kit to this version. Possible attackers could misuse the image upload to transmit any type of file to the website. If you already have a running version of My Web Pages Starter Kit 1.3.0, you can just replace the ftb.imagegallery.aspx file in the root directory with the one attached to this release.ASP.NET MVC Project Awesome (jQuery Ajax helpers): 1.4: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager new stuff: popup WhiteSpaceFilterAttribute tested on mozilla, safari, chrome, opera, ie 9b/8/7/6nopCommerce. ASP.NET open source shopping cart: nopCommerce 1.90: To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).Aura: Aura Preview 1: Rewritten from scratch. This release supports getting color only from icon of foreground window.myCollections: Version 1.2: New in version 1.2: Big performance improvement. New Design (Added Outlook style View, New detail view, New Groub By...) Added Sort by Media Added Manage Movie Studio Zoom preference is now saved. Media name are now editable. Added Portuguese version You can now Hide details panel Add support for FLAC tags You can now imports books from BibTex Xml file BugFixingmytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) mytrip.mvc 1.0.49.0 beta src System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.4, MVC3 RC WARNING For run and debug mytrip.mvc 1.0.49.0 beta src download and ...Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported(where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element(all content is now placed in OverlayCanvas and is accessed by the new ...SubtitleTools: SubtitleTools 1.0: First public releaseMiniTwitter: 1.62: MiniTwitter 1.62 (Japanese release notes; text lost to encoding)Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targetted for stable daily use. With improved performance and enhanced compatibility with several latest PHP open source applications; it makes this release perfect replacement of your old PHP runtime. Changes made within this release include following and much more: Performance improvements based on real-world applications experience. We determined biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications.
Reimplemented nat...Chronos WPF: Chronos v2.0 Beta 3: Release notes: Updated introduction document. Updated Visual Studio 2010 Extension (vsix) package. Added horizontal scrolling to the main window TaskBar. Added new styles for ListView, ListViewItem, GridViewColumnHeader, ... Added a new WindowViewModel class (allowing to fetch data). Added a new Navigate method (with several overloads) to the NavigationViewModel class (protected). Reimplemented Task usage for the WorkspaceViewModel.OnDelete method. Removed the reflection effect...MDownloader: MDownloader-0.15.26.7024: Fixed updater; Fixed MegauploadDJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? Update to support jQuery 1.4.2 Update to support jQuery ui 1.8.6 Update to Visual Studio 2010 New WebControls with samples added Autocomplete WebControl Button WebControl ToggleButt WebControl The example web site is including in source code project.LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: - Additional Interior properties - Group / Ungroup methods for Range - Bugfix for COM reference handling of the Application object in some classes Release+Samples V0.7g: - Contains the runtime DLL and sample projects Sample projects: COMAddinExample - Demonstrates a version-independent bound COM add-in Example01 - Background colors and borders for cells Example02 - Font attributes and alignment for cells Example03 - Number formats Example04 - Shapes, WordArts, P...ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1 Added Windows Phone 7 build. New controls added: InfoWindow ChildPage (Windows Phone 7 only) See what's new here full details for : http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: Requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes Atomic CMS installation guide Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, Today we are releasing Visifire 3.6.5 beta with the following new feature: New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. Also this release includes few bug fixes: AxisXLabel label were getting clipped if angle was set for AxisLabels and ScrollingEnabled was not set in Chart. If LabelStyle property was set as 'Inside', size of the Pie was not proper. Yo...AI: Initial 0.0.1: It's simply just one code file; it simulates AI and machine in a simulated world. The AI has a little understanding of its body machine and parts, and able to use its feet to do actions just start and stop walking. The world is all of white with nothing but just the machine on a white planet. Colors, odors and position information make no sense.
I’m previous C# programmer and I’m learning F# during this project, although I’m still not a good F# programmer, in this project I learning to prog...NKinect: NKinect Preview: Build features: Accelerometer reading Motor serial number property Realtime image update Realtime depth calculation Export to PLY (On demand) Control motor LED Control Kinect tiltMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...New ProjectsAcorn: Little acorns lead to mighty oaks.Algorithmia: Algorithm and data-structure library for .NET 3.5 and up. Algorithmia contains sophisticated algorithms and data-structures like graphs, priority queues, command, undo-redo and more. Base Station Verification system: Base Station Verification systemBase Station Verification systemBase Station Verification systemBase Station Verification systemBase Station Verification systemBase Station Verification systemBase Station Verification systemBase Station Verification systemBase Station VerificatioBlueAd: Simple app to broadcast messages to bluetooth enabled devicesBuiltWith Fiddler Integration: Project Description BuiltWithFiddler adds BuildWith functionality to the HTTP Debugging Proxy Fiddler. It helps to determine the underlying technologies used in HTTP responses. www.builtwith.com www.fiddler2.com It is written in C# by Andy at Bare Web BVCMS.app: The Bellevue Church Management System is a complete Web-based application for managing your church. This iPhone app provides tools to connect to bvcms so that users can search, check-in members, and other actions.coffeeGreet: CoffeeGreet is a WordPress plug-in that will greet your visitors with coffee depending on the hour of the day, by displaying images using the Flickr API.DCEL data structure: Doubly-connected edge list data structure implementation in C#.El Bruno ClickOnce Demo: Demo de ClickOnce en CodePlexFiren's Laboratory: NothingFunCam: A fun application for playing with your webcam. Experiment with different overlays and exciting effects. Save the images when you want, or on a timer. Great fun for parties! (WPF/C#) Uses WPF Media Kit for webcam integration, and Shazzam for the great shader effects.GammaJul LgLcd: A .NET wrapper around the Logitech SDK for G15/G19 keyboard screens. Supports raw byte sending, GDI+ drawing and rendering WPF elements onto the screen.Getting Started CodePlex: This is a demo for using TFS in CodePlexGPUG (Dynamics GP User Group): The location for GPUG members to share code.HPMC: DemoImageOfMeLocator: Team Boarders Platform: WordPress Objectives: 1. Create a plugin for WordPress. 2. Create a plugin that allows users to browse images uploaded on their Flickr Account and use them as overlays for store locations on a large map. 3. Create a plugjDepot: jQuery ajax, jquery UI and ASP.NET MVC based online store application. This software will let a user manage their product inventory by exposiing CRUD operations through the UI. Customers can buy these products and track each shipment separately. 
It is developed in C#.JQuery Cycle Carousel for DotNetNuke®: DNN Module JQuery Cycle Carousel This module will show images as a carousel using the cycle JQuery plugin. You can easyly change Cycle effect and other settings in the module.Local Movie DB in C#: C# WPF project. Will create local movie database where users can create their own DB of the movies they own/seen/liked ... etcLocation Framework for Windows Phone 7 and Windows Azure: A framework to build location based applications with Windows Phone 7 and Windows Azure.OraLibs: Collection of useful PL/SQL procedures, which contain methods for working with arrays, strings, numbers, dates.Phyo: License managementRepositório de Monografias: O Repositório de Monografias terá como função: - Salvar em um repositório todas as monografias postadas no período pelos os alunos da FACISA/FCM/ESAC. - O administrador do sistema, fará uma avaliação de acordo com ABNT e retornará para o aluno as nescessárias correções.Secure SharePoint Silverlight Web Part - Silverlight Security & Auditing: The Secure Silverlight WebPart provides both builtin security using default SharePoint security mechanisms as well as site collection specific auditing to record an event a Silverlight file is newly hosted in the SharePoint environment. SilverlightColorPicker: Photoshop like ColorPicker built in silverlight from scratchSparrow.Net Connect: This is a passport system.Sparrow.NET TaskMe: TaskMe is a project management web application.Written using Sparrow.Net frameworkSQLiteWrapper: A light c# wrapper around the sqlite library's functionsSuperMarioBros.Net: A .Net Super Mario Bros clon.Virtualizing Tree View: Tree View for large amount of itemsWindows Forms GUI based Trace Listener: Gives a simple UI based Trace Listener to debug / Trace information . No need to look at EventLog / Xml file etc. This code Library helps you View the Trace and debug entries. Can plug in to your WinForms App as well.WP Socially Related: Automatically include related posts from Twitter, WordPress.com and Bing Search into each of your blog posts

    Read the article

  • CodePlex Daily Summary for Tuesday, May 18, 2010

    CodePlex Daily Summary for Tuesday, May 18, 2010New ProjectsCafeControl: Supports Remote LOGIN,Remote logout ,Account Creation ,Account LOGOUT ,Temporary Login ,SMS Reported and many other features Requires .net 4.0 Cloud & Contacts: Cloud Contacts makes it easier for share your contacts.Cow connect: Ziel des Projektes Cow connect, ist es ein Tool zu schrieben das Verschiedene Datenbanken, unterschiedlicher Herdenmanagement Tool z.b: Helm Multik...DNN Simple Blog: A simplified blog module that adheres to the DNN API and is designed for a single blog author per module instance. The module also makes use of Web...dotSpatial: dotSpatial is an open source project focused on developing a core set of GIS and mapping libraries that live together harmoniously in the System.Sp...Dynamic Survey Forms - SharePoint Web Part: Create manage dynamic survey forms as SharePoint web part. Record survey score for logged user or for someone else. This project has been designed ...EDXL Sharp: EDXLSharp is a C# / .NET 3.5 implementation of the OASIS Emergency Data Exchange Language (EDXL) family of standards. This set of libraries can be...EPiServer CMS 6 Visual Studio Project Template for VB w/ Public Templates: This is a Visual Studio 2008 Project Template with will allow the creation of a EPiServer CMS 6 project set up as and with all code in Visual Basic...Functional Command Toolbar for Windows: A floating window with a text box for typing functional command and executing it. The tool supports .NET addin, functional scripting and other feat...GameFX: Silverlight Game Development LibraryLightweight Fluent Workflow: ObjectFlow is a lightweight workflow engine for creating & executing workflows. The fluent interface makes it easy to define and understand workf...LinqSpecs: A toolset for use the specification pattern in linq queries.Money Watch: Personal Finances management system written in C#, NHibernate and SQL express.Multi-screen Viewer: This viewer allows to open and view pdf file (presentation) on multiple screens. There is no need to see the file in fullscreen on each screen (mon...neo-tsql: set of stored procedures and functions for sql serverNHTrace: NHTrace is a tool for tracing sql commands executed by NHibernate with syntax highlighting.NQueue: NQueue provides an enterprise level work scheduling and execution framework and toolset that contains no single point of failure. Using a farm of s...Online Cash Manager: Online Cash Manager based on ASP .NET MVC2 VS 2010 RTM MVC 2POCO Bridge: Bridging Silverlight and full .NET apps.REngine - game engine in Silverlight: REngine makes it easier for game developers to develop games in Microsoft Silverlight. RunAs Launcher: RunAs Launcher is a C# utility that provides a GUI for running applications under different credentials. It works in situations where the built-in ...secs4net: SECS-II/GEM/HSMS implementation on .NET. This library provide easy way to communicate with SEMI standard compatible device.SharePoint Admin Dashboard: SharePoint Dashboard for admins. Allows lightening fast multiple server management. RDP doesn't scale. Manage 10 servers easier than 1 with i...Silver spring: saltSocial Map: Social map is a social network based on geograpghical informationTweetZone: TweetZone is new type of twitter client application include DATABASE in it, and it shows you STATS. 
This Application's cache makes it faster to acc...Yet Another Database Viewer: Yet Another Database Viewer is developed for a basic database view and editing so you don't have to install anything. It's developed in c#.New Releases3FD - Framework For Fast Development: Alpha 1: The first test release. There is still some bugs, but it is functional. The garbage collector is showing memory leaking that must be corrected in t...Ajax Toolkit for ASP.NET MVC: MvcAjaxToolkit gridext with ContextMenu and Tmpl: MvcAjaxToolkit gridext with ContextMenu and Tmpl gridext is a extension for flexigridASP.NET MVC Extensions: SP1 Preview: SP1 Preview ========= 1. Autofac support added. 2. Changed Windsor Adapter. IWindsorInstaller is used instead of IModule.Book Cataloger: BookCataloger1.0.7a: New Features: Author editor form prototype Improvements: .NET Framework 4.0 required Input checking improved Comment edit loads and saves text...Braintree Client Library: Braintree-2.2.0: Prevent race condition when pulling back collection results -- search results represent the state of the data at the time the query was run Renam...CassiniDev - Cassini 3.5/4.0 Developers Edition: CassiniDev 3.5.1 and 4.0.1 beta 2: Documentation New in CassiniDev v3.5.1.0/v4.0.1.0 Added .Net 4 / VS10 build. Simplified test fixtures. Un-Refactored the not-so-simple MVP pa...dotNetTips: dotNetTips.Utility 3.5 R3: This is a new release (version 3.5.0.4) compatible with .NET 3.5. Requires SP1 if using the Entity Framework extensions. This is a minor update fro...Dynamic Survey Forms - SharePoint Web Part: Dynamic Survey forms for SharePoint. Alpha 1.0.1: Alpha release. Before running installer create database from script attached in zip file. In your web.config make sure your first connection strin...Event Scavenger: Viewer version 3.2.1: Added quick filters on event source and event id dialog boxes. Collector and Admin tool unaffected.Expression Blend Samples: Blend 4 Sketch Mockups Library: This library provides a set of commonly needed controls, icons and cursors to use in SketchFlow applications. After running the installer, create ...Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.3: Fluent Ribbon Control Suite 1.3(supports .NET 3.5 and .NET 4 RTM) Includes: Fluent.dll (with .pdb and .xml) Showcase Application Samples Found...Home Access Plus+: v4.2.2: Version 4.2.2 Change Log: Changes to how mycomputer handles NTFS permissions File Changes: ~/Bin/HAP.Web.dll ~/Bin/HAP.Web.pdbIdeaNMR: IdeaNMR Web App PreAlpha 0.1: This is the first release.IP Informer: Beta Release: V0.8.0.0 Beta.LinkedIn® for Windows Mobile: LinkedIn for Windows Mobile v0.9: Main updates for this release Fixed Status update. (updates where not correctly passed on to LinkedIn) Added landscape/GSensor support. (Currentl...LINQ to Twitter: LINQ to Twitter Beta v2.0.11: New items added since v1.1 include: Support for OAuth (via DotNetOpenAuth), secure communication via https, VB language support, serialization of ...LinqSpecs: Version 1.0 alpha: This is the alpha version of LinqSpecs.miniTodo: mini Todo version 0.2: - Changed the design to be transparency-centered; the window can now be moved by dragging the area that shows the item count, and right-clicking that area offers "Always on top" and "Manage all items" - Increased the graphs to three types: daily/weekly/monthly - Added animations on creating and completing an item; a sound also plays on completion - The Add button is hidden while no text is entered My Notepad: My Notepad (Beta): This is the Beta version of My Notepad. The software is stable enough for you to use.
Enjoy the flexibility of docking and also the all new Syntax ...NHTrace: NHTrace-45713: NHTrace-45713Nito.KitchenSink: Version 8: New features (since Version 5) Fixed null reference bug in ExceptionExtensions. Added DynamicStaticTypeMembers and RefOutArg for dynamically (lat...Nito.LINQ: Beta (v0.5): Rx version The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2521.102, released 2010-05-14. Breaking changes Corrected internal read-on...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.9: Work on UI templates for associative tables (2-column PK), using users/roles pages as an example. Added templates for Not-In-Group sql and cache-ba...patterns & practices - GAX Extensions Library: GEL for gax2010: This is the GEL for GAX 2010, support Visual Stuido 2010patterns & practices - Smart Client Guidance: Smart Client Software Factory 2010 Documentation: If the right-side pane of the chm file is not displayed correctly, do the following: 1) Download SCSF2010Guide.chm file. 2) Start the windows explo...patterns & practices - Windows Azure Guidance: WAAG - Part 1 - Release Candidate: "Release Candidate" for Part 1 of the Windows Azure Guide Highlights of this release are: Code samples complete. Fixed few bugs on "Dependency Ch...Rawr: Rawr 2.3.17: >Rawr3 Public Beta has been released! Click here for details.< - Lots of improvements to the default data files. There is a known issue with the s...RunAs Launcher: RunAs Launcher 1.2: This is the first version being released to CodePlex. Simply extract the file and run the executable. For those that wish to download the source c...Rx Contrib: V1.5: Bug fixsecs4net: Release 1.0: Notes: Runtime requirement: .Net framework 2.0 SP2 with System.Core(.NET 3.5), System.Threading(Rx for 3.5 SP1)SharePoint Admin Dashboard: SPDashboard v1.0: SPDashboard v1.0ShortURL Creator: ShortURL Creator 1.3.0.0: Added new provider u.nu and minimum UI changesStyleCop+: StyleCop+ 0.7: StyleCop+ is now fully compatible with StyleCop 4.4. The following entities were supported in Advanced Naming Rules: - Delegate - Event - Property...Value Injecter: an aspect oriented mapper: Value Injecter 1.2: ValueInjecter library, Sample solution that contains: web-forms sample project win-forms sample project unit tests samplesVCC: Latest build, v2.1.30517.0: Automatic drop of latest buildVCC: Latest build, v2.1.30517.1: Automatic drop of latest buildVidCoder: 0.4.1: Changes: Marks system as "working" to prevent computer from sleeping during an encode. CPU priority changed to BelowNormal during encodes. Enco...WSP Listener: WSP Listener version 2.0.0.0: This new version includes: All assemblies and required assets in one WSP Seperated code in library assembly Activate the WSP Listener with one...Yet Another Database Viewer: Beta: first release of the programYet another developer blog - Examples: Asynchronous TreeView in ASP.NET WebForms: This sample application shows how to use jQuery TreeView plugin for creating an asynchronous TreeView in ASP.NET WebForms. 
This application is acco...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryPHPExcelRawrBlogEngine.NETMicrosoft Biology FoundationCustomer Portal Accelerator for Microsoft Dynamics CRMWindows Azure Command-line Tools for PHP DevelopersGMap.NET - Great Maps for Windows Forms & PresentationCassiniDev - Cassini 3.5/4.0 Developers EditionDotNetZip Library

    Read the article

  • VS 2012 Code Review &ndash; Before Check In OR After Check In?

    - by Tarun Arora
“Is Code Review Important and Effective?”

There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:
- Bugs are found faster
- Developers are forced to write readable code (code that can be read without explanation or introduction!)
- Optimization methods/tricks/productive programs spread faster
- Programmers as specialists "evolve" faster
- It's fun

“Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections.” – Wikipedia

Nowhere does the definition mention whether it is better to review code before the code has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the two approaches independently.

Code Review Before Check In or Code Review After Check In?

Approach 1 – Code Review before Check In
The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, whether it will result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

Image 1 – code review before check in

Pros
- Everything that gets committed to source control is reviewed.
- Minimizes the chances of smelly code making its way into the code base.
- Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.

Cons
- Development code freeze – since the changes aren't in source control yet, further development can only be done offline.
- The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards. Inconsistent!
- Cumbersome to track the actual code review process. Not every change to the code base is worth reviewing; a lot of effort is invested for very little gain.

Approach 2 – Code Review after Check In
The developer checks in, and random code reviews are performed on the checked-in code.

Image 2 – Code review after check in

Pros
- The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure ZERO FxCop, StyleCop and static code analysis violations before check-in; the code is cleaner and smell-free even before the code review.
- No offline development; developers can continue to develop against source control.

Cons
- Bad code can easily make its way into the code base.
- Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

Approach 3 – Hybrid Approach
The community advocates a more hybrid approach: a blend of tooling and the human accountability quotient.

Image 3 – Hybrid Approach

1. Code review high-impact check-ins.
It is not possible to review everything; by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build either.

2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that, in my opinion, can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day. Mandate manual code review for individuals who keep making it onto this list of shame.

3. During Merge. I would prefer eliminating some of the other code issues during the merge from the Main branch to the release branch. In a scrum project this is still easier, because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put till the review is complete.

What do the experts say?

So I asked a few experts what they thought of a "code review quality gate before checking in code":

Terje Sandstrom | Microsoft ALM MVP
You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, and not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. Given a requirement that code is checked in in small pieces, 4-8 hours of work max, with AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release. But that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately this takes some time – it would be great to have it on CI builds, but…. – so it's done scheduled every night. Based on this we get, among other stuff, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

David V. Corbin | Visual Studio ALM Ranger
I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics).
- The "pre-checkin" review is usually for elements that may impact the project as a whole. Think of it as another "gate" along with passing unit tests.
- The "post-checkin" review may very well not be at the changeset level, but correlates to a review at the "user story" level. Again, this depends on the team dynamics in play….

Robert MacLean | Microsoft ALM MVP
I do not think there is one right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with review after check-in, while in high-risk areas you need to do it before check-in. An example: those new to a team, or juniors, need it much earlier (maybe that is before check-in, maybe that is soon after) than seniors who have shipped twenty sprints on the team.

Abhimanyu Singhal | Visual Studio ALM Ranger
It depends on the scenario. We recommend post-check-in reviews when:
1. We don't want to block other checks and processes on manual code reviews. Manual reviews take time, and some pieces may not require manual reviews at all.
2. We need to trace all changes and track history.
3. We have a code promotion strategy/process in place. For risk mitigation, post-check-in code can be promoted to Accepted branches, or it can be rejected.
Pre-check-in reviews are used when:
1. There is a high risk factor associated.
2. Reviewers generally (most of the time) have immediate availability.
3. The team does not have strict tracking needs.
Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

Thomas Schissler | Visual Studio ALM Ranger
This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well that I'd like to point out.
1.) If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will lead to a reduced check-in frequency of the devs, which I would not accept.
2.) If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in and the dev has maybe already fixed the issue; or you review the diff of the latest changeset of the PBI with the first, but then you might also review changes of other PBIs.

Jakob Leander | Sr. Director, Avanade
In my experience, manual code review:
1. Does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project)
2. When a project actually does it, they often do not do it right away = errors pile up
3. Requires a lot of time discussing/defining the standard and for the team to learn it
However, code review is very important, since e.g. even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach to code review:
- Architects up front do "at least one best practice example" of each type of component and tell the team: copy from this one. This should include error handling, logging, security etc.
- The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want. Once this is validated, it is unlikely to "go bad" even during later code changes.
- Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard.
This has HUUGE benefits:
- You can easily tweak it to reach the level you desire together with the customer
- It is easy to measure for both developers and management
- It is 100% consistent across the code base
- It gets validated all the time, so you never end up getting hammered by a customer review in the end
- It is easy to tell the developer that you do not want the code back unless it has zero errors = minimized communication
You need to track this at least during nightly builds and make sure the team sees the total number of issues. Do not allow the number of issues to grow uncontrolled. On the projects I run, I require code analysis to have run on the code before check-in (a check-in rule). This means:
- You have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds
- You can change a few of the rules to compile as errors instead of warnings. I often do this for "missing dispose" issues, which you REALLY do not want in your app
Tip: Place your custom CA rules files as part of the solution. That way it works when you do branching etc. (the path to the CA file is relative in VS).
Some may argue that CA is not as good as manual inspection. But since manual inspection in reality suffers from the 3 issues listed at the start, it is IMO a MUCH better (and much cheaper) approach from a helicopter perspective.

Tirthankar Dutta | Director, Avanade
I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire codebase … Also, especially on multi-site projects, one should strive to architect in a way that lets men manage the framework while boys write the repetitive code… it scales very well with the need to review less by containment and imposing architectural restrictions to emphasise the design.

Bruno Capuano | Microsoft ALM MVP
For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

David Jobling | Global Sr. Director, Avanade
Peer review is the only way to scale, and it's a great practice for all in the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others, and tune the attention.

Mikkel Toudal Kristiansen | Manager, Avanade
If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

Jesse Houwing | Visual Studio ALM Ranger
I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html
I'd like to add a few more points:
- Before/after check-in is mostly a trust issue. If you have a team that does diligent peer reviews and regularly talks, sits together or peer-programs, there's no need to enforce a before-check-in policy. The peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress.
- Under stress, enforce pre-check-in reviews. It might sound strange if you are already under time or budget constraints, but it is under exactly those conditions that most real issues get created or pile up.
- Use tools to catch the most common errors; Code Analysis/FxCop was already mentioned. HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis. I've written a few myself (http://fccopcontrib.codeplex.com), and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong.
- If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch.
So my philosophy:
- Use tooling as much as possible.
- Make sure the team understands the tooling and the importance of the things it flags. It's too easy to just click “suppress all” and ignore the warnings.
- Under stress, tighten the process; it's under stress that the problems of late reviews really surface.
- Most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released. It just becomes much more expensive to fix review findings the later you discover them.
---
I would love to hear what you think!
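To make the “missing dispose” rule discussed above concrete, here is a minimal C# sketch of the kind of issue that Visual Studio Code Analysis flags as CA2000 (“Dispose objects before losing scope”), together with the fix. The class and method names are illustrative only, not taken from any project mentioned in the thread.

using System.IO;

public static class ReportWriter
{
    // Code Analysis (CA2000) flags this: if WriteLine throws, the StreamWriter
    // and its underlying file handle are never released.
    public static void WriteLeaky(string path, string line)
    {
        StreamWriter writer = new StreamWriter(path);
        writer.WriteLine(line);
        writer.Close();
    }

    // The fix: 'using' guarantees Dispose() runs even when an exception occurs,
    // which is why teams often promote this rule from a warning to a build error.
    public static void WriteSafe(string path, string line)
    {
        using (StreamWriter writer = new StreamWriter(path))
        {
            writer.WriteLine(line);
        }
    }
}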


  • CodePlex Daily Summary for Tuesday, June 08, 2010

    CodePlex Daily Summary for Tuesday, June 08, 2010New ProjectsAD CMS: CMS software project still in its initial development and design stage.Animated panel: Animated Panel is a WPF control that supports animation of its content on resize. Can be used in item controls (ListBox for example) as ItemsPanelT...Anurag Pallaprolu's Code Repository: Hi there, this is Anurag P.'s public repository which contains most of c++ language examples and many command line(only) applications. Well , plea...atfas: atfasBibleNotes: A small application that uses BibleGateway to lookup scripture and add notes to themCarRental: How to Rent A Car.Food Innovation: This is the Food Innovation project.Generic Validation.NET: Generic Validation.NET is a flexible lightweight validation library for .NET, that can be used by any .NET project: ASP.NET Web Forms, ASP.NET MVC,...Komoi: Komoi is an app that will bring on a new form of web comic delivery.Liekhus Entity Framework XAF Extensions: Entity Framework extensions to support DevExpress eXpress Application Framework (XAF) code generation by Patrick Liekhus.Marketing: Desing Automation Marketing FlowMediaCoder.NET: MediaCoder.NET makes it easy for normal PC users to convert media files to other formats. It is developed in Visual Basic.NETMemetic NPC Behavior Toolkit: This is a library based on the NeverWinter Nights' Memetic AI Toolkit by William Bull. This is an attempt at creating a C# edition of this brillia...MeVisLab QT VR Export: -Mudbox: This project for personal test. Prog2: wi ss10 2010 PSAdmin: PSAdmin is a web based administration tool that allows the easy execution of Windows PowerShell scripts within your environment.RIA Services Essentials: The RIA Services Essentials project contains sample applications/extensions demonstrating using and extending WCF RIA Services v1.SCSM Service Request: The Service Request project defines a new work item class called 'Service Request' and the corresponding form for that work item class. It is a go...SFTP Component for .NET CSharp, VB.NET, and ASP.NET: The Ultimate SSH Secure File Transfer (SFTP) .NET Component offers a comprehensive interface for SFTP, enabling you to quickly and easily incorpora...shitcore: Application demonstrating how to turn a crappy application into something useful. Read more about the refactoring in http://blog.gauffin.com (sear...SystemCentered Operations Manager Reporting: SystemCentered Reporting give Microsoft System Center Operations Manager administrators an extended set of performance reports aimed towards all us...TokyoTyrantClient: makes it easier for c# developer to write code to connect the tokyo tyrant. it support: 1.utf-8 encode 2.tcpClient pool 3.rich setting about tc...Ultimate FTP Component for .NET C#, VB.NET and ASP.NET: Ultimate FTP is a 100%-managed .NET class library that adds powerful and comprehensive File Transfer capabilities to your .NET applications. WCF 4 Templates for Visual Studio 2010: WCF 4 templates for Visual Studio 2010 providing a scenario-driven starting point.XCube: XCube is a basic command line interface, with support for files, user accounts(only in the GUI), and variables(only in DevMode). It is developed in...New ReleasesAdd-ons for EPiServer Relate+: EPiXternal.RelatePlus.Properties 0.1.0.0 Alpha: This is the Alpha release of EPiXternal.RelatePlus.Properties. 
The download is in the form of an .epimodule file that you can install with EPiServe...Add-ons for EPiServer Relate+: EPiXternal.RelatePlus.WebControl 0.1.0.0 Alpha: This is the Alpha release of EPiXternal.RelatePlus.WebControls. The download is in the form of an .epimodule file that you can install with EPiServ...Animated panel: AnimatedPanel v1: First version of AnimatedPanel.Anurag Pallaprolu's Code Repository: C.L.O.S.E - V3: C.L.O.S.E - V3 Smaller than ever. More Useful than ever Run only CLOSEV3.exeAnurag Pallaprolu's Code Repository: FLTK - 1.3.X: The Fast Lightining Tool Kit is back. This is the FLTK 1.3.X Tar ballAnurag Pallaprolu's Code Repository: KBHIT Function Code: This is a sample to teach about kbhit()Browser Gallese: Browser 1.0.0.15: Continua l'era del browser opensource con una novità:ho impiantato il P2P online. Adesso il browser ha bisogno di Java. Se non lo avete,cliccate qu...CC.Hearts Screen Saver: CC.Hearts Screen Saver 1.0.10.607: This is the initial release of CC.Hearts Screen Saver. Marking as stable but limited testing at this point so feedback is greatly welcomed.Community Forums NNTP bridge: Community Forums NNTP Bridge V32: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...DotNetNuke® Skins: Default.css (beta): About The team has put together a cleaned up and optimized default.css file as a first step in moving toward more efficient CSS usage. Ideally, th...Dynamic Survey Forms - SharePoint Web Part: Code Fix 06-07-2010: Fix for editing existing forms Add: Requiered field option Add: Requiered field validation on submitFloe IRC Client: Floe 1.0 (2010-06): NOTE: You may have to uninstall your existing version for this installer to work properly. - Added /QUOTE command - Fixed bug where new message in...Folder Bookmarks: Folder Bookmarks 1.6.2.1: The latest version of Folder Bookmarks (1.6.2.1), with new Mini-Menu UI changes (1.3). Once you have extracted the file, do not delete any files/f...Food Innovation: Food Innovation 1.0: This is the V1.0 release.HERB.IQ: Alpha 0.1 Source code release 7: Alpha 0.1 Source code release 7 (skipped uploading 6)Liekhus Entity Framework XAF Extensions: Version 1.1.0: Initial project release. Ported the XAFDSL tool into the Entity Framework and made it work with the Visual Studio 2010 extensions.LogikBug's IoC Container: LogikBug's IoC Container v 1.1: In this release, I add the ability to extend the container using the LogikBug.Injection.Extensibility namespace.MediaCoder.NET: MediaCoder.NET v1.0 beta Source Code: Source code for MediaCoder.NET v1.0 beta it includes everything - also the installer.Memetic NPC Behavior Toolkit: Wandering Meme Test: This was a code spike to see the first custom meme in action. The first meme chosen was the "Wander" meme. This is a Visual Studio 2010 solution. ...Microsoft Silverlight Media Framework: Silverlight Media Framework v2 (RC1): This is the first release candidate for the Microsoft Silverlight Media Framework v2. Note: The IIS Smooth Streaming Player Development Kit assem...mwNSPECT: mwNSPECT Beta: mwNSPECT Mapwindow plugin dll. Place in your MapWindow or BASINS plugins directory. Presently for testing everything, though very much known issu...mwNSPECT: mwNSPECT Beta Installer: Simplistic mwNSPECT Mapwindow plugin installer using Inno setup. 
Installs all the files you'll need for NSPECT into the C:\NSPECT folder and insta...Near forums - ASP.NET MVC forum engine: Release 1: First release of the SEO friendly ASP.NET MVC forum engine.NLog - Advanced .NET Logging: Nightly Build 2010.06.07.001: Changes since the last build:2010-06-06 22:13:02 Jarek Kowalski Added unit tests for common target behaviors. 2010-06-06 19:36:44 Jarek Kowalski c...patterns & practices – Enterprise Library: Enterprise Library 5.0 - Dev Guide (RC): This is a Release Candidate of the Developer's Guide, C# EditionPSAdmin: 1.0.0.0: This is an alpha release of PSAdmin and should be tested before putting into a production environment. This package is pre-compiled and ready for ...Refix - .NET dependency management: Refix v0.1.0.59 ALPHA: Still a very early version. Functional changes: Added new pre (prebuild) and fix commands (rfx help pre and rfx help fix for explanations).SCSM Service Request: Service Request Management Pack v0.1: !This is an ALPHA release. Please use for testing purposes only.! The management pack is not sealed which means that when a new version of the Se...SFTP Component for .NET CSharp, VB.NET, and ASP.NET: SFTP WinForms Client: SFTP WinForms ClientSharePoint Feature - Version history list Export to Excel: Export Item List Version 1.1: - allows you to select columns to export - multilanguage support Czech, English - some bug fix Install: "C:\Program Files\Common Files\Microsoft S...SharePoint Outlook Connector: Source Code for Version 1.2.4.3: Source Code for Version 1.2.4.3SharePoint PowerRSS: v1.0: Easy/Clean way to get SharePoint list data via more standard RSS feed. I found CleanRSS.aspx as part of SPRSS: Enhanced RSS Functionality for WSS ...Smith Async .NET Memcached Client: Smith.Amc 0.7.3810.36347: Smith Async Memcached client release 0.7.3810.36347 available First public release available. All memcached operations has been implemented except...Star Trooper for XNA 2D Tutorial: Lesson five content: Here is Lesson five original content for the StarTrooper 2D XNA tutorial. The blog tutorial has now started over on http://xna-uk.net/blogs/darkge...Star Trooper for XNA 2D Tutorial: Lesson six content: Here is Lesson six original content for the StarTrooper 2D XNA tutorial. The blog tutorial has now started over on http://xna-uk.net/blogs/darkgen...Stripper: Stripper.exe version 0.1.0: Stripper Remove Diacritics and other unwanted caracters to fabric a more standardized file naming. Especially French caracter and maybe other lang...SystemCentered Operations Manager Reporting: SystemCentered Reports V1: Reports Windows Computer General Performance When troubleshooting performance problems there are typically a set of "go to" performance counters t...TFS Buddy: TFS Buddy Beta 1.1: Minor changes +Added repeat function in action tab to simplyfy creating actions +Added app manifest to make the exe require run as Admin ~How the I...Thumbnail creator and image resizer: ThumbnailCreator1.2.1: ThumbnailCreator1.2.1 added importing of namespaces to .vb(previously in web.config)TokyoTyrantClient: TokyoTyrantClient release: 该客户端有如下特点: 1.支持TcpClient连接池 2.支持UTF-8编码 3.支持初始化链接数,链接过期时间,最大空闲时间,最长工作时间等设置。Ultimate FTP Component for .NET C#, VB.NET and ASP.NET: Build 519: New Release Download setup package at: http://www.componentsoft.net/component/download/?name=UltimateFtp Product Home Page: http://www.componentsof...visinia: visinia_1.2: The new stable version of visinia cms is out, it is visinia 1.2. 
It has many new features like the admin is one more time is given a new look. the ...WCF 4 Templates for Visual Studio 2010: AnonymousOverHttp: This template generates a WCF service application that exposes a BasicHttpBinding endpoint with maxed message size and reader quotas to provide an ...WhiteMoon: WhiteMoon 0.2.10 Source: The Source code of WhiteMoon 0.2 build 10Most Popular ProjectsCommunity Forums NNTP bridgeASP.NET MVC Time PlannerMoonyDesk (windows desktop widgets)NeatUploadOutSyncViperWorks IgnitionAgUnit - Silverlight unit testing with ReSharperSmith Async .NET Memcached ClientASP.NET MVC ExtensionsAviva Solutions C# Coding GuidelinesMost Active ProjectsCommunity Forums NNTP bridgepatterns & practices – Enterprise LibraryRawrjQuery Library for SharePoint Web ServicesNB_Store - Free DotNetNuke Ecommerce Catalog ModuleGMap.NET - Great Maps for Windows Forms & PresentationN2 CMSStyleCopsmark C# LibraryBlogEngine.NET


  • Simple Excel Export with EPPlus

    - by Jesse Taber
Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/10/30/simple-excel-export-with-epplus.aspx

Anyone I’ve ever met who works with an application that sits in front of a lot of data loves it when they can get that data exported to an Excel file to mess around with offline. As both developer and end user of a little website project that I’ve been working on, I found myself wanting to get a bunch of the data that the application was collecting into an Excel file. The great thing about being both an end user and a developer on a project is that you can build the features that you really want! While putting this feature together I came across the fantastic EPPlus library. This library is certainly very well known and popular, but I was so impressed with it that I thought it was worth a quick blog post. This library is extremely powerful; it lets you create and manipulate Excel 2007/2010 spreadsheets in .NET code with a high degree of flexibility. My only gripe with the project is that they are not touting how insanely easy it is to build a basic Excel workbook from a simple data source. If I were running this project, the approach I’m about to demonstrate in this post would be front and center on the landing page, because it shows how easy it really is to get started and serves as a good way to ease yourself into some of the more advanced features.

The website in question uses RavenDB, which means that we’re dealing with POCOs to model the data throughout all layers of the application. I love working like this, so when it came time to figure out how to export some of this data to an Excel spreadsheet I wanted to find a way to take an IEnumerable<T> and just have it dumped to Excel, with each item in the collection modeled as a single row in the Excel worksheet. Consider the following class:

public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal HourlyRate { get; set; }
    public DateTime HireDate { get; set; }
}

Now let’s say we have a collection of these represented as an IEnumerable<Employee> and we want to be able to output it to an Excel file for offline querying/manipulation. As it turns out, this is dead simple to do with EPPlus. Have a look:

public void ExportToExcel(IEnumerable<Employee> employees, FileInfo targetFile)
{
    using (var excelFile = new ExcelPackage(targetFile))
    {
        var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
        worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true);
        excelFile.Save();
    }
}

That’s it. Let’s break down what’s going on here:
- Create an ExcelPackage to model the workbook (Excel file). Note that the ‘targetFile’ value here is a FileInfo object representing the location on disk where I want the file to be saved.
- Create a worksheet within the workbook.
- Get a reference to the top-leftmost cell (addressed as A1) and invoke the ‘LoadFromCollection’ method, passing it our collection of Employee objects. Behind the scenes this reflects over the properties of the provided type and pulls out any public members to become columns in the resulting Excel output. The ‘PrintHeaders’ parameter tells EPPlus to grab the name of each property and put it in the first row.
- Save the Excel file.

All of the heavy lifting here is being done by the ‘LoadFromCollection’ method, and that’s a good thing. Now, this was really easy to do, but it has some limitations. Using this approach you get a very plain, un-styled Excel worksheet.
The column widths are all set to the default. The number format for all cells is ‘General’ (which proves particularly interesting if you have a DateTime property in your data source). I’m a “no frills” guy, so I wasn’t bothered at all by trading off simplicity for style and formatting. That said, EPPlus has tons of samples that you can download that illustrate how to apply styles and formatting to cells and a ton of other advanced features that are way beyond the scope of this post.
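For readers who want a starting point on those two specific limitations, here is a hedged sketch, not part of the original post, that assumes the Employee class and ExportToExcel method from above and uses EPPlus's Numberformat and AutoFitColumns APIs; the method name and format string are illustrative choices, not the author's:

using System;
using System.Collections.Generic;
using System.IO;
using OfficeOpenXml;

public static class ExcelExporter
{
    // A variation on the post's ExportToExcel that adds basic formatting.
    public static void ExportStyled(IEnumerable<Employee> employees, FileInfo targetFile)
    {
        using (var excelFile = new ExcelPackage(targetFile))
        {
            var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
            worksheet.Cells["A1"].LoadFromCollection(employees, true);

            // HireDate is the fourth public property on Employee, so it lands in
            // column D; give it a date format instead of the default 'General'.
            worksheet.Column(4).Style.Numberformat.Format = "yyyy-mm-dd";

            // Size every used column to fit its contents.
            worksheet.Cells[worksheet.Dimension.Address].AutoFitColumns();

            excelFile.Save();
        }
    }
}

Called in place of the original method, this produces the same worksheet, but with readable hire dates and columns wide enough for their contents.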


• A Graduate’s Journey at Oracle – Bhaskar Ghosh From Oracle India

    - by david.talamelli
I am Bhaskar Ghosh, and I work as an Applications Engineer with Oracle. It was three years ago that my journey with one of the largest software companies started. It was a fine day and a decisive moment when I was placed at Oracle as a campus recruit from the College of Engineering Guindy, Anna University, Chennai! I have always meant to look back at the time that helped me learn beyond my boundaries, think broader and further ahead, and grow technically, professionally and personally. Let me recall the eventful moments once again.

My first day as an intern at Oracle was in late 2007. I met one of the Oracle managers at the Oracle campus in Hyderabad, and on the same day I also met another Oracle employee who would later become my first manager. I was charged and thrilled by the environment and the wonderful people around me! I was joined by two other interns, who also held a Masters in Computer Applications. We formed a very friendly group with all the interns and the new hires, and shared our excitement and learning. One of the other graduates and I started working on a very interesting project on semantic technology, and we eventually had our names added as co-developers on that very project. In this phase of five months we learnt tremendously and worked very hard, partly because we had to travel back and forth to our colleges to submit reports and present at the final-year project reviews for the Masters in Computer Applications.

After completing my MCA, I joined as a full-time employee in 2008. During the next year, we worked on interesting, bleeding-edge technologies: OWL, RDF, SPARQL, visualization, J2EE, social web features, Semantic Web technologies, web services and many more! We developed cool, rich internet and desktop applications. Little did I know at the time that this learning would help me tremendously in my next project at Oracle.

The following year saw me assigned a role in a different project that my other team members had been working on for the previous two years. It took me two months to understand the work and get into a flow with the new task. I was fortunate that this phase helped me enhance my interpersonal and communication skills as much as it helped me grow professionally, with a better ability to tackle multiple priorities and switch between tasks based on the team's requirements. I was made the point of contact for all communications between our team and other product teams. I personally feel that this time strengthened me tremendously in technologies like Oracle Forms, J2EE, Java and web services.

The last six months saw me become a member of the Institute of Electrical and Electronics Engineers and continue my higher education at the International Institute of Information Technology, Hyderabad. Oracle supports its employees becoming members of professional bodies, and higher studies are supported by management; I think this helps tremendously in the professional and technical growth of employees. For the last three months, I have been working on great and useful enhancements to our product. Ah, beautiful!

All these years, there have been other moments and events of fun that are worth mentioning. Clubs and groups at Oracle such as the Employee Club, Oracle Volunteers and the Football Club have kept organizing numerous events and competitions, full of fun and entertainment. I really enjoyed participating, even in a small way, in the intra-Oracle football tourney, Oracle Volunteer Days, OraFora, OraOvations, and a few more.
Those ‘Seasons of Sharing’, those blood donation camps, those Diwali and Christmas gifts and events, those fun events at the annual function called OraOvations, those books and cycle stalls, and so many other things… it all fills my mind with pleasure. The last three years have been very eventful: they have been full of learning and growth, under the very able and encouraging guidance of my manager. I have had the opportunity to get to know, interact with, and learn from many wonderful personalities here at Oracle. The environment, the people, and my fellow developers have been so friendly, and always ready to help whenever we were in doubt. I really love the big office space, the flexible timings, and the caring people around. I look forward to a beautiful, enriching and motivating journey with Oracle.


  • CodePlex Daily Summary for Tuesday, February 01, 2011

    CodePlex Daily Summary for Tuesday, February 01, 2011Popular ReleasesWatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.12.07: Whats New Added CKEditor 3.5.1 (Rev. 6398) - Whats New File Browser now List all Anchor on selected Dnn Page (Tab) changes File Browser now uses DNN Cache instead of HTTP Session for Authorization Using now Google Hosted CDN Versions of jQuery and jQuery-UI Scripts (Auto detects if needed http or https)Chemistry Add-in for Word: Chemistry Add-in for Word - Version 1.0: On February 1, 2011, we announced the availability of version 1 of the Chemistry Add-in for Word, as well as the assignment of the open source project to the Outercurve Foundation by Microsoft Research and the University of Cambridge. System RequirementsHardware RequirementsAny computer that can run Office 2007 or Office 2010. Software RequirementsYour computer must have the following software: Any version of Windows that can run Office 2007 or Office 2010, which includes Windows XP SP3 and...StyleCop for ReSharper: StyleCop for ReSharper 5.1.15005.000: Applied patch from rodpl for merging of stylecop setting files with settings in parent folder. Previous release: A considerable amount of work has gone into this release: Huge focus on performance around the violation scanning subsystem: - caching added to reduce IO operations around reading and merging of settings files - caching added to reduce creation of expensive objects Users should notice condsiderable perf boost and a decrease in memory usage. Bug Fixes: - StyleCop's new Objec...Minecraft Tools: Minecraft Topographical Survey 1.4: MTS requires version 4 of the .NET Framework - you must download it from Microsoft if you have not previously installed it. This version of MTS adds MCRegion support and fixes bugs that caused rendering to fail for some users. New in this version of MTS: Support for rendering worlds compressed with MCRegion Fixed rendering failure when encountering non-NBT files with the .dat extension Fixed rendering failure when encountering corrupt NBT files Minor GUI updates Note that the command...MVC Controls Toolkit: Mvc Controls Toolkit 0.8: Fixed the following bugs: *Variable name error in the jvascript file that prevented the use of the deleted item template of the Datagrid *Now after the changes applied to an item of the DataGrid are cancelled all input fields are reset to the very initial value they had. *Other minor bugs. Added: *This version is available both for MVC2, and MVC 3. The MVC 3 version has a release number of 0.85. This way one can install both version. *Client Validation support has been added to all control...Office Web.UI: Beta preview (Source): This is the first Beta. it includes full source code and all available controls. Some designers are not ready, and some features are not finalized allready (missing properties, draft styles) ThanksASP.net Ribbon: Version 2.2: This release brings some new controls (part of Office Web.UI). A few bugs are fixed and it includes the "auto resize" feature as you resize the window. (It can cause an infinite loop when the window is too reduced, it's why this release is not marked as "stable"). I will release more versions 2.3, 2.4... until V3 which will be the official launch of Office Web.UI. Both products will evolve at the same speed. 
Thanks.Barcode Rendering Framework: 2.1.1.0: Final release for VS2008 Finally fixed bugs with code 128 symbology.xUnit.net - Unit Testing for .NET: xUnit.net 1.7: xUnit.net release 1.7Build #1540 Important notes for Resharper users: Resharper support has been moved to the xUnit.net Contrib project. Important note for TestDriven.net users: If you are having issues running xUnit.net tests in TestDriven.net, especially on 64-bit Windows, we strongly recommend you upgrade to TD.NET version 3.0 or later. This release adds the following new features: Added support for ASP.NET MVC 3 Added Assert.Equal(double expected, double actual, int precision) Ad...DoddleReport - Automatic HTML/Excel/PDF Reporting: DoddleReport 1.0: DoddleReport will add automatic tabular-based reporting (HTML/PDF/Excel/etc) for any LINQ Query, IEnumerable, DataTable or SharePoint List For SharePoint integration please click Here PDF Reporting has been placed into a separate assembly because it requies AbcPdf http://www.websupergoo.com/download.htmSpark View Engine: Spark v1.5: Release Notes There have been a lot of minor changes going on since version 1.1, but most important to note are the major changes which include: Support for HTML5 "section" tag. Spark has now renamed its own section tag to "segment" instead to avoid clashes. You can still use "section" in a Spark sense for legacy support by specifying ParseSectionAsSegment = true if needed while you transition Bindings - this is a massive feature that further simplifies your views by giving you a powerful ...Marr DataMapper: Marr DataMapper 1.0.0 beta: First release.WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.3: Version: 2.0.0.3 (Milestone 3): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...Rawr: Rawr 4.0.17 Beta: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Beta Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 and on the Version Notes page: http://rawr.codeplex.com/wikipage?title=VersionNotes As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a pre-alpha release of the WPF version, there are likely to be a lot of issues. If you...Squiggle - A Free open source LAN Messenger: Squiggle 2.5 Beta: In this release following are the new features: Localization: Support for Arabic, French, German and Chinese (Simplified) Bridge: Connect two Squiggle nets across the WAN or different subnets Aliases: Special codes with special meaning can be embedded in message like (version),(datetime),(time),(date),(you),(me) Commands: cls, /exit, /offline, /online, /busy, /away, /main Sound notifications: Get audio alerts on contact online, message received, buzz Broadcast for group: You can ri...VivoSocial: VivoSocial 7.4.2: Version 7.4.2 of VivoSocial has been released. If you experienced any issues with the previous version, please update your modules to the 7.4.2 release and see if they persist. If you have any questions about this release, please post them in our Support forums. 
If you are experiencing a bug or would like to request a new feature, please submit it to our issue tracker. Web Controls * Updated Business Objects and added a new SQL Data Provider File. Groups * Fixed a security issue whe...PHP Manager for IIS: PHP Manager 1.1.1 for IIS 7: This is a minor release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus several bug fixes (see change list for more details). Also, this release includes Russian language support. SHA1 codes for the downloads are: PHPManagerForIIS-1.1.0-x86.msi - 6570B4A8AC8B5B776171C2BA0572C190F0900DE2 PHPManagerForIIS-1.1.0-x64.msi - 12EDE004EFEE57282EF11A8BAD1DC1ADFD66A654mojoPortal: 2.3.6.1: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2361-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Parallel Programming with Microsoft Visual C++: Drop 6 - Chapters 4 and 5: This is Drop 6. It includes: Drafts of the Preface, Introduction, Chapters 2-7, Appendix B & C and the glossary Sample code for chapters 2-7 and Appendix A & B. The new material we'd like feedback on is: Chapter 4 - Parallel Aggregation Chapter 5 - Futures The source code requires Visual Studio 2010 in order to run. There is a known bug in the A-Dash sample when the user attempts to cancel a parallel calculation. We are working to fix this.NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel Template, version 1.0.1.160: The NodeXL Excel template displays a network graph using edge and vertex lists stored in an Excel 2007 or Excel 2010 workbook. What's NewThis release improves NodeXL's Twitter and Pajek features. See the Complete NodeXL Release History for details. Installation StepsFollow these steps to install and use the template: Download the Zip file. Unzip it into any folder. Use WinZip or a similar program, or just right-click the Zip file in Windows Explorer and select "Extract All." Close Ex...New Projectsabcdeffff: aaaaaaaaaaaaaaaaaaaaaaaaaaAutomating Variation: This project help you to automate the Site Variation in SharePoint 2010BAM Converter: DEMO Project showcasing several functionalities of Windows Phone 7 - Isolated Storage, Web Service Access, User Interface.cstgamebgs: Project for wp7DFS-Commands: A PowerShell module containing functions for manipulating Distributed File System (DFS). This allows admins to carry out DFS tasks using PowerShell without resorting to external commands such as dfsutil.exe, dfscmd.exe or modlink.exe.Disk Usage: Disk Usage is a small WPF tool to analyze the drive space on Windows. It can plot pie charts of the folder size. EPiServer Filtered Page Reference Properties: The EPiServer Filtered Page Reference properties provide you with the ability to restrict the pages in which an EPiServer can pick. The assembly once depoyed to your projects bin folder will add two new properties: -FilteredPageReferenceProperty -FilteredLinkCollectoinPropertyExample Ajax MVC address-book: This is an example application in PHP, using no framework but PHP only, utilizing MVC, SQLite, jQuery and Ajax. It is fully SOA. FlyMedia: FlyMedia is a simple music player written in C/C++ based on FMOD and Gdiplus. 
It aims to fly your media at a touch!Global String Formatter: The Global String Formatter library allows developers to deal with conditional string formatting in an elegant fashion. Developers specify a predicate and a corresponding string output function for each case of the formatting. The library plays well with DI frameworks.JS Mixer: JS Mixer is a simple UI over the YUI Compressor for .Net Library. It allows you to merge and minimize javascript files easily.LAPD: Lapd (Location and Attendance to Dependant People) make care-dependent people's life easier, improving the communication between their care providers and them. It is developed in C# over .NET Compact Framework 3.5motion10 SharePoint Twitter Status Notes Control: Change the normal SharePoint Status control to the motion10 SharePoint Twitter Status Notes Control and you can send your tweets to Twitter! Music TD: Music TD is a Tower Defence project by Cypress Falls High School programming team. It is our first game, made in XNA.OJDetective: a win32 project for detecting your submissons on OJOpalis System Center VMM Extended Integration Pack: A Opalis Integration Pack for VMM with extended Functions to the offical IP from Microsoft.Opalis Virsto Integration Pack: A Opalis Integration Pack for Managing VirstoOne Hyper-V Storage (http://www.virsto.com) Pimp My Wave: It will be both an open source implementation of Multiloader / Kies firmware flasher and modding tool like changing boot screens directly. RESTful Connector for SharePoint 2010: This is a reusable custom connector for Business Data Connectivity Serivces in SharePoint 2010. It uses a RESTful service as a data source and XPath to map the propeties.SCWS: SCWS - XML web service for Microsoft System Center Operations Manager (AKA SCOM / OpsMgr). Developed in C# and .Net 3.5 with Visual Studio 2010. Can be used to get information on MonitoringObjects and to control maintenance mode. Ideal for integration with SCCM / ConfigMgr.somelameaspstuff: see titleSQLMap: Projeto com um Atlas do Mundo e suas divisões, salvos em tabelas no SQL Server, usando o seu módulo SpatialStackOverflow Google Chrome extension: Shows StackOverflow and StackExchange questions in new tab window in your Google ChromeSupMoul: Moulinette pour noter les supTodayTodo: This is software for manage every day tasks (one todo list for day). Silverlight (OOB), NoSQL, FullText Search for all task history

