Search Results

Search found 14383 results on 576 pages for 'double checked locking'.


  • How to rotate a sprite using multi-touch (AndEngine) in Android?

    - by 786
    I am new to Android game development and I am using AndEngine GLES-2. I have created a sprite as a box, and the box is draggable using the code below; that part works fine. Now I want multi-touch on it, so that the sprite can be rotated with two fingers inside the box while still being draggable. Please help, either by correcting this code or by giving an exact example; I have been trying for days with no luck.

        final float centerX = (CAMERA_WIDTH - this.mBox.getWidth()) / 2;
        final float centerY = (CAMERA_HEIGHT - this.mBox.getHeight()) / 2;
        Box = new Sprite(centerX, centerY, this.mBox, this.getVertexBufferObjectManager()) {
            public boolean onAreaTouched(TouchEvent pSceneTouchEvent, float pTouchAreaLocalX, float pTouchAreaLocalY) {
                this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2,
                                 pSceneTouchEvent.getY() - this.getHeight() / 2);
                float pValueX = pSceneTouchEvent.getX();
                float pValueY = CAMERA_HEIGHT - pSceneTouchEvent.getY();
                float dx = pValueX - gun.getX();
                float dy = pValueY - gun.getY();
                double Radius = Math.atan2(dy, dx);
                double Angle = Radius * 360;
                Box.setRotation((float) Math.toDegrees(Angle));
                return true;
            }
        };

    Thanks.
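    One bug stands out in the code above: atan2 already returns radians, so multiplying by 360 and then calling Math.toDegrees converts twice. A minimal sketch of the two-finger rotation math, shown here in C++ with hypothetical touch-point variables (AndEngine itself is Java, but the arithmetic carries over directly):

        #include <cmath>

        // Hypothetical coordinates of the two active pointers.
        struct Touch { float x, y; };

        // Rotation implied by the line through the two touch points.
        // atan2 yields radians; convert to degrees exactly once.
        float rotationDegrees(Touch a, Touch b) {
            float dx = b.x - a.x;
            float dy = b.y - a.y;
            return std::atan2(dy, dx) * 180.0f / 3.14159265f;
        }

    Tracking the difference between the current angle and the angle captured when the second finger went down gives the amount to add to the sprite's rotation while the drag continues.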

    Read the article

  • Class or Dictionary

    - by user2038134
    I want to create an immutable Scale class in C#:

        public sealed class Scale
        {
            string _Name;
            string _Description;
            SomeOrderedCollection _ScaleValueDefinitions;
            Unit _Unit;

            // properties ....

            // methods
            // ContainsValue(double value) ....

            // constructors
            // all parameters except scaleValueDefinitions are optional;
            // for a Scale to be useful at least one ScaleValueDefinition should exist
            public Scale(string name, string description, SomeOrderedCollection scaleValueDefinitions, Unit unit) { /* initialize */ }
        }

    A ScaleValueDefinition is represented by two values: Value (double) and Definition (string). These values are known before the Scale is created and should be unique. So what is the best approach?

    1. Create an immutable ScaleValueDefinition class with Value and Definition as properties and use it in a list.
    2. Use a dictionary.
    3. Use another way I didn't think of...

    And how should each be implemented? For option 1 I can use params ScaleValueDefinition[] valueDefinitions in the constructor, but how would I do it for the other options? Lastly, at what number of values (properties) should I choose one option over the other?
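    A minimal sketch of option 1, shown here in C++ for illustration (the idea maps straight back to C#): make the element type immutable, and have the containing class take a copy of an ordered, keyed collection at construction so uniqueness and ordering are enforced in one place. All names below are hypothetical.

        #include <map>
        #include <string>
        #include <utility>

        // Immutable value definition: fixed at construction, read-only after.
        struct ScaleValueDefinition {
            double value;
            std::string definition;
        };

        class Scale {
        public:
            // std::map keys the definitions by value, which makes them
            // unique and ordered; copying it keeps Scale immutable.
            explicit Scale(std::map<double, std::string> defs)
                : defs_(std::move(defs)) {}

            bool containsValue(double v) const { return defs_.count(v) != 0; }

        private:
            const std::map<double, std::string> defs_;
        };

    A dictionary (option 2) is essentially what the keyed map gives you anyway; a dedicated class for the pair starts paying off once a definition carries more than these two fields.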

    Read the article

  • How come I cannot make this file executable (chmod permissions)?

    - by bappi48
    I downloaded the Android Development Tools (ADT) for Linux and placed them in my home directory. After unzipping the files, double-clicking the "eclipse" executable works perfectly. But if I unzip the ADT in a different directory (in my case drive E:, which appears when I boot Windows 7), double-clicking the same "eclipse" executable does not run Eclipse. It shows this error message:

        Could not display /media/Software/00.AndroidLinux/ADT/eclipse/eclipse.
        There is no application installed for executable files.
        Do you want to search for an application to open this file?

    If I press Yes in the dialog, it finds "Pypar2", which is not my solution. I found that the "eclipse" file permissions are:

        -rw------- 1 tanvir tanvir 63050 Feb  4 19:05 eclipse

    I tried to change the permissions with "chmod +x eclipse", but to no avail; the command does not change the file permissions at all in this case. What should I do? Relevant output of cat /proc/mounts:

        /dev/sda6 /media/Software fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0

    Please note that I'm new to Ubuntu and still learning day by day.
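    The fuseblk line shows the partition is NTFS mounted through ntfs-3g, and on such mounts per-file permission bits are fixed at mount time, which is why chmod has no effect. A hedged sketch of the usual remedy, remounting with a umask that grants execute permission (device and mount point are taken from the question's /proc/mounts line; the umask value is an illustrative choice):

        sudo umount /media/Software
        sudo mount -t ntfs-3g -o umask=022 /dev/sda6 /media/Software

    or the equivalent /etc/fstab entry to make it permanent:

        /dev/sda6  /media/Software  ntfs-3g  defaults,umask=022  0  0

    With umask=022, every file on the volume appears as rwxr-xr-x, so the eclipse binary becomes executable.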

    Read the article

  • Observing MVC, can/should the Model be instantiated in the ViewController? Or where?

    - by user19410
    I'm writing an experimental iPhone app to learn about the MVC paradigm, and I instantiate my Model class in the ViewController class. Is this stupid? I'm asking because storing the id of the Model class and using it works where it's initialized, but referring to it later (in response to an interface action) crashes. Seemingly the pointer address of my Model class instance changes, but how can that be? The code in question:

        @interface Soundcheck_Tone_GeneratorViewController : UIViewController
        {
            IBOutlet UIPickerView * frequencyWheel;
            @public
            Sinewave_Generation * sineGenerator;
        }
        @property(nonatomic,retain) Sinewave_Generation * sineGenerator;
        @end

        @implementation Soundcheck_Tone_GeneratorViewController
        @synthesize sineGenerator;

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            [self setSineGenerator:[[Sinewave_Generation alloc] initWithFrequency:20.0]]; // using reference -> fine
        }

        // pickerView handling is omitted here...

        - (void)pickerView:(UIPickerView *)thePickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component
        {
            [[self sineGenerator] setFrequency:20.0]; // using reference -> crash
        }
        @end

        // the Sinewave_Generation class... only to be thorough. Works fine so far.
        @interface Sinewave_Generation : NSObject
        {
            AudioComponentInstance toneUnit;
            @public
            double frequency, theta;
        }
        @property double frequency;
        - (Sinewave_Generation *) initWithFrequency: (int) f;
        @end

        @implementation Sinewave_Generation
        @synthesize frequency;

        - (Sinewave_Generation *) initWithFrequency: (int) f
        {
            self = [super init];
            if ( self ) { [self setFrequency: f]; }
            return self;
        }
        @end

    Read the article

  • How do I have an arrow follow different height parabolas depending on how long the player holds down a key?

    - by Moondustt
    I'm trying to throw an arrow in my game, but I'm having a hard time working out how to produce a good parabola. What I need: the longer you hold Enter, the stronger the arrow is thrown, and the launch angle is always 45 degrees. This is what I already have (the two methods are shown separately; holding Enter accumulates timeHeld, and releasing it starts the flight):

        private float velocityHeld = 1f;

        private void GetKeyboardEvent()
        {
            if (Keyboard.GetState().IsKeyDown(Keys.Enter) && !released)
            {
                timeHeld += velocityHeld;
                holding = true;
            }
            else
            {
                if (holding)
                {
                    released = true;
                    holding = false;
                    lastTimeHeld = timeHeld;
                }
            }
        }

        protected override void Update(GameTime gameTime)
        {
            GetKeyboardEvent();
            if (released && timeHeld > 0)
            {
                float alpha = MathHelper.ToRadians(45f);
                double vy = timeHeld * Math.Sin(alpha);
                double vx = timeHeld * Math.Cos(alpha);
                ShadowPosition.Y -= (int)vy;
                ShadowPosition.X += (int)vx;
                timeHeld -= velocityHeld;
            }
            else
            {
                released = false;
            }
        }

    My question is: what do I need to do to make the arrow come down again as it loses velocity (timeHeld), so that it follows a proper parabola?
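    A minimal sketch of the standard fix, shown here in C++ (the original is XNA/C#, but the arithmetic transfers line for line): compute vx and vy once at release, then each frame move by the velocity and apply gravity to vy only. The horizontal speed stays constant, the vertical speed decays and goes negative, and the resulting path is a parabola. Gravity, power and time step below are illustrative values.

        #include <cmath>
        #include <cstdio>

        int main() {
            const float angle   = 45.0f * 3.14159265f / 180.0f;
            const float power   = 30.0f;        // would come from timeHeld at release
            const float gravity = -9.8f;        // illustrative units per second^2
            const float dt      = 1.0f / 60.0f; // one 60 Hz frame

            float x = 0.0f, y = 0.0f;
            float vx = power * std::cos(angle); // fixed after release
            float vy = power * std::sin(angle); // reduced by gravity every frame

            while (y >= 0.0f) {                 // until the arrow lands
                x  += vx * dt;
                y  += vy * dt;
                vy += gravity * dt;             // this line bends the path into a parabola
                std::printf("%6.2f %6.2f\n", x, y);
            }
            return 0;
        }

    The key difference from the question's code is that the velocity vector is not recomputed from the 45-degree angle every frame; only vy changes, under gravity.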

    Read the article

  • Compare images after Canny edge detection in OpenCV (C++)

    - by typoknig
    Hi all, I am working on an OpenCV project and I need to compare some images after Canny has been applied to both of them. Before Canny was applied I populated a histogram from the grayscale images and then compared the histograms, but when Canny is added to the images the histogram does not populate. I have read that a Canny image can populate a histogram, but I have not found a way to make it happen. I do not necessarily need to keep using histograms; I just want to know the best way to compare two Canny images. SSCCE below for you to chew on. I have poached and patched about 75% of this code from books and various sites on the internet, so props to those guys...

        // SLC (Histogram).cpp : Defines the entry point for the console application.
        #include "stdafx.h"
        #include <cxcore.h>
        #include <cv.h>
        #include <cvaux.h>
        #include <highgui.h>
        #include <stdio.h>
        #include <sstream>
        #include <iostream>

        using namespace std;

        IplImage* image1 = 0;
        IplImage* imgHistogram1 = 0;
        IplImage* gray1 = 0;
        CvHistogram* hist1;

        int main(){
            CvCapture* capture = cvCaptureFromCAM(0);
            if(!cvQueryFrame(capture)){
                cout << "Video capture failed, please check the camera." << endl;
            } else {
                cout << "Video camera capture successful!" << endl;
            }

            CvSize sz = cvGetSize(cvQueryFrame(capture));
            IplImage* image = cvCreateImage(sz, 8, 3);
            IplImage* imgHistogram = 0;
            IplImage* gray = 0;
            CvHistogram* hist;

            cvNamedWindow("Image Source", 1);
            cvNamedWindow("gray", 1);
            cvNamedWindow("Histogram", 1);
            cvNamedWindow("BG", 1);
            cvNamedWindow("FG", 1);
            cvNamedWindow("Canny", 1);
            cvNamedWindow("Canny1", 1);

            // an image has to load here or the program will not run
            image1 = cvLoadImage("image bin/use this image.jpg");

            // size of the histogram - 1D histogram
            int bins1 = 256;
            int hsize1[] = {bins1};
            // max and min value of the histogram
            float max_value1 = 0, min_value1 = 0;
            // value and normalized value
            float value1;
            int normalized1;
            // ranges - grayscale 0 to 256
            float xranges1[] = { 0, 256 };
            float* ranges1[] = { xranges1 };

            // create an 8 bit single channel image to hold a
            // grayscale version of the original picture
            gray1 = cvCreateImage( cvGetSize(image1), 8, 1 );
            cvCvtColor( image1, gray1, CV_BGR2GRAY );
            IplImage* canny1 = cvCreateImage(cvGetSize(gray1), 8, 1 );
            cvCanny( gray1, canny1, 55, 175, 3 );

            // Create 3 windows to show the results
            cvNamedWindow("original1", 1);
            cvNamedWindow("gray1", 1);
            cvNamedWindow("histogram1", 1);

            // planes to obtain the histogram, in this case just one
            IplImage* planes1[] = { canny1 };

            // get the histogram and some info about it
            hist1 = cvCreateHist( 1, hsize1, CV_HIST_ARRAY, ranges1, 1);
            cvCalcHist( planes1, hist1, 0, NULL);
            cvGetMinMaxHistValue( hist1, &min_value1, &max_value1);
            printf("min: %f, max: %f\n", min_value1, max_value1);

            // create an 8 bit single channel image to hold the histogram, paint it white
            imgHistogram1 = cvCreateImage(cvSize(bins1, 50), 8, 1);
            cvRectangle(imgHistogram1, cvPoint(0,0), cvPoint(256,50), CV_RGB(255,255,255), -1);

            // draw the histogram :P
            for(int i = 0; i < bins1; i++){
                value1 = cvQueryHistValue_1D( hist1, i);
                normalized1 = cvRound(value1 * 50 / max_value1);
                cvLine(imgHistogram1, cvPoint(i,50), cvPoint(i, 50 - normalized1), CV_RGB(0,0,0));
            }

            // show the image results
            cvShowImage( "original1", image1 );
            cvShowImage( "gray1", gray1 );
            cvShowImage( "histogram1", imgHistogram1 );
            cvShowImage( "Canny1", canny1);

            CvBGStatModel* bg_model = cvCreateFGDStatModel( image );

            for(;;){
                image = cvQueryFrame(capture);
                cvUpdateBGStatModel( image, bg_model );

                // Size of the histogram - 1D histogram
                int bins = 256;
                int hsize[] = {bins};
                // Max and min value of the histogram
                float max_value = 0, min_value = 0;
                // Value and normalized value
                float value;
                int normalized;
                // Ranges - grayscale 0 to 256
                float xranges[] = {0, 256};
                float* ranges[] = {xranges};

                // Create an 8 bit single channel image to hold a grayscale
                // version of the original picture
                gray = cvCreateImage(cvGetSize(image), 8, 1);
                cvCvtColor(image, gray, CV_BGR2GRAY);
                IplImage* canny = cvCreateImage(cvGetSize(gray), 8, 1 );
                cvCanny( gray, canny, 55, 175, 3 ); // 55, 175, 3 with direct light

                // Planes to obtain the histogram, in this case just one
                IplImage* planes[] = {canny};

                // Get the histogram and some info about it
                hist = cvCreateHist(1, hsize, CV_HIST_ARRAY, ranges, 1);
                cvCalcHist(planes, hist, 0, NULL);
                cvGetMinMaxHistValue(hist, &min_value, &max_value);
                //printf("Minimum Histogram Value: %f, Maximum Histogram Value: %f\n", min_value, max_value);

                // Create an 8 bit single channel image to hold the histogram and paint it white
                imgHistogram = cvCreateImage(cvSize(bins, 50), 8, 3);
                cvRectangle(imgHistogram, cvPoint(0,0), cvPoint(256,50), CV_RGB(255,255,255), -1);

                // Draw the histogram
                for(int i = 0; i < bins; i++){
                    value = cvQueryHistValue_1D(hist, i);
                    normalized = cvRound(value * 50 / max_value);
                    cvLine(imgHistogram, cvPoint(i,50), cvPoint(i, 50 - normalized), CV_RGB(0,0,0));
                }

                double correlation   = cvCompareHist(hist1, hist, CV_COMP_CORREL);
                double chisquare     = cvCompareHist(hist1, hist, CV_COMP_CHISQR);
                double intersection  = cvCompareHist(hist1, hist, CV_COMP_INTERSECT);
                double bhattacharyya = cvCompareHist(hist1, hist, CV_COMP_BHATTACHARYYA);
                double difference = (1 - correlation) + chisquare + (1 - intersection) + bhattacharyya;
                printf("correlation: %f\n", correlation);
                printf("chi-square: %f\n", chisquare);
                printf("intersection: %f\n", intersection);
                printf("bhattacharyya: %f\n", bhattacharyya);
                printf("difference: %f\n", difference);

                cvShowImage("Image Source", image);
                cvShowImage("gray", gray);
                cvShowImage("Histogram", imgHistogram);
                cvShowImage( "Canny", canny);
                cvShowImage("BG", bg_model->background);
                cvShowImage("FG", bg_model->foreground);

                // Page 19 paragraph 3 of "Learning OpenCV" tells us why we
                // DO NOT use "cvReleaseImage(&image)" in this section
                cvReleaseImage(&imgHistogram);
                cvReleaseImage(&gray);
                cvReleaseHist(&hist);
                cvReleaseImage(&canny);

                char c = cvWaitKey(10);
                // if ASCII key 27 (esc) is pressed then the loop breaks
                if(c == 27) break;
            }

            cvReleaseBGStatModel( &bg_model );
            cvReleaseImage(&image);
            cvReleaseCapture(&capture);
            cvDestroyAllWindows();
        }
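    One reason the histogram comparison degrades: a Canny output is effectively a binary image, so its grayscale histogram collapses into two spikes (0 and 255) and carries almost no information about where the edges are. A minimal sketch of a direct pixel-level comparison instead, using the same legacy C API as the question (cvAbsDiff and cvCountNonZero are existing OpenCV functions; the two arguments are assumed to be same-sized, single-channel Canny outputs such as canny1 and canny above):

        #include <cv.h>
        #include <stdio.h>

        // Fraction of pixels that differ between two same-sized, single-channel
        // 8-bit edge images (e.g. two Canny outputs). 0.0 means identical.
        double edgeMismatch(IplImage* a, IplImage* b) {
            IplImage* diff = cvCreateImage(cvGetSize(a), 8, 1);
            cvAbsDiff(a, b, diff);                 // per-pixel absolute difference
            int differing = cvCountNonZero(diff);  // count of pixels that differ
            cvReleaseImage(&diff);
            return (double)differing / (a->width * a->height);
        }

    Dropping this into the capture loop in place of the histogram comparison gives a single 0-to-1 mismatch score per frame.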

    Read the article

  • Binary Cosine Coefficient

    - by hairyyak
    I was given the following formula for calculating this similarity:

        sim = |Q n D| / (sqrt(|Q|) * sqrt(|D|))

    where n is set intersection. I went ahead and implemented a class to compare strings consisting of a series of words:

        #pragma once
        #include <vector>
        #include <string>
        #include <iostream>

        using namespace std;

        class StringSet
        {
        public:
            StringSet(void);
            StringSet( const string the_strings[], const int no_of_strings);
            ~StringSet(void);
            StringSet( const vector<string> the_strings);
            void add_string( const string the_string);
            bool remove_string( const string the_string);
            void clear_set(void);
            int no_of_strings(void) const;
            friend ostream& operator <<(ostream& outs, StringSet& the_strings);
            friend StringSet operator *(const StringSet& first, const StringSet& second);
            friend StringSet operator +(const StringSet& first, const StringSet& second);
            double binary_coefficient( const StringSet& the_second_set);
        private:
            vector<string> set;
        };

        #include "StdAfx.h"
        #include "StringSet.h"
        #include <iterator>
        #include <algorithm>
        #include <stdexcept>
        #include <iostream>
        #include <cmath>

        StringSet::StringSet(void)
        {
        }

        StringSet::~StringSet(void)
        {
        }

        StringSet::StringSet( const vector<string> the_strings)
        {
            set = the_strings;
        }

        StringSet::StringSet( const string the_strings[], const int no_of_strings)
        {
            copy( the_strings, &the_strings[no_of_strings], back_inserter(set));
        }

        void StringSet::add_string( const string the_string)
        {
            try
            {
                if( find( set.begin(), set.end(), the_string) == set.end())
                {
                    set.push_back(the_string);
                }
                else
                {
                    // String is already in the set.
                    throw domain_error("String is already in the set");
                }
            }
            catch( domain_error e)
            {
                cout << e.what();
                exit(1);
            }
        }

        bool StringSet::remove_string( const string the_string)
        {
            // Found an occurrence of the string: erase it via an iterator pointing to it.
            vector<string>::iterator iter;
            if( ( iter = find( set.begin(), set.end(), the_string) ) != set.end())
            {
                set.erase(iter);
                return true;
            }
            return false;
        }

        void StringSet::clear_set(void)
        {
            set.clear();
        }

        int StringSet::no_of_strings(void) const
        {
            return set.size();
        }

        ostream& operator <<(ostream& outs, StringSet& the_strings)
        {
            vector<string>::const_iterator const_iter = the_strings.set.begin();
            for( ; const_iter != the_strings.set.end(); const_iter++)
            {
                cout << *const_iter << " ";
            }
            cout << endl;
            return outs;
        }

        // This function returns the union of the two string sets.
        StringSet operator *(const StringSet& first, const StringSet& second)
        {
            vector<string> new_string_set;
            new_string_set = first.set;
            for( unsigned int i = 0; i < second.set.size(); i++)
            {
                vector<string>::const_iterator const_iter =
                    find(new_string_set.begin(), new_string_set.end(), second.set[i]);
                // String is new - include it.
                if( const_iter == new_string_set.end() )
                {
                    new_string_set.push_back(second.set[i]);
                }
            }
            StringSet the_set(new_string_set);
            return the_set;
        }

        // This method returns the intersection of the two string sets.
        StringSet operator +(const StringSet& first, const StringSet& second)
        {
            // For each string in the first set, look through the second and see if
            // there is a matching pair, in which case include the string in the set.
            vector<string> new_string_set;
            vector<string>::const_iterator const_iter = first.set.begin();
            for ( ; const_iter != first.set.end(); ++const_iter)
            {
                // Search through the entire second set to see if there is a duplicate.
                vector<string>::const_iterator const_iter2 = second.set.begin();
                for( ; const_iter2 != second.set.end(); const_iter2++)
                {
                    if( *const_iter == *const_iter2 )
                    {
                        new_string_set.push_back(*const_iter);
                    }
                }
            }
            StringSet new_set(new_string_set);
            return new_set;
        }

        double StringSet::binary_coefficient( const StringSet& the_second_set)
        {
            double coefficient;
            StringSet intersection = the_second_set + set;
            coefficient = intersection.no_of_strings() / sqrt((double) no_of_strings()) *
                          sqrt((double)the_second_set.no_of_strings());
            return coefficient;
        }

    However, when I try to calculate the coefficient using the following main function:

        // Exercise13.cpp : main project file.
        #include "stdafx.h"
        #include <boost/regex.hpp>
        #include "StringSet.h"

        using namespace System;
        using namespace System::Runtime::InteropServices;
        using namespace boost;

        // This function takes a string as input, which is then broken down
        // into a series of words where the punctuation is ignored.
        StringSet break_string( const string the_string)
        {
            regex re;
            cmatch matches;
            StringSet words;
            string search_pattern = "\\b(\\w)+\\b";
            try
            {
                // Assign the regular expression for parsing.
                re = search_pattern;
            }
            catch( regex_error& e)
            {
                cout << search_pattern << " is not a valid regular expression: \""
                     << e.what() << "\"" << endl;
                exit(1);
            }
            sregex_token_iterator p(the_string.begin(), the_string.end(), re, 0);
            sregex_token_iterator end;
            for( ; p != end; ++p)
            {
                string new_string(p->first, p->second);
                String^ copy_han = gcnew String(new_string.c_str());
                String^ copy_han2 = copy_han->ToLower();
                char* str2 = (char*)(void*)Marshal::StringToHGlobalAnsi(copy_han2);
                string new_string2(str2);
                words.add_string(new_string2);
            }
            return words;
        }

        int main(array<System::String ^> ^args)
        {
            StringSet words = break_string("Here is a string, with some; words");
            StringSet words2 = break_string("There is another string,");
            cout << words.binary_coefficient(words2);
            return 0;
        }

    I get a coefficient of 1.5116 rather than a value from 0 to 1. Does anybody have a clue why this is the case? Any help would be appreciated.
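    The likely culprit is operator precedence in binary_coefficient: `a / sqrt(x) * sqrt(y)` evaluates left to right as `(a / sqrt(x)) * sqrt(y)`, so the second square root multiplies instead of divides. With the two test strings the intersection is {is, string} and the set sizes are 7 and 4, so the buggy expression yields 2 / sqrt(7) * sqrt(4) = 1.512 (the reported 1.5116 up to rounding), while the intended formula gives 2 / sqrt(28) = 0.378. A minimal corrected sketch of that one method, keeping the question's names:

        double StringSet::binary_coefficient(const StringSet& the_second_set)
        {
            StringSet intersection = the_second_set + set;  // operator+ is intersection here
            // Parenthesize the denominator so both square roots divide:
            return intersection.no_of_strings() /
                   (sqrt((double)no_of_strings()) * sqrt((double)the_second_set.no_of_strings()));
        }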

    Read the article

  • How can I stop my program from crashing in C++?

    - by Rachel
    I'm very new to programming and I am trying to write a program that adds and subtracts polynomials. My program sometimes works, but most of the time it crashes at random and I have no idea why. It's very buggy and has other problems I'm trying to fix, but I am unable to get any further coding done while it keeps crashing. I'm completely new here, but any help would be greatly appreciated. Here's the code:

        #include <iostream>
        #include <cstdlib>
        using namespace std;

        int getChoice(void);

        class Polynomial10
        {
        private:
            double* coef;
            int degreePoly;
        public:
            Polynomial10(int max);                  // Constructor for a new Polynomial10
            int getDegree(){return degreePoly;};
            void print();                           // Print the polynomial in standard form
            void read();                            // Read a polynomial from the user
            void add(const Polynomial10& pol);      // Add a polynomial
            void multc(double factor);              // Multiply the poly by a scalar
            void subtract(const Polynomial10& pol); // Subtract a polynomial
        };

        void Polynomial10::read()
        {
            cout << "Enter degree of a polynom between 1 and 10 : ";
            cin >> degreePoly;
            cout << "Enter space separated coefficients starting from highest degree" << endl;
            for (int i = 0; i <= degreePoly; i++)
            {
                cin >> coef[i];
            }
        }

        void Polynomial10::print()
        {
            for(int i = 0; i <= degreePoly; i++)
            {
                if(coef[i] == 0)
                {
                    cout << "";
                }
                else if(i >= 0)
                {
                    if(coef[i] > 0 && i != 0) { cout << "+"; }
                    if((coef[i] != 1 && coef[i] != -1) || i == degreePoly) { cout << coef[i]; }
                    if((coef[i] != 1 && coef[i] != -1) && i != degreePoly) { cout << "*"; }
                    if(i != degreePoly && coef[i] == -1) { cout << "-"; }
                    if(i != degreePoly) { cout << "x"; }
                    if((degreePoly - i) != 1 && i != degreePoly)
                    {
                        cout << "^";
                        cout << degreePoly - i;
                    }
                }
            }
        }

        void Polynomial10::add(const Polynomial10& pol)
        {
            for(int i = 0; i < degreePoly; i++)
            {
                int degree = degreePoly;
                coef[degreePoly - i] += pol.coef[degreePoly - (i + 1)];
            }
        }

        void Polynomial10::subtract(const Polynomial10& pol)
        {
            for(int i = 0; i < degreePoly; i++)
            {
                coef[degreePoly - i] -= pol.coef[degreePoly - (i + 1)];
            }
        }

        void Polynomial10::multc(double factor)
        {
            //int degreePoly=0;
            //double coef[degreePoly];
            cout << "Enter the scalar multiplier : ";
            cin >> factor;
            for(int i = 0; i < degreePoly; i++)
            {
                coef[i] *= factor;
            }
        };

        Polynomial10::Polynomial10(int max)
        {
            degreePoly = max;
            coef = new double[degreePoly];
            for(int i; i < degreePoly; i++)
            {
                coef[i] = 0;
            }
        }

        int main()
        {
            int choice;
            Polynomial10 p1(1), p2(1);
            cout << endl << "CGS 2421: The Polynomial10 Class" << endl << endl << endl;
            cout << "0. Quit\n"
                 << "1. Enter polynomial\n"
                 << "2. Print polynomial\n"
                 << "3. Add another polynomial\n"
                 << "4. Subtract another polynomial\n"
                 << "5. Multiply by scalar\n\n";
            int choiceFirst = getChoice();
            if (choiceFirst != 1) { cout << "Enter a Polynomial first!"; }
            if (choiceFirst == 1) { choiceFirst = choice; }
            while(choice != 0)
            {
                switch(choice)
                {
                    case 0: return 0;
                    case 1: p1.read(); break;
                    case 2: p1.print(); break;
                    case 3:
                        p2.read();
                        p1.add(p2);
                        cout << "Updated Polynomial: ";
                        p1.print();
                        break;
                    case 4:
                        p2.read();
                        p1.subtract(p2);
                        cout << "Updated Polynomial: ";
                        p1.print();
                        break;
                    case 5:
                        p1.multc(10);
                        cout << "Updated Polynomial: ";
                        p1.print();
                        break;
                }
                choice = getChoice();
            }
            return 0;
        }

        int getChoice(void)
        {
            int c;
            cout << "\nEnter your choice : ";
            cin >> c;
            return c;
        }
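    Two defects are enough to explain the random crashes. First, `coef = new double[degreePoly]` in the constructor allocates one slot too few: read(), print(), add() and subtract() all touch indices 0 through degreePoly, i.e. degreePoly + 1 coefficients, so every run writes past the end of the heap block (and the constructor's loop counter `int i;` is never initialized). Second, in main the variable `choice` is read before it is ever assigned. A minimal sketch of the corrected constructor and read(), keeping the question's names:

        Polynomial10::Polynomial10(int max)
        {
            degreePoly = max;
            coef = new double[degreePoly + 1];      // degree n has n+1 coefficients
            for (int i = 0; i <= degreePoly; i++)   // 'i' initialized (it wasn't before)
                coef[i] = 0;
        }

        void Polynomial10::read()
        {
            cout << "Enter degree of a polynomial between 1 and 10: ";
            cin >> degreePoly;
            delete[] coef;                          // reallocate for the new degree
            coef = new double[degreePoly + 1];
            cout << "Enter space-separated coefficients, highest degree first" << endl;
            for (int i = 0; i <= degreePoly; i++)
                cin >> coef[i];
        }

    In main, the assignment `choiceFirst = choice;` is also backwards; `choice = choiceFirst;` (or simply `int choice = getChoice();`) makes the menu loop start from a defined value.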

    Read the article

  • StringIndexOutOfBoundsException error in main method

    - by Ro Siv
    I am obtaining a StringIndexOutOfBoundsException when running my main method. Here is the output of my program on the command line:

        Please enter the shift, 1 for day, 2 for night
        1
        you entered a number for the shift
        Please enter the hourly pay Rate
        2
        you entered a number for the pay Rate
        Please enter the employees name
        brenda
        cat6b
        your value you entered is correct 0-9 or a - z
        Please enter the employee number
        100e
        cat41
        your value you entered is correct 0-9 or a - z
        Please enter current date in XXYYZZZZ format, X is day, Y is month, Z is year
        10203933
        cat81
        your value you entered is correct 0-9 or a - z
        90 1 value of array is 1
        81 0 value of array is 0
        82 2 value of array is 2
        83 0 value of array is 0
        84 3 value of array is 3
        85 9 value of array is 9
        86 3 value of array is 3
        87 3 value of array is 3
        Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String index out of range: 8
            at java.lang.String.charAt(String.java:658)
            at ProductionWorker.<init>(ProductionWorker.java:66)
            at labBookFiftyFour.main(labBookFiftyFour.java:58)
        Press any key to continue . . .

    Ignore the "cat" lines in the output; I was using a println statement to test the code. Ignore the "value of array" output as well, as I wanted to use an array in the program later on. Here is my main method:

        import java.math.*;
        import java.text.DecimalFormat;
        import java.io.*;
        import java.util.*;

        public class labBookFiftyFour
        {
            public static void main(String[] args)
            {
                Scanner myInput = new Scanner(System.in);
                int shift = -1;
                double pRate = -2;
                String name = " ";
                String number = " ";
                String date = " ";

                while(shift < 0 || pRate < 0 )
                {
                    System.out.println("Please enter the shift, 1 for day, 2 for night");
                    if(myInput.hasNextInt()){
                        System.out.println("you entered a number for the shift");
                        shift = myInput.nextInt();
                    }
                    System.out.println("Please enter the hourly pay Rate");
                    if(myInput.hasNextDouble()){
                        System.out.println("you entered a number for the pay Rate");
                        pRate = myInput.nextDouble();
                    }
                    else if (myInput.hasNext())
                    {
                        System.out.println("Please enter a proper value");
                        myInput.next();
                    }
                    else
                    {
                        System.err.println("No more input");
                        System.exit(1);
                    }
                }

                myInput.nextLine(); // consume newline

                System.out.println("Please enter the employees name");
                name = myInput.nextLine();
                // use your isValid method
                if(isValid(name))
                {
                    System.out.println("your value you entered is correct 0-9 or a - z ");
                }

                System.out.println("Please enter the employee number");
                number = myInput.nextLine();
                if(isValid(number))
                {
                    System.out.println("your value you entered is correct 0-9 or a - z ");
                }

                System.out.println("Please enter current date in XXYYZZZZ format, X is day, Y is month, Z is year");
                date = myInput.nextLine();
                if(isValid(date))
                {
                    System.out.println("your value you entered is correct 0-9 or a - z ");
                }

                // int day and night, double payRate
                ProductionWorker myWorker = new ProductionWorker(shift, pRate, name, number, date);
                System.out.println("THis is the shift " + myWorker.getShift()
                        + " This is the pay Rate " + myWorker.getPRate()
                        + " " + myWorker.getName()
                        + " " + myWorker.getNumber()
                        + " " + myWorker.getDate());
            }

            // Made this method for testing String input for 0-9 or a-z values.
            // Put AFTER the main method, but before the end of the class.
            // This method has to be static, for some reason?
            public static boolean isValid(String stringName)
            {
                System.out.println("cat" + stringName.length() + stringName.charAt(0));
                boolean flag = true;
                int index = 0;
                while(index < stringName.length())
                {
                    if(Character.isLetterOrDigit(stringName.charAt(index)))
                    {
                        flag = true;
                    }
                    else
                    {
                        flag = false;
                    }
                    ++index;
                }
                return flag;
            }
        }

    Here is my employeeOne.java superclass:

        public class employeeOne
        {
            private String name;
            private String number;
            private String date;

            public employeeOne(String name, String number, String date)
            {
                this.name = name;
                this.number = number;
                this.date = date;
            }

            public String getName()   { return name; }
            public String getNumber() { return number; }
            public String getDate()   { return date; }
        }

    Here is my ProductionWorker.java subclass, which extends employeeOne:

        public class ProductionWorker extends employeeOne
        {
            private int shift;    // shift represents day or night, day = 1, night = 2
            private double pRate; // hourly pay rate

            public ProductionWorker(int shift, double pRate, String name, String number, String date)
            {
                super(name, number, date);
                this.shift = shift;
                if(this.shift >= 3 || this.shift <= 0)
                {
                    System.out.println("You entered an out of bounds shift date, enter 1 for day or 2 for night, else shift will be day");
                    this.shift = 1;
                }
                this.pRate = pRate;

                boolean goodSoFar = true;
                int indexNum = 0;
                int indexDate = 0;

                if(name.length() <= 10 && number.length() <= 4 && date.length() < 9 )
                {
                    goodSoFar = true;
                }
                else
                {
                    goodSoFar = false;
                }

                // XXXL: XXX digits 1-9, L is a letter A-M
                while(goodSoFar && indexNum < 3)
                {
                    if(Character.isDigit(number.charAt(indexNum))) { goodSoFar = true; }
                    else { goodSoFar = false; }
                    ++indexNum;
                }
                while(goodSoFar && indexNum < 4)
                {
                    if(Character.isLetter(number.charAt(indexNum))) { goodSoFar = true; }
                    else if(Character.isDigit(number.charAt(indexNum))) { goodSoFar = false; }
                    else if(Character.isDigit(number.charAt(indexNum)) == false
                            && Character.isLetter(number.charAt(indexNum)) == false) { goodSoFar = false; }
                    ++indexNum;
                }

                int[] dateValues = new int[date.length()];
                while(goodSoFar && indexDate <= date.length()) // XXYYZZZZ
                {
                    System.out.println("" + date.length() + indexDate + " " + date.charAt(indexDate));
                    if(Character.isDigit(date.charAt(indexDate)))
                    {
                        dateValues[indexDate] = Character.getNumericValue(date.charAt(indexDate));
                        System.out.println("value of array is " + dateValues[indexDate]);
                        ++indexDate;
                    }
                    else
                    {
                        goodSoFar = false;
                    }
                }

                if(goodSoFar)
                {
                    System.out.println("your input is good so far");
                }
                else
                {
                    System.out.println("your input is wrong for name or number or date");
                }
            }

            public int getShift()    { return shift; }
            public double getPRate() { return pRate; }
        }

    Read the article

  • VPN in Ubuntu fails every time

    - by fazpas
    I am trying to set up a VPN connection in Ubuntu 10.04 to use the service from relakks.com. I used the network manager to add the VPN connection with the following settings:

        Gateway: pptp.relakks.com
        Username: user
        Password: pwd
        IPv4 Settings: Automatic (VPN)
        Advanced: MSCHAP & MSCHAPv2 checked
                  Use point-to-point encryption (security: default)
                  Allow BSD data compression checked
                  Allow deflate data compression checked
                  Use TCP header compression checked

    The connection always fails; here is the syslog:

        Jun 27 20:11:56 desktop NetworkManager: <info> Starting VPN service 'org.freedesktop.NetworkManager.pptp'...
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' started (org.freedesktop.NetworkManager.pptp), PID 2064
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN service 'org.freedesktop.NetworkManager.pptp' just appeared, activating connections
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN plugin state changed: 3
        Jun 27 20:11:56 desktop NetworkManager: <info> VPN connection 'Relakks' (Connect) reply received.
        Jun 27 20:11:56 desktop pppd[2067]: Plugin /usr/lib/pppd/2.4.5//nm-pptp-pppd-plugin.so loaded.
        Jun 27 20:11:56 desktop pppd[2067]: pppd 2.4.5 started by root, uid 0
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:56 desktop NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp1, iface: ppp1): no ifupdown configuration found.
        Jun 27 20:11:56 desktop pppd[2067]: Using interface ppp1
        Jun 27 20:11:56 desktop pppd[2067]: Connect: ppp1 <--> /dev/pts/0
        Jun 27 20:11:56 desktop pptp[2071]: nm-pptp-service-2064 log[main:pptp.c:314]: The synchronous pptp option is NOT activated
        Jun 27 20:11:57 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
        Jun 27 20:11:58 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 1024).
        Jun 27 20:11:59 desktop kernel: [   56.564074] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=61 TOS=0x00 PREC=0x00 TTL=52 ID=40460 DF PROTO=47
        Jun 27 20:11:59 desktop kernel: [   56.944054] Inbound IN=ppp0 OUT= MAC= SRC=93.182.139.2 DST=186.110.76.26 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=40461 DF PROTO=47
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[callmgr_main:pptp_callmgr.c:258]: Closing connection (shutdown)
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 12 'Call-Clear-Request'
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[pptp_read_some:pptp_ctrl.c:544]: read returned zero, peer has closed
        Jun 27 20:11:59 desktop pptp[2079]: nm-pptp-service-2064 log[call_callback:pptp_callmgr.c:79]: Closing connection (call state)
        Jun 27 20:11:59 desktop pppd[2067]: Modem hangup
        Jun 27 20:11:59 desktop pppd[2067]: Connection terminated.
        Jun 27 20:11:59 desktop NetworkManager: <info> VPN plugin failed: 1
        Jun 27 20:11:59 desktop NetworkManager: SCPlugin-Ifupdown: devices removed (path: /sys/devices/virtual/net/ppp1, iface: ppp1)
        Jun 27 20:11:59 desktop pppd[2067]: Exit.

    Can someone identify anything in the syslog? I've been googling and reading about pptp but couldn't find anything about the error "read returned zero, peer has closed".

    Read the article

  • VS 2012 Code Review – Before Check In OR After Check In?

    - by Tarun Arora
    "Is Code Review Important and Effective?"

    There is a consensus across the industry that code review is an effective and practical way to catch code inconsistencies and possible defects early in the software development life cycle. Among others, some of the advantages of code reviews are:

    - Bugs are found faster
    - It forces developers to write readable code (code that can be read without explanation or introduction!)
    - Optimization methods/tricks/productive programs spread faster
    - Programmers as specialists "evolve" faster
    - It's fun

    "Code review is systematic examination (often known as peer review) of computer source code. It is intended to find and fix mistakes overlooked in the initial development phase, improving both the overall quality of software and the developers' skills. Reviews are done in various forms such as pair programming, informal walkthroughs, and formal inspections." - Wikipedia

    Nowhere does the definition mention whether it is better to review code before it has been committed to version control or after the commit has been performed. No matter which side you favour, Visual Studio 2012 allows you to request a code review both before check-in and after check-in. Let's weigh the pros and cons of the approaches independently.

    Code Review Before Check In or Code Review After Check In?

    Approach 1 – Code Review before Check In

    The developer completes the code and feels its quality is appropriate for check-in to TFS. The developer raises a code review request to have a second pair of eyes validate whether the code abides by the recommended best practices, will not result in any defects due to common coding mistakes, and whether any optimizations can be made to improve the code quality.

    [Image 1 – code review before check in]

    Pros:
    - Everything that gets committed to source control is reviewed, which minimizes the chances of smelly code making its way into the code base.
    - Decreases the cost of fixing bugs; remember, the earlier you find them, the less painful they are to fix.

    Cons:
    - Development code freeze: since the changes aren't in source control yet, further development can only be done offline.
    - The changes have not been through a CI build, so it is hard to say whether the code abides by all build quality standards.
    - Inconsistent! Cumbersome to track the actual code review process.
    - Not every change to the code base is worth reviewing, so a lot of effort is invested for very little gain.

    Approach 2 – Code Review after Check In

    The developer checks in, and random code reviews are performed on the checked-in code.

    [Image 2 – code review after check in]

    Pros:
    - The code has already passed the CI build and run through any code analysis plug-ins you may have running on the build server. Instruct the developer to ensure zero FxCop, StyleCop and static code analysis issues before check-in; the code is cleaner and smell-free even before the code review.
    - No offline development; developers can continue to develop against source control.

    Cons:
    - Bad code can easily make its way into the code base.
    - Since the review takes place much later in the cycle, the cost of fixing issues can prove to be much higher.

    Approach 3 – Hybrid Approach

    The community advocates a more hybrid approach, a blend of tooling and the human accountability quotient.

    [Image 3 – hybrid approach]

    1. Code review high-impact check-ins. It is not possible to review everything, and by setting up code review check-in policies you can end up slowing your team down. Moreover, the code that you are reviewing before check-in hasn't even been through a green CI build.

    2. Tooling. Let the tooling work for you. By running static analysis, FxCop, StyleCop and other plug-ins on the build agent, you can identify the real issues that in my opinion can't possibly be identified using human reviews. Configure the tooling to report back the top 10 issues every day, and mandate a manual code review for the individuals who make it onto this list of shame most often.

    3. During merge. I would prefer eliminating some of the other code issues during the merge from the main branch to the release branch. In a Scrum project this is easier because cherry-picking the merges is a possibility and the size of the code being reviewed is still limited.

    Let the tooling work for you: if someone breaks the CI build often, put them on a gated check-in build course until you see improvement. If someone appears on the top 10 list of shame generated via the build, ensure that all their code is reviewed until you see improvement. At the end of the day, the goal is to ensure that the code being delivered is top quality. By enforcing a code review before any check-in, you force the developer to work offline or stay put until the review is complete.

    What do the experts say?

    So I asked a few experts what they thought of a "code review quality gate before checking in code":

    Terje Sandstrom | Microsoft ALM MVP:
    You mean a review quality gate BEFORE checking in code????? That would mean a lot of code staying either local or in shelvesets, not even having been through a CI build, with a green CI build being the main criterion for going further, e.g. to the review state. I would not like code lying around with no check-ins. With a requirement that code is checked in in small pieces, 4-8 hours of work max, and AT LEAST daily check-ins, a manual code review comes second down the lane. I would expect review quality gates to happen before merging back to main, or before merging to release, but that would all be on checked-in code. Branching is absolutely one way to ease the pain. Another way we are using is automatic quality builds, running metrics, coverage and static code analysis. Unfortunately this takes some time (it would be great to have it on CI builds, but...), so it is scheduled every night. Based on this we get, among other things, top 10 lists of suspicious code, which is then subjected to reviews. If a person seems to be very popular on these top 10 lists, we subject every check-in from that person to a review for a period. That normally helps. None of the clients I have can afford to have every check-in reviewed, so we need to find ways around it. I don't disagree with the nicety of having all the code reviewed, but I find it hard to find those resources in today's enterprises.

    David V. Corbin | Visual Studio ALM Ranger:
    I tend to agree with both sides. I hate having code that is not checked in, but at the same time hate having "bad" code in the repository. I have found that branching is one approach to solving this dilemma. Code is checked into the private/feature branch before the review, but is not merged over to the "official" branch until after the review. I advocate both, depending on circumstance (especially team dynamics). The "pre-checkin" review is usually for elements that may impact the project as a whole; think of it as another "gate" along with passing unit tests. The "post-checkin" review may very well not be at the changeset level, but correlate to a review at the "user story" level. Again, this depends on the team dynamics in play.

    Robert MacLean | Microsoft ALM MVP:
    I do not think there is a right answer for the industry as a whole. In short, the question is: why do you do reviews? Your question implies risk mitigation, so in low-risk areas you can get away with reviewing after check-in, while in high-risk areas you need to do it before check-in. For example, those new to a team, or juniors, need it much earlier (maybe before check-in, maybe soon after) than seniors who have shipped twenty sprints on the team.

    Abhimanyu Singhal | Visual Studio ALM Ranger:
    It depends on the scenario. We recommend post-check-in reviews when: 1. we don't want to block other checks and processes on manual code reviews (manual reviews take time, and some pieces may not require manual reviews at all); 2. we need to trace all changes and track history; 3. we have a code promotion strategy/process in place (for risk mitigation, post-check-in code can be promoted to accepted branches, or rejected). Pre-check-in reviews are used when: 1. there is a high risk factor associated; 2. reviewers generally (most of the time) have immediate availability; 3. the team does not have strict tracking needs. Simply speaking, no single process fits all scenarios. You need to select what works best for your team/project.

    Thomas Schissler | Visual Studio ALM Ranger:
    This is an interesting discussion; I'm right now discussing details about executing code reviews with my teams. I see and understand the aspects you brought in, but there is another side as well I'd like to point out. 1. If you do reviews per check-in, this is not very practical as a hard rule, because it will disturb the flow of the team very often, or it will reduce the check-in frequency of the devs, which I would not accept. 2. If you do later reviews, for example if you review PBIs, it is not easy to find out which code you should review. Either you review all changesets associated with the PBI, but then you might review code which has been changed by a later check-in where the dev has already fixed the issue; or you review the diff of the latest changeset of the PBI against the first, but then you might also review changes belonging to other PBIs.

    Jakob Leander | Sr. Director, Avanade:
    In my experience, manual code review: 1. does not get done, and at the very least does not get redone after changes (regardless of intentions at the start of the project); 2. when a project actually does it, they often do not do it right away, so errors pile up; 3. requires a lot of time discussing/defining the standard and for the team to learn it. However, code review is very important, since even small memory leaks in a high-volume web solution have big consequences. In the last years I have advocated the following approach for code review. Architects up front do "at least one best-practice example" of each type of component and tell the team to copy from it; this should include error handling, logging, security etc. The dev lead on the project continuously browses code to validate that the best practices are used, especially that patterns etc. are not broken. You can do this formally after each sprint/iteration if you want; once this is validated, it is unlikely to "go bad" even during later code changes. Agree with the customer to rely on static code analysis from Visual Studio as the one and only coding standard. This has HUGE benefits: you can easily tweak it to reach the level you desire together with the customer; it is easy to measure for both developers and management; it is 100% consistent across the code base; it gets validated all the time, so you never end up getting hammered by a customer review at the end; and it is easy to tell a developer that you do not want code back unless it has zero errors, which minimizes communication. You need to track this at least during nightly builds and make sure the team sees the total number of issues; do not allow the number of issues to grow uncontrolled. On the projects I run, I require code analysis to have run on code before check-in (a check-in rule). This means you have to have a clean compile (or CA won't run), so as an extra benefit there are very few broken builds. You can also change a few of the rules to compile as errors instead of warnings; I often do this for "missing dispose" issues, which you REALLY do not want in your app. Tip: place your custom CA rules files as part of the solution; that way it works when you do branching etc. (the path to the CA file is relative in VS). Some may argue that CA is not as good as manual inspection, but since manual inspection in reality suffers from the three issues above, it is in my opinion a MUCH better (and much cheaper) approach from a helicopter perspective.

    Tirthankar Dutta | Director, Avanade:
    I think code review should be run both before and after check-ins. There are some code metrics that are meant to be run on the entire code base. Also, especially on multi-site projects, one should strive to architect in a way that lets the men manage the framework while the boys write the repetitive code; this scales very well with the need to review less, by containment and by imposing architectural restrictions to emphasise the design.

    Bruno Capuano | Microsoft ALM MVP:
    For code reviews (meaning peer reviews) in a distributed team I use http://www.vsanywhere.com/default.aspx

    David Jobling | Global Sr. Director, Avanade:
    Peer review is the only way to scale, and it's a great practice for everyone on the team to learn to perform and accept. In my experience you soon learn whose code to watch more than others and tune your attention accordingly.

    Mikkel Toudal Kristiansen | Manager, Avanade:
    If you have several branches in your code base, you will need to merge often. This requires manual merging when a file has been changed in both branches, and it offers a good opportunity to actually review the changed code. So my advice is: merging between branches should be done as often as possible, it should be done by a senior developer, and he/she should perform a full code review of the code being merged. As for detecting architectural smells and code smells creeping into the code base, one really good third-party tool exists: NDepend (http://www.ndepend.com/, for static code analysis of the current state of the code base). You could also consider adding StyleCop to the solution.

    Jesse Houwing | Visual Studio ALM Ranger:
    I gave a presentation on this subject at the TechDays conference in NL last year. See my presentation and slides here (talk in Dutch, but English presentation): http://blog.jessehouwing.nl/2012/03/did-you-miss-my-techdaysnl-talk-on-code.html. I'd like to add a few more points. Before/after check-in is mostly a trust issue: if you have a team that does diligent peer reviews and regularly talks/sits together or pair programs, there's no need to enforce a before-check-in policy; the peer programming and regular feedback during development can take care of most of the review requirements, as long as the team isn't under stress. Under stress, enforce pre-check-in reviews; it might sound strange if you're already under time or budgetary constraints, but it is under such conditions that most real issues are created or pile up. Use tools to catch the most common errors: Code Analysis/FxCop was already mentioned, and HP Fortify, ReSharper, CodeRush etc. can help you there. There are also a lot of third-party rules you can add to Code Analysis; I've written a few myself (http://fccopcontrib.codeplex.com), and various teams from Microsoft have added their own rules (MSOCAF for SharePoint, WSSF for WCF). For common errors that keep cropping up, see if you can define a rule; it's much easier. But more importantly, make sure you have a good help page explaining *WHY* it's wrong. If you have small feature or developer branches/shelvesets, you might want to review pre-merge. It's still better to do peer reviews and peer programming, but the most important thing is that bad-quality code doesn't make it into the important branch. So my philosophy: use tooling as much as possible; make sure the team understands the tooling and the importance of the things it flags (it's too easy to just click "suppress all" to ignore the warnings); under stress, tighten the process, because it's under stress that the problems of late reviews will really surface; and most importantly, if you do reviews, do them as early as possible, but never later than needed. In other words, pre-check-in versus post-check-in doesn't really matter, as long as the review is done before the code is released. It will just be much more expensive to fix any review outcomes the later you find them.

    ---

    I would love to hear what you think!

    Read the article

  • Very different I/O performance in C++ on Windows

    - by Mr.Gate
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1 GB or more), especially (as it seems) when you try to grow them in size. Anyway, to verify our impressions we tried the following (on Win 7 64-bit, 4 cores, 8 GB RAM, 32-bit code compiled with VC2008):

    a) Open a non-existing file. Write it from the beginning up to 1 GB in 1 MB slots. Now you have a 1 GB file. Now randomize 10000 positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. The time to create the file is quite fast (about 0.3"), and the time to do the 10000 writes is fast all the same (about 0.03"). Very good, this is the beginning. Now try something else...

    b) Open a non-existing file, seek to 1 GB-1 byte and write just 1 byte. Now you have another 1 GB file. Follow the next steps exactly the same way as case 'a', close the file and look at the results. The time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe... about 90"!!!!!

    b.1) Open a non-existing file, don't write any bytes. Act as before, randomizing, seeking and writing; close the file and look at the result. The write time is just as long: about 90"!!!!!

    OK... this is quite amazing. But there's more!

    c) Open again the file you created in case 'a', don't truncate it... randomize again 10000 positions and act as before. You're as fast as before, about 0.03" to write 10000 times. This sounds OK... try another step.

    d) Now open the file you created in case 'b', don't truncate it... randomize again 10000 positions and act as before. You're slow again, but the time is reduced to... 45"!! Maybe, trying again, the time will reduce further. I actually wonder why... any idea?

    The following is part of the code I used to test what I described in the previous cases (you'll have to change something in order to get a clean compilation; I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but writing only in random order is the clearest test. We tried using std::fstream but also using CreateFile(), WriteFile() and so on directly, and the results are the same (even if std::fstream is actually a little slower).

        Parameters for case 'a'   = -f_tempdir_\casea.dat -n10000 -t -p -w
        Parameters for case 'b'   = -f_tempdir_\caseb.dat -n10000 -t -v -w
        Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w
        Parameters for case 'c'   = -f_tempdir_\casea.dat -n10000 -w
        Parameters for case 'd'   = -f_tempdir_\caseb.dat -n10000 -w

    Run the test (and even others) and see...

        // iotest.cpp : Defines the entry point for the console application.
        #include <windows.h>
        #include <iostream>
        #include <set>
        #include <vector>
        #include "stdafx.h"

        double RealTime_Microsecs()
        {
            LARGE_INTEGER fr = {0, 0};
            LARGE_INTEGER ti = {0, 0};
            double time = 0.0;
            QueryPerformanceCounter(&ti);
            QueryPerformanceFrequency(&fr);
            time = (double) ti.QuadPart / (double) fr.QuadPart;
            return time;
        }

        int main(int argc, char* argv[])
        {
            std::string sFileName;
            size_t stSize, stTimes, stBytes;
            int retval = 0;
            char *p = NULL;
            char *pPattern = NULL;
            char *pReadBuf = NULL;
            try
            {
                // Defaults
                stSize = 1<<30; // 1 GB
                stTimes = 1000;
                stBytes = 50;
                bool bTruncate = false;
                bool bPre = false;
                bool bPreFast = false;
                bool bOrdered = false;
                bool bReverse = false;
                bool bWriteOnly = false;

                // Consume the parameters
                for(int index = 1; index < argc; ++index)
                {
                    if ( '-' != argv[index][0] ) throw;
                    switch(argv[index][1])
                    {
                        case 'f': sFileName = argv[index]+2; break;
                        case 's': stSize = xw::str::strtol(argv[index]+2); break;
                        case 'n': stTimes = xw::str::strtol(argv[index]+2); break;
                        case 'b': stBytes = xw::str::strtol(argv[index]+2); break;
                        case 't': bTruncate = true; break;
                        case 'p': bPre = true, bPreFast = false; break;
                        case 'v': bPreFast = true, bPre = false; break;
                        case 'o': bOrdered = true, bReverse = false; break;
                        case 'r': bReverse = true, bOrdered = false; break;
                        case 'w': bWriteOnly = true; break;
                        default: throw; break;
                    }
                }
                if ( sFileName.empty() )
                {
                    std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl;
                    std::cout << "-t truncates the file, -p preloads the file, -v preloads 'veloce', -o writes in order mode, -r writes in reverse order mode, -w Write Only" << std::endl;
                    std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl;
                    throw;
                }
                if ( !stSize || !stTimes || !stBytes )
                {
                    std::cout << "Invalid Parameters" << std::endl;
                    return -1;
                }

                size_t stBestSize = 0x00100000;
                std::fstream fFile;
                fFile.open(sFileName.c_str(), std::ios_base::binary|std::ios_base::out|std::ios_base::in|(bTruncate?std::ios_base::trunc:0));
                p = new char[stBestSize];
                pPattern = new char[stBytes];
                pReadBuf = new char[stBytes];
                memset(p, 0, stBestSize);
                memset(pPattern, (int)(stBytes&0x000000ff), stBytes);

                double dTime = RealTime_Microsecs();
                size_t stCopySize, stSizeToCopy = stSize;

                if ( bPre )
                {
                    do {
                        stCopySize = std::min(stSizeToCopy, stBestSize);
                        fFile.write(p, stCopySize);
                        stSizeToCopy -= stCopySize;
                    } while (stSizeToCopy);
                    std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
                }
                else if ( bPreFast )
                {
                    fFile.seekp(stSize-1);
                    fFile.write(p, 1);
                    std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
                }

                size_t stPos;
                ::srand((unsigned int)dTime);
                double dReadTime, dWriteTime;
                stCopySize = stTimes;
                std::vector<size_t> inVect;
                std::vector<size_t> outVect;
                std::set<size_t> outSet;
                std::set<size_t> inSet;

                // Prepare vectors and sets
                do {
                    stPos = (size_t)(::rand()<<16) % stSize;
                    outVect.push_back(stPos);
                    outSet.insert(stPos);
                    stPos = (size_t)(::rand()<<16) % stSize;
                    inVect.push_back(stPos);
                    inSet.insert(stPos);
                } while (--stCopySize);

                // Write & read using the vectors (random order)
                if ( !bReverse && !bOrdered )
                {
                    std::vector<size_t>::iterator outI, inI;
                    outI = outVect.begin();
                    inI = inVect.begin();
                    stCopySize = stTimes;
                    dReadTime = 0.0;
                    dWriteTime = 0.0;
                    do {
                        dTime = RealTime_Microsecs();
                        fFile.seekp(*outI);
                        fFile.write(pPattern, stBytes);
                        dWriteTime += RealTime_Microsecs() - dTime;
                        ++outI;
                        if ( !bWriteOnly )
                        {
                            dTime = RealTime_Microsecs();
                            fFile.seekg(*inI);
                            fFile.read(pReadBuf, stBytes);
                            dReadTime += RealTime_Microsecs() - dTime;
                            ++inI;
                        }
                    } while (--stCopySize);
                    std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f')
                              << " (Ave: " << xw::str::itoa(dWriteTime/stTimes, 10, 'f') << ")" << std::endl;
                    if ( !bWriteOnly )
                    {
                        std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f')
                                  << " (Ave: " << xw::str::itoa(dReadTime/stTimes, 10, 'f') << ")" << std::endl;
                    }
                } // End

                // Write in order
                if ( bOrdered )
                {
                    std::set<size_t>::iterator i = outSet.begin();
                    dWriteTime = 0.0;
                    stCopySize = 0;
                    for(; i != outSet.end(); ++i)
                    {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekp(stPos);
                        fFile.write(pPattern, stBytes);
                        dWriteTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f')
                              << " in " << xw::str::itoa(stCopySize)
                              << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl;
                    if ( !bWriteOnly )
                    {
                        i = inSet.begin();
                        dReadTime = 0.0;
                        stCopySize = 0;
                        for(; i != inSet.end(); ++i)
                        {
                            stPos = *i;
                            dTime = RealTime_Microsecs();
                            fFile.seekg(stPos);
                            fFile.read(pReadBuf, stBytes);
                            dReadTime += RealTime_Microsecs() - dTime;
                            ++stCopySize;
                        }
                        std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f')
                                  << " in " << xw::str::itoa(stCopySize)
                                  << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl;
                    }
                } // End

                // Write in reverse order
                if ( bReverse )
                {
                    std::set<size_t>::reverse_iterator i = outSet.rbegin();
                    dWriteTime = 0.0;
                    stCopySize = 0;
                    for(; i != outSet.rend(); ++i)
                    {
                        stPos = *i;
                        dTime = RealTime_Microsecs();
                        fFile.seekp(stPos);
                        fFile.write(pPattern, stBytes);
                        dWriteTime += RealTime_Microsecs() - dTime;
                        ++stCopySize;
                    }
                    std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f')
                              << " in " << xw::str::itoa(stCopySize)
                              << " (Ave: " << xw::str::itoa(dWriteTime/stCopySize, 10, 'f') << ")" << std::endl;
                    if ( !bWriteOnly )
                    {
                        i = inSet.rbegin();
                        dReadTime = 0.0;
                        stCopySize = 0;
                        for(; i != inSet.rend(); ++i)
                        {
                            stPos = *i;
                            dTime = RealTime_Microsecs();
                            fFile.seekg(stPos);
                            fFile.read(pReadBuf, stBytes);
                            dReadTime += RealTime_Microsecs() - dTime;
                            ++stCopySize;
                        }
                        std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f')
                                  << " in " << xw::str::itoa(stCopySize)
                                  << " (Ave: " << xw::str::itoa(dReadTime/stCopySize, 10, 'f') << ")" << std::endl;
                    }
                } // End

                dTime = RealTime_Microsecs();
                fFile.close();
                std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs()-dTime, 5, 'f') << std::endl;
                std::cout << "Program Terminated" << std::endl;
            }
            catch(...)
            {
                std::cout << "Something wrong or wrong parameters" << std::endl;
                retval = -1;
            }

            if ( p ) delete []p;
            if ( pPattern ) delete []pPattern;
            if ( pReadBuf ) delete []pReadBuf;
            return retval;
        }

    Read the article

  • Very different IO performance in C/C++

    - by Roberto Tirabassi
    Hi all, I'm a new user and my English is not so good, so I hope to be clear. We're facing a performance problem using large files (1 GB or more), especially (as it seems) when you try to grow them in size. Anyway, to verify our impressions we tried the following (on Windows 7 64-bit, 4 cores, 8 GB RAM, 32-bit code compiled with VC2008):

a) Open a non-existent file. Write it from the beginning up to 1 GB in 1 MB slots. Now you have a 1 GB file. Now pick 10000 random positions within that file, seek to each position and write 50 bytes there, no matter what you write. Close the file and look at the results. The time to create the file is quite fast (about 0.3"), and the time to write 10000 times is fast as well (about 0.03"). Very good, this is the beginning. Now try something else...

b) Open a non-existent file, seek to 1 GB minus 1 byte and write just 1 byte. Now you have another 1 GB file. Follow exactly the same steps as in case 'a', close the file and look at the results. The time to create the file is the fastest you can imagine (about 0.00009"), but the write time is something you can't believe: about 90"!

b.1) Open a non-existent file and don't write any bytes. Proceed as before, randomizing, seeking and writing; close the file and look at the result. The write time is just as long: about 90"!

OK, this is quite amazing. But there's more!

c) Open the file you created in case 'a' again, don't truncate it, randomize another 10000 positions and proceed as before. You're as fast as before: about 0.03" to write 10000 times. This sounds OK... try another step.

d) Now open the file you created in case 'b', don't truncate it, randomize another 10000 positions and proceed as before. You're slow again, although the time is reduced to... 45"! Maybe, trying again, the time will shrink further. I really wonder why. Any ideas?

The following is part of the code I used to test the cases above (you'll have to change something in order to get a clean compilation; I just cut & pasted from some source code, sorry). The sample can read and write in random, ordered or reverse-ordered mode, but write-only in random order is the clearest test. We tried using std::fstream but also using CreateFile(), WriteFile() and so on directly; the results are the same (even if std::fstream is actually a little slower).

Parameters for case 'a' = -f_tempdir_\casea.dat -n10000 -t -p -w
Parameters for case 'b' = -f_tempdir_\caseb.dat -n10000 -t -v -w
Parameters for case 'b.1' = -f_tempdir_\caseb.dat -n10000 -t -w
Parameters for case 'c' = -f_tempdir_\casea.dat -n10000 -w
Parameters for case 'd' = -f_tempdir_\caseb.dat -n10000 -w

Run the test (and even others) and see...

// iotest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"   // must come first when using MSVC precompiled headers
#include <windows.h>
#include <iostream>
#include <set>
#include <vector>
#include <fstream>    // added so the sample compiles cleanly
#include <string>
#include <algorithm>
#include <cstring>
#include <cstdlib>

double RealTime_Microsecs()
{
    LARGE_INTEGER fr = {0, 0};
    LARGE_INTEGER ti = {0, 0};
    double time = 0.0;
    QueryPerformanceCounter(&ti);
    QueryPerformanceFrequency(&fr);
    time = (double) ti.QuadPart / (double) fr.QuadPart;
    return time;
}

int main(int argc, char* argv[])
{
    std::string sFileName;
    size_t stSize, stTimes, stBytes;
    int retval = 0;
    char *p = NULL;
    char *pPattern = NULL;
    char *pReadBuf = NULL;
    try
    {
        // Defaults
        stSize = 1 << 30; // 1Gb
        stTimes = 1000;
        stBytes = 50;
        bool bTruncate = false;
        bool bPre = false;
        bool bPreFast = false;
        bool bOrdered = false;
        bool bReverse = false;
        bool bWriteOnly = false;

        // Consume the parameters
        for (int index = 1; index < argc; ++index)
        {
            if ('-' != argv[index][0]) throw;
            switch (argv[index][1])
            {
                case 'f': sFileName = argv[index] + 2; break;
                case 's': stSize = xw::str::strtol(argv[index] + 2); break;
                case 'n': stTimes = xw::str::strtol(argv[index] + 2); break;
                case 'b': stBytes = xw::str::strtol(argv[index] + 2); break;
                case 't': bTruncate = true; break;
                case 'p': bPre = true, bPreFast = false; break;
                case 'v': bPreFast = true, bPre = false; break;
                case 'o': bOrdered = true, bReverse = false; break;
                case 'r': bReverse = true, bOrdered = false; break;
                case 'w': bWriteOnly = true; break;
                default: throw; break;
            }
        }

        if (sFileName.empty())
        {
            std::cout << "Usage: -f<File Name> -s<File Size> -n<Number of Reads and Writes> -b<Bytes per Read and Write> -t -p -v -o -r -w" << std::endl;
            std::cout << "-t truncates the file, -p pre load the file, -v pre load 'veloce', -o writes in order mode, -r write in reverse order mode, -w Write Only" << std::endl;
            std::cout << "Default: 1Gb, 1000 times, 50 bytes" << std::endl;
            throw;
        }

        if (!stSize || !stTimes || !stBytes)
        {
            std::cout << "Invalid Parameters" << std::endl;
            return -1;
        }

        size_t stBestSize = 0x00100000;
        std::fstream fFile;
        fFile.open(sFileName.c_str(), std::ios_base::binary | std::ios_base::out | std::ios_base::in |
                   (bTruncate ? std::ios_base::trunc : std::ios_base::openmode(0)));
        p = new char[stBestSize];
        pPattern = new char[stBytes];
        pReadBuf = new char[stBytes];
        memset(p, 0, stBestSize);
        memset(pPattern, (int)(stBytes & 0x000000ff), stBytes);

        double dTime = RealTime_Microsecs();
        size_t stCopySize, stSizeToCopy = stSize;

        if (bPre)
        {
            do
            {
                stCopySize = std::min(stSizeToCopy, stBestSize);
                fFile.write(p, stCopySize);
                stSizeToCopy -= stCopySize;
            } while (stSizeToCopy);
            std::cout << "Creating time is: " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
        }
        else if (bPreFast)
        {
            fFile.seekp(stSize - 1);
            fFile.write(p, 1);
            std::cout << "Creating Fast time is: " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
        }

        size_t stPos;
        ::srand((unsigned int)dTime);
        double dReadTime, dWriteTime;
        stCopySize = stTimes;
        std::vector<size_t> inVect;
        std::vector<size_t> outVect;
        std::set<size_t> outSet;
        std::set<size_t> inSet;

        // Prepare vector and set
        do
        {
            stPos = (size_t)(::rand() << 16) % stSize;
            outVect.push_back(stPos);
            outSet.insert(stPos);
            stPos = (size_t)(::rand() << 16) % stSize;
            inVect.push_back(stPos);
            inSet.insert(stPos);
        } while (--stCopySize);

        // Write & read using vectors (random order)
        if (!bReverse && !bOrdered)
        {
            std::vector<size_t>::iterator outI, inI;
            outI = outVect.begin();
            inI = inVect.begin();
            stCopySize = stTimes;
            dReadTime = 0.0;
            dWriteTime = 0.0;
            do
            {
                dTime = RealTime_Microsecs();
                fFile.seekp(*outI);
                fFile.write(pPattern, stBytes);
                dWriteTime += RealTime_Microsecs() - dTime;
                ++outI;
                if (!bWriteOnly)
                {
                    dTime = RealTime_Microsecs();
                    fFile.seekg(*inI);
                    fFile.read(pReadBuf, stBytes);
                    dReadTime += RealTime_Microsecs() - dTime;
                    ++inI;
                }
            } while (--stCopySize);
            std::cout << "Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " (Ave: " << xw::str::itoa(dWriteTime / stTimes, 10, 'f') << ")" << std::endl;
            if (!bWriteOnly)
            {
                std::cout << "Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " (Ave: " << xw::str::itoa(dReadTime / stTimes, 10, 'f') << ")" << std::endl;
            }
        } // End

        // Write in order
        if (bOrdered)
        {
            std::set<size_t>::iterator i = outSet.begin();
            dWriteTime = 0.0;
            stCopySize = 0;
            for (; i != outSet.end(); ++i)
            {
                stPos = *i;
                dTime = RealTime_Microsecs();
                fFile.seekp(stPos);
                fFile.write(pPattern, stBytes);
                dWriteTime += RealTime_Microsecs() - dTime;
                ++stCopySize;
            }
            std::cout << "Ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime / stCopySize, 10, 'f') << ")" << std::endl;
            if (!bWriteOnly)
            {
                i = inSet.begin();
                dReadTime = 0.0;
                stCopySize = 0;
                for (; i != inSet.end(); ++i)
                {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekg(stPos);
                    fFile.read(pReadBuf, stBytes);
                    dReadTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime / stCopySize, 10, 'f') << ")" << std::endl;
            }
        } // End

        // Write in reverse order
        if (bReverse)
        {
            std::set<size_t>::reverse_iterator i = outSet.rbegin();
            dWriteTime = 0.0;
            stCopySize = 0;
            for (; i != outSet.rend(); ++i)
            {
                stPos = *i;
                dTime = RealTime_Microsecs();
                fFile.seekp(stPos);
                fFile.write(pPattern, stBytes);
                dWriteTime += RealTime_Microsecs() - dTime;
                ++stCopySize;
            }
            std::cout << "Reverse ordered Write time is " << xw::str::itoa(dWriteTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dWriteTime / stCopySize, 10, 'f') << ")" << std::endl;
            if (!bWriteOnly)
            {
                i = inSet.rbegin();
                dReadTime = 0.0;
                stCopySize = 0;
                for (; i != inSet.rend(); ++i)
                {
                    stPos = *i;
                    dTime = RealTime_Microsecs();
                    fFile.seekg(stPos);
                    fFile.read(pReadBuf, stBytes);
                    dReadTime += RealTime_Microsecs() - dTime;
                    ++stCopySize;
                }
                std::cout << "Reverse ordered Read time is " << xw::str::itoa(dReadTime, 5, 'f') << " in " << xw::str::itoa(stCopySize) << " (Ave: " << xw::str::itoa(dReadTime / stCopySize, 10, 'f') << ")" << std::endl;
            }
        } // End

        dTime = RealTime_Microsecs();
        fFile.close();
        std::cout << "Flush/Close Time is " << xw::str::itoa(RealTime_Microsecs() - dTime, 5, 'f') << std::endl;
        std::cout << "Program Terminated" << std::endl;
    }
    catch (...)
    {
        std::cout << "Something wrong or wrong parameters" << std::endl;
        retval = -1;
    }

    if (p) delete[] p;
    if (pPattern) delete[] pPattern;
    if (pReadBuf) delete[] pReadBuf;
    return retval;
}
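A note for readers hitting the same wall: the symptom in cases 'b', 'b.1' and 'd' is consistent with NTFS maintaining a per-file "valid data length" (VDL). When a file is grown by seeking past the end, the VDL stays where real data was last written, and every later write beyond the VDL forces the file system to zero-fill the gap first; case 'a' never pays this cost because the sequential 1 MB pre-write already advanced the VDL to 1 GB. This is my reading of the numbers, not a verified diagnosis. If it holds, a minimal sketch of pre-extending a file without the zero-fill penalty looks like this; SetFileValidData(), SetEndOfFile() and SetFilePointerEx() are real Win32 APIs, but SetFileValidData() requires the SE_MANAGE_VOLUME_NAME privilege (enabled via AdjustTokenPrivileges) and skips the guarantee that unwritten regions read back as zeros, so use it with care:

// Sketch: create a 1 GB file whose full range is marked valid up front, so
// later random writes don't trigger zero-filling of the not-yet-written gap.
#include <windows.h>

bool PreallocateValid(const char* path, LONGLONG bytes)
{
    HANDLE h = CreateFileA(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return false;
    LARGE_INTEGER size;
    size.QuadPart = bytes;
    bool ok = SetFilePointerEx(h, size, NULL, FILE_BEGIN)  // move to the new EOF
           && SetEndOfFile(h)                              // set the file size
           && SetFileValidData(h, bytes);                  // no lazy zero-fill
    CloseHandle(h);
    return ok;
}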

    Read the article

  • Ajax Control Toolkit Now Supports jQuery

    - by Stephen.Walther
    I’m excited to announce the September 2013 release of the Ajax Control Toolkit, which now supports building new Ajax Control Toolkit controls with jQuery. You can download the latest release of the Ajax Control Toolkit from http://AjaxControlToolkit.CodePlex.com or you can install the Ajax Control Toolkit directly within Visual Studio via NuGet.

The New jQuery Extender Base Class

This release of the Ajax Control Toolkit introduces a new jQueryExtender base class. This new base class enables you to create Ajax Control Toolkit controls with jQuery instead of the Microsoft Ajax Library. Currently, only one control in the Ajax Control Toolkit has been rewritten to use the new jQueryExtender base class (only one control has been jQueryized). The ToggleButton control is the first of the Ajax Control Toolkit controls to undergo this dramatic transformation. All of the other controls in the Ajax Control Toolkit are still written using the Microsoft Ajax Library. We hope to gradually rewrite these controls as jQuery controls over time. You can view the new jQuery ToggleButton live at the Ajax Control Toolkit sample site: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToggleButton/ToggleButton.aspx

Why are we rewriting Ajax Control Toolkit controls with jQuery? There are very few developers actively working with the Microsoft Ajax Library, while there are thousands of developers actively working with jQuery. Because we want talented developers in the community to continue to contribute to the Ajax Control Toolkit, and because almost all JavaScript developers are familiar with jQuery, it makes sense to support jQuery in the Ajax Control Toolkit. Also, we believe that the Ajax Control Toolkit is a great framework for Web Forms developers who want to build new ASP.NET controls that use JavaScript. The Ajax Control Toolkit has great features such as automatic bundling, minification, caching, and compression. We want to make it easy for ASP.NET developers to build new controls that take advantage of these features.

Instantiating Controls with data-* Attributes

We took advantage of the new jQueryExtender base class to change the way that Ajax Control Toolkit controls are instantiated. In the past, adding an Ajax Control Toolkit control to a page resulted in inline JavaScript being injected into the page. For example, adding the ToggleButton control to a page injected the following HTML and script:

<input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked" />
<script type="text/javascript">
//<![CDATA[
Sys.Application.add_init(function() {
    $create(Sys.Extended.UI.ToggleButtonBehavior, {"CheckedImageAlternateText":"Check",
        "CheckedImageUrl":"ToggleButton_Checked.gif", "ImageHeight":19, "ImageWidth":19,
        "UncheckedImageAlternateText":"UnCheck", "UncheckedImageUrl":"ToggleButton_Unchecked.gif",
        "id":"ctl00_SampleContent_ToggleButtonExtender1"}, null, null,
        $get("ctl00_SampleContent_CheckBox1"));
});
//]]>
</script>

Notice the call to the JavaScript $create() method at the bottom of the page. When using the Microsoft Ajax Library, this call to the $create() method is necessary to create the Ajax Control Toolkit control. This inline script looks pretty ugly to a modern JavaScript developer. Inline script! Horrible!

The jQuery version of the ToggleButton injects the following HTML, and no script, into the page:

<input id="ctl00_SampleContent_CheckBox1" name="ctl00$SampleContent$CheckBox1" type="checkbox" checked="checked"
    data-act-togglebuttonextender="imageWidth:19, imageHeight:19,
    uncheckedImageUrl:'ToggleButton_Unchecked.gif', checkedImageUrl:'ToggleButton_Checked.gif',
    uncheckedImageAlternateText:'I don&#39;t understand why you don&#39;t like ASP.NET',
    checkedImageAlternateText:'It&#39;s really nice to hear from you that you like ASP.NET'" />

Notice that there is no script! There is no call to the $create() method. In fact, there is no inline JavaScript at all. The jQuery version of the ToggleButton uses an HTML5 data-* attribute instead of an inline script: the ToggleButton control is instantiated with a data-act-togglebuttonextender attribute. Using data-* attributes results in much cleaner markup (you don't need to feel embarrassed when selecting View Source in your browser).

Ajax Control Toolkit versus jQuery

So, in a jQuery world, why is the Ajax Control Toolkit needed at all? Why not just use jQuery plugins instead? For example, there are lots of jQuery ToggleButton plugins floating around the Internet. Why not just use one of these jQuery plugins instead of the Ajax Control Toolkit ToggleButton control? There are three main reasons why the Ajax Control Toolkit continues to be valuable in a jQuery world:

1. Ajax Control Toolkit controls run on both the server and the client. jQuery plugins are client-only: a jQuery plugin does not include any server-side code. If you need to perform any work on the server (think of the AjaxFileUpload control), then you can't use a pure jQuery solution.

2. Ajax Control Toolkit controls provide a better Visual Studio experience. You don't get any design-time experience when you use jQuery plugins within Visual Studio. Ajax Control Toolkit controls, on the other hand, are designed to work with Visual Studio. For example, you can use the Visual Studio Properties window to set Ajax Control Toolkit control properties.

3. Ajax Control Toolkit controls shield you from working with JavaScript. I like writing code in JavaScript. However, not all developers like JavaScript, and some developers want to avoid writing any JavaScript code at all. The Ajax Control Toolkit enables you to take advantage of JavaScript (and the latest features of HTML5) in your ASP.NET Web Forms websites without writing a single line of JavaScript.

Better ToolkitScriptManager Documentation

With this release, we have added more detailed documentation for using the ToolkitScriptManager. In particular, we added documentation that describes how to take advantage of the new bundling, minification, compression, and caching features of the Ajax Control Toolkit. The ToolkitScriptManager documentation is part of the Ajax Control Toolkit sample site and can be read here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/ToolkitScriptManager/ToolkitScriptManager.aspx

Other Fixes

This release of the Ajax Control Toolkit includes several important bug fixes. For example, the Ajax Control Toolkit Twitter control was completely rewritten with this release. Twitter is in the process of retiring the first version of their API. You can read about their plans here: https://dev.twitter.com/blog/planning-for-api-v1-retirement We completely rewrote the Ajax Control Toolkit Twitter control to use the new Twitter API. To take advantage of the new Twitter API, you must get a key and access token from Twitter and add the key and token to your web.config file. Detailed instructions for using the new version of the Ajax Control Toolkit Twitter control can be found here: http://www.asp.net/ajaxLibrary/AjaxControlToolkitSampleSite/Twitter/Twitter.aspx

Summary

We've made some really great changes to the Ajax Control Toolkit over the last two releases to modernize the toolkit. In the previous release, we updated the Ajax Control Toolkit to use a better bundling, minification, compression, and caching system. With this release, we updated the Ajax Control Toolkit to support jQuery. We also continue to update the Ajax Control Toolkit with important bug fixes. I hope you like these changes and I look forward to hearing your feedback.
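For readers curious how a data-* driven extender can bootstrap itself on the client, here is a minimal sketch. It is an illustration, not the toolkit's actual implementation: the attribute name matches the markup shown above, but the option parsing and image swapping are my assumptions.

// Hypothetical sketch: find every element carrying the extender attribute,
// parse its option string, and wire up a ToggleButton-like behavior.
$(function () {
    $("[data-act-togglebuttonextender]").each(function () {
        var checkbox = $(this);
        // The attribute value is a list of "name:value" pairs; wrapping it in
        // braces lets us evaluate it as an object literal (illustrative only;
        // a real implementation would use a proper parser).
        var options = new Function("return {" +
            checkbox.attr("data-act-togglebuttonextender") + "};")();
        var image = $("<img/>").attr({
            src: checkbox.prop("checked") ? options.checkedImageUrl
                                          : options.uncheckedImageUrl,
            width: options.imageWidth,
            height: options.imageHeight,
            alt: checkbox.prop("checked") ? options.checkedImageAlternateText
                                          : options.uncheckedImageAlternateText
        });
        checkbox.hide().after(image);
        image.click(function () {
            var checked = !checkbox.prop("checked");
            checkbox.prop("checked", checked);
            image.attr("src", checked ? options.checkedImageUrl
                                      : options.uncheckedImageUrl);
        });
    });
});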

    Read the article

  • Using Oracle ADF Data Visualization Tools (DVT) Line Graphs to Display Weather Information

    - by Christian David Straub
    Overview

A guest post by Jeanne Waldman. I have a simple JDeveloper Fusion application that retrieves weather data. I wanted to compare the week's temperatures of different locations in a graph. I decided to check out the dvt:lineGraph component, and it took me only a few minutes to add it to my .jspx page and supply it with data.

Drag and drop the dvt:lineGraph onto your page

I opened my .jspx page in design mode. In the Component Palette, I selected ADF Data Visualization. Then I dragged 'Line' onto my page. A dialog popped up giving me options for the type of line graph; I chose the default. A lineGraph displayed with some default data.

Hook up your weather data

Now I wanted to hook up my own data. I browsed the tagdoc, and I found the tabularData attribute.

Attribute: tabularData
Type: java.util.List
Tagdoc: Specifies a list of data that the graph uses to create a grid and populate itself. The List consists of a three-member Object array for each data value to be passed to the graph. The members of each array must be organized as follows: The first member (index 0) is the column label, in the grid, of the data value. This is generally a String. If the graph has a time axis, then this should be a Java Date. Column labels typically identify groups in the graph. The second member (index 1) is the row label, in the grid, of the data value. This is generally a String. Row labels appear as series labels in the graph (usually in the legend). The third member (index 2) is the data value, which is usually a Double.

So the first member is the column label of the data value, which would be the day of the week; the second member is the row label, which would be the location name; and the third member is the data value, which would be the temperature. I already had all this information; I just needed to put it in a List with a three-member Object array for each data value.

/**
 * This is used for the lineGraph to show the data for each location.
 */
public List<Object[]> getTabularData()
{
    List<Object[]> tabularData = new ArrayList<Object[]>();
    List<WeatherForecast> weatherForecastList = getWeatherForecastList();
    // loop through the list and build up the tabular data. Then cache it.
    for (WeatherForecast wf : weatherForecastList)
    {
        List<ForecastDay> forecastDayList = wf.getForecastDayList();
        String location = wf.getLocation();
        for (ForecastDay fday : forecastDayList)
        {
            String day = fday.getPrettyDate();
            String highTemp = fday.getHighF();
            tabularData.add(new Object[]{day, location, Double.valueOf(highTemp)});
        }
    }
    return tabularData;
}

Now I bound the lineGraph to this method by setting tabularData to #{weatherForAllLocationsBean.tabularData} (see the sketch at the end of this post). weatherForAllLocationsBean is my bean that is defined in faces-config.xml.

Adding a barGraph

In about 30 seconds, I added a barGraph with the same data. I dragged and dropped a bar graph onto the page and used the same tabularData as I did in the line graph.

Conclusion

I was very happy with how fast it was to hook up my weather data to these graphs. They look great, and they have built-in functionality. For instance, I can hide/show a location by clicking on the name of the location in the legend.
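For reference, the resulting bindings in the .jspx source would look roughly like the following sketch. Only the tabularData expression comes from the post; the ids and the exact attribute set JDeveloper generates are illustrative assumptions.

<!-- Sketch: dvt:lineGraph bound to the backing bean's tabularData method. -->
<dvt:lineGraph id="weatherLineGraph"
               tabularData="#{weatherForAllLocationsBean.tabularData}"/>

<!-- The barGraph added afterwards reuses the exact same data. -->
<dvt:barGraph id="weatherBarGraph"
              tabularData="#{weatherForAllLocationsBean.tabularData}"/>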

    Read the article

  • XNA Notes 011

    - by George Clingerman
    Even with a lot of the XNA community working on Dream Build Play entries (I swear I'm going to finish mine this year!), people are still finding time to do side projects and be amazingly active in the XNA and XBLIG community. With one eye on my code and one eye on the community, here's what I noticed these overachievers doing this past week!

Time Critical XNA News:
Xbox LIVE Indie Games sales data will be delayed March 17-20th due to some scheduled maintenance http://create.msdn.com/en-us/news/indie_games_data_delay_march2011
GameMarx is releasing a series of videos to help raise donations for victims of the earthquake and tsunami in Japan. Help out if you can! http://www.gamemarx.com/video/special/29/help-japan-sushido.aspx

XNA MVPs:
Catalin Zima shares his thoughts on the MVP summit and my book! http://www.catalinzima.com/2011/03/mvp-summit-2011/
Glenn Wilson (@mykre) helps the XNA team announce some new educational content that you don't want to miss if you're porting your app or game to Windows Phone 7 http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/653/Porting-your-App-or-Game-to-Windows-Phone-7.aspx and Windows Phone 7 from scratch http://www.virtualrealm.com.au/Blog/tabid/62/EntryId/654/Windows-Phone-from-Scratch.aspx and shares a link to some free architectural models and textures http://twitter.com/#!/Mykre/status/46410160784158720
George (that's me!) shares his MVP Summit 2011 summary and XBLIG thoughts http://geekswithblogs.net/clingermangw/archive/2011/03/15/144366.aspx

XNA Developers:
@SmallCaveGames shares a Code of Ethics for Xbox LIVE Indie Game Developers http://smallcavegames.blogspot.com/2011/03/unofficial-xblig-developers-code-of.html
Derek S adds more Xbox LIVE Indie Game studios to his master list of XBLIG links http://twitter.com/#!/Mr_Deeke/status/46140996056125440 http://xbl-indieverse.blogspot.com/p/xblig-links.html
Making games and want to help kids? Then share your story with GameFace: America! http://gameitupinitiative.com/about-the-initiative/programs/gameface-america/

Xbox LIVE Indie Games (XBLIG):
XonaGames shares some video footage of their booth from GDC 2011: Video 1: http://youtu.be/lxIV9nk3Gq4 Video 2: http://youtu.be/GgfrjqkxR_o Video 3: http://youtu.be/yVcpXrTX7SQ
Joystiq on Mommy's Best Games' Serious Sam Double D http://www.joystiq.com/2011/03/16/the-most-important-thing-about-serious-sam-double-d/
And The Escapist recommends that gamers start learning to avoid cleavage now http://www.escapistmagazine.com/news/view/108543-Boobie-Bomber-Makes-First-Appearance-in-Serious-Sam-Double-D
Magiko Gaming started a blog on the XBLIG dashboard daily Top 10 games in the US. A good way to go back in time and look at the history of which games were in the Top 10. http://dailytop10indiegames.wordpress.com/
Where are they going now? XBLIG developers at a crossroads: http://www.gamesetwatch.com/2011/03/where_are_they_going_now_xblig.php http://www.gamasutra.com/view/news/33527/InDepth_Where_Are_They_Going_Now_XBLIG_Developers_At_A_Crossroads_.php
BinaryTweed's Clover: A Curious Tail is Xbox LIVE's Deal of the Week! http://www.armlessoctopus.com/2011/03/15/what-luck-clover-a-curious-tale-is-half-price-this-week/
Looking for an Xbox LIVE Indie Game to buy? Writings of Mass Deduction has over 125 suggestions at this point! http://writingsofmassdeduction.com/
SkaStudios shares Vampire Smile Achievements AND their PAX East 2011 Booth Setup video http://www.ska-studios.com/2011/03/14/vampire-smile-achievement/ http://www.ska-studios.com/2011/03/15/pax-booth-setup-time-lapse/
MasterBlud and VVGTV start a new community for XBLIG developers and gamers to join http://vvgtv.forumotion.com/
Raymond Matthews (@DrakstarMatryx) covers Mommy's Best Games getting Serious http://www.darkstarmatryx.com/?p=286

XNA Development:
Dave Henry (@mort8088) posts the 4th tutorial in his series, XNA 4.0 SpriteBatch extended http://mort8088.com/2011/03/11/xna-4-0-tutorial-4-spritebatch-extended/
Tutorial 5 - Creating a manual blank texture http://mort8088.com/2011/03/13/xna-4-tutorial-5-manual-blank-texture/
XNA 4.0 Tutorial 6 - Spritesheet Object http://mort8088.com/2011/03/18/xna-4-0-tutorial-6-spritesheet-object/
Jason Mitchell shares a tutorial on setting the alpha value for SpriteBatch in XNA 4.0 http://www.jason-mitchell.com/index.php/2011/03/13/setting-alpha-value-for-spritebatch-draw-in-xna-4/
XNA for Silverlight Developers: Part 7 - Collision Detection http://www.silverlightshow.net/items/XNA-for-Silverlight-developers-Part-7-Collision-detection.aspx
Markus Ewald (@Cygon4) shares the full Ninject 2.0 binding for XNA and Sunburn http://twitter.com/#!/Cygon4/status/48330203826622464
Michael B. McLaughlin shares an AccelerometerInput XNA GameComponent he created (which I'm probably going to snag for a game I'm working on...) http://geekswithblogs.net/mikebmcl/archive/2011/03/17/accelerometerinput-xna-gamecomponent.aspx
Extra Credits tackles building a good tutorial; a must-watch for all indie game devs (thanks for pointing it out, Evan Johnson!) http://twitter.com/#!/johnsonevan/status/48452115680604160 http://www.escapistmagazine.com/videos/view/extra-credits/2921-Tutorials-101
ExEn is fully funded at this point, so it is definitely something for XBLIG developers to keep an eye on as they consider releasing their games on other platforms http://rockethub.com/projects/752-exen-xna-for-iphone-android-and-silverlight
Channel 9 and Greg Duncan post Mixing the Game State Management and Platformer XNA Recipes http://channel9.msdn.com/coding4fun/blog/Mixing-the-Game-State-Management-and-Platformer-XNA-Recipes
Sgt. Conker has noticed Mike McLaughlin has been crazy productive and has done a recap of his recent posts http://www.sgtconker.com/2011/03/recap-of-mikebmcls-posts/

    Read the article

  • Updating a SharePoint master page via a solution (WSP)

    - by Kelly Jones
    In my last blog post, I wrote how to deploy a SharePoint theme using Features and a solution package.  As promised in that post, here is how to update an already deployed master page. There are several ways to update a master page in SharePoint.  You could upload a new version to the master page gallery, or you could upload a new master page to the gallery, and then set the site to use this new page.  Manually uploading your master page to the master page gallery might be the best option, depending on your environment.  For my client, I did these steps in code, which is what they preferred. (Image courtesy of: http://www.joiningdots.net/blog/2007/08/sharepoint-and-quick-launch.html ) Before you decide which method you need to use, take a look at your existing pages.  Are they using the SharePoint dynamic token or the static token for the master page reference?  The wha, huh? SO, there are four ways to tell an .aspx page hosted in SharePoint which master page it should use: “~masterurl/default.master” – tells the page to use the default.master property of the site “~masterurl/custom.master” – tells the page to use the custom.master property of the site “~site/default.master” – tells the page to use the file named “default.master” in the site’s master page gallery “~sitecollection/default.master” – tells the page to use the file named “default.master” in the site collection’s master page gallery For more information about these tokens, take a look at this article on MSDN. Once you determine which token your existing pages are pointed to, then you know which file you need to update.  So, if the ~masterurl tokens are used, then you upload a new master page, either replacing the existing one or adding another one to the gallery.  If you’ve uploaded a new file with a new name, you’ll just need to set it as the master page either through the UI (MOSS only) or through code (MOSS or WSS Feature receiver code – or using SharePoint Designer). If the ~site or ~sitecollection tokens were used, then you’re limited to either replacing the existing master page, or editing all of your existing pages to point to another master page.  In most cases, it probably makes sense to just replace the master page. For my project, I’m working with WSS and the existing pages are set to the ~sitecollection token.  Based on this, I decided to just upload a new version of the existing master page (and not modify the dozens of existing pages). Also, since my client prefers Features and solutions, I created a master page Feature and a corresponding Feature Receiver.  For information on creating the elements and feature files, check out this post: http://sharepointmagazine.net/technical/development/deploying-the-master-page . This works fine, unless you are overwriting an existing master page, which was my case.  You’ll run into errors because the master page file needs to be checked out, replaced, and then checked in.  I wrote code in my Feature Activated event handler to accomplish these steps. 
Here are the steps necessary in code: get the file name from the elements file of the Feature; check out the file from the master page gallery; upload the new file to the master page gallery; and check the file back in. Here's the code in my Feature Receiver:

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    try
    {
        SPElementDefinitionCollection col =
            properties.Definition.GetElementDefinitions(System.Globalization.CultureInfo.CurrentCulture);

        using (SPWeb curweb = GetCurWeb(properties))
        {
            foreach (SPElementDefinition ele in col)
            {
                if (string.Compare(ele.ElementType, "Module", true) == 0)
                {
                    // <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
                    //   <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
                    //         Path="MasterPages/myMaster.master" />
                    // </Module>
                    string Url = ele.XmlDefinition.Attributes["Url"].Value;
                    foreach (System.Xml.XmlNode file in ele.XmlDefinition.ChildNodes)
                    {
                        string Url2 = file.Attributes["Url"].Value;
                        string Path = file.Attributes["Path"].Value;
                        string fileType = file.Attributes["Type"].Value;

                        if (string.Compare(fileType, "GhostableInLibrary", true) == 0)
                        {
                            // Check out file in document library
                            SPFile existingFile = curweb.GetFile(Url + "/" + Url2);

                            if (existingFile != null)
                            {
                                if (existingFile.CheckOutStatus != SPFile.SPCheckOutStatus.None)
                                {
                                    throw new Exception("The master page file is already checked out. Please make sure the master page file is checked in, before activating this feature.");
                                }
                                else
                                {
                                    existingFile.CheckOut();
                                    existingFile.Update();
                                }
                            }

                            // Upload file to document library
                            string filePath = System.IO.Path.Combine(properties.Definition.RootDirectory, Path);
                            string fileName = System.IO.Path.GetFileName(filePath);
                            char slash = Convert.ToChar("/");
                            string[] folders = existingFile.ParentFolder.Url.Split(slash);

                            if (folders.Length > 2)
                            {
                                Logger.logMessage("More than two folders were detected in the library path for the master page. Only two are supported.",
                                    Logger.LogEntryType.Information); // custom logging component
                            }

                            SPFolder myLibrary = curweb.Folders[folders[0]].SubFolders[folders[1]];

                            FileStream fs = File.OpenRead(filePath);

                            SPFile newFile = myLibrary.Files.Add(fileName, fs, true);

                            myLibrary.Update();
                            newFile.CheckIn("Updated by Feature", SPCheckinType.MajorCheckIn);
                            newFile.Update();
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        string msg = "Error occurred during feature activation";
        Logger.logException(ex, msg, "");
    }
}

/// <summary>
/// Using a Feature's properties, get a reference to the Current Web
/// </summary>
/// <param name="properties"></param>
public SPWeb GetCurWeb(SPFeatureReceiverProperties properties)
{
    SPWeb curweb;

    // Check if the parent of the web is a site or a web
    if (properties != null && properties.Feature.Parent.GetType().ToString() == "Microsoft.SharePoint.SPWeb")
    {
        // Get web from parent
        curweb = (SPWeb)properties.Feature.Parent;
    }
    else
    {
        // Get web from Site
        using (SPSite cursite = (SPSite)properties.Feature.Parent)
        {
            curweb = (SPWeb)cursite.OpenWeb();
        }
    }

    return curweb;
}

This did the trick.
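For completeness, the Module element this receiver walks lives in the Feature's elements.xml. Assembling the fragment quoted in the code comments above into a full file looks like this (the surrounding Elements wrapper is the standard SharePoint Feature schema; the file names are the same illustrative ones used in the comments):

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="DefaultMasterPage" List="116" Url="_catalogs/masterpage" RootWebOnly="FALSE">
    <File Url="myMaster.master" Type="GhostableInLibrary" IgnoreIfAlreadyExists="TRUE"
          Path="MasterPages/myMaster.master" />
  </Module>
</Elements>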
The Feature receiver above allowed me to update my existing master page through an easily repeatable process (which is great when you are working with more than one environment and want to do things like TEST it!). I did run into what I would classify as a strange issue with one of my subsites, but that's the topic for another blog post.

    Read the article

  • C++ Little Wonders: The C++11 auto keyword redux

    - by James Michael Hare
    I’ve decided to create a sub-series of my Little Wonders posts to focus on C++. Just like their C# counterparts, these posts will focus on those features of the C++ language that can help improve code by making it easier to write and maintain. The index of the C# Little Wonders can be found here.

This has been a busy week with a rollout of some new website features here at my work, so I don't have a big post for this week. But I wanted to write something up, and since lately I've been renewing my C++ skills in a separate project, it seemed like a good opportunity to start a C++ Little Wonders series. Most of my development work still tends to focus on C#, but it was great to get back into the saddle and renew my C++ knowledge. Today I'm going to focus on a new feature in C++11 (formerly known as C++0x, a major step forward in the C++ language standard). While this small keyword can seem trivial, I feel it is a big step forward in improving readability in C++ programs.

The auto keyword

If you've worked with C++ for a long time, you probably have some passing familiarity with the old auto keyword, one that was almost never written out because it was the default. That is, in the code below (before C++11):

int foo()
{
    // automatic variables (allocated and deallocated on stack)
    int x;
    auto int y;

    // static variables (retain their value across calls)
    static int z;

    return 0;
}

The variable x is assumed to be auto because that is the default; thus it is unnecessary to specify it explicitly, as in the declaration of y below it. Basically, an auto variable is one that is allocated and de-allocated on the stack automatically. Contrast this to static variables, which are allocated statically and exist across the lifetime of the program. Because auto was so rarely (if ever) used, since it is the norm, the standards committee decided to remove it for this purpose and give it a new meaning in C++11.

The new meaning of auto: implicit typing

Now, if your compiler supports C++11 (or at least a good subset of it), you can take advantage of type inference in C++. For those of you from the C# world, this means that the auto keyword in C++ now behaves a lot like the var keyword in C#! For example, many of us have had to write massive type declarations for an iterator before. Let's say we have a std::map of std::string to int which will map names to ages:

std::map<std::string, int> myMap;

And then let's say we want to find the age of a given person:

// Egad, that's a long type...
std::map<std::string, int>::const_iterator pos = myMap.find(targetName);

Notice that big ugly type declaration for the variable pos? Sure, we could shorten it by creating a typedef of our specific map type if we wanted, but now with the auto keyword there's no need:

// much shorter!
auto pos = myMap.find(targetName);

The auto now tells the compiler to determine what type pos should be based on what it's being assigned. This is not dynamic typing: it still determines the type as if it were explicitly declared, and once declared that type cannot be changed. That is, this is invalid:

// x is type int
auto x = 42;

// can't assign string to int
x = "Hello";

Once the compiler determines x is type int, it is exactly as if we had typed int x = 42; instead, so don't confuse it with dynamic typing; it's still very type-safe.
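One place this pays off daily, building on the myMap example above, is loop headers. A quick sketch of my own (not from the original post) comparing the two styles:

// Before C++11: the iterator type has to be spelled out in full.
for (std::map<std::string, int>::const_iterator it = myMap.begin();
     it != myMap.end(); ++it)
{
    std::cout << it->first << " is " << it->second << " years old\n";
}

// With C++11 auto: same loop, same static types, far less noise.
for (auto it = myMap.begin(); it != myMap.end(); ++it)
{
    std::cout << it->first << " is " << it->second << " years old\n";
}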
An interesting feature of the auto keyword is that you can modify the inferred type:

// declare method that returns int*
int* GetPointer();

// p1 is int*, auto inferred type is int
auto *p1 = GetPointer();

// p2 is int*, auto inferred type is int*
auto p2 = GetPointer();

Notice that in both of these cases p1 and p2 are determined to be int*, but in each case the inferred type was different. Because we declared p1 as auto *p1 and GetPointer() returns int*, it inferred the type int was needed to complete the declaration. In the second case, however, we declared p2 as auto p2, which means the inferred type was int*. Ultimately, this makes p1 and p2 the same type, but which type is inferred makes a difference if you are chaining multiple inferred declarations together. In these cases, the inferred type of each must match the first:

// Type inferred is int
// p1 is int*
// p2 is int
// p3 is int&
auto *p1 = GetPointer(), p2 = 42, &p3 = p2;

Note that this works because the inferred type was int; if the inferred type was int* instead:

// syntax error, p1 was inferred to be int* so p2 and p3 don't make sense
auto p1 = GetPointer(), p2 = 42, &p3 = p2;

You could also use const or static to modify the inferred type:

// inferred type is int, theAnswer is a const int
const auto theAnswer = 42;

// inferred type is double, Pi is a static double
static auto Pi = 3.1415927;

Thus, in the examples above it inferred the types int and double respectively, which were then modified to const and static.

Summary

The auto keyword has gotten new life in C++11 to allow you to infer the type of a variable from its initialization. This simple little keyword can be used to cut down large declarations for complex types into a much more readable form, where appropriate.

Technorati Tags: C++, C++11, Little Wonders, auto

    Read the article

  • Binding data to subgrid

    - by bhargav
    I have a jqGrid with a subgrid. The data binding is done in JavaScript like this:

<script language="javascript" type="text/javascript">
var x = screen.width;

$(document).ready(function () {
    $("#projgrid").jqGrid({
        mtype: 'POST',
        datatype: function (pdata) { getData(pdata); },
        colNames: ['Project ID', 'Due Date', 'Project Name', 'SalesRep', 'Organization:', 'Status', 'Active Value', 'Delete'],
        colModel: [
            { name: 'Project ID', index: 'project_id', width: 12, align: 'left', key: true },
            { name: 'Due Date', index: 'project_date_display', width: 15, align: 'left' },
            { name: 'Project Name', index: 'project_title', width: 60, align: 'left' },
            { name: 'SalesRep', index: 'Salesrep', width: 22, align: 'left' },
            { name: 'Organization:', index: 'customer_company_name:', width: 56, align: 'left' },
            { name: 'Status', index: 'Status', align: 'left', width: 15 },
            { name: 'Active Value', index: 'Active Value', align: 'left', width: 10 },
            { name: 'Delete', index: 'Delete', align: 'left', width: 10 }],
        pager: '#proj_pager',
        rowList: [10, 20, 50],
        sortname: 'project_id',
        sortorder: 'asc',
        rowNum: 10,
        loadtext: "Loading....",
        subGrid: true,
        shrinkToFit: true,
        emptyrecords: "No records to view",
        width: x - 100,
        height: "100%",
        rownumbers: true,
        caption: 'Projects',
        subGridRowExpanded: function (subgrid_id, row_id) {
            var subgrid_table_id, pager_id;
            subgrid_table_id = subgrid_id + "_t";
            pager_id = "p_" + subgrid_table_id;
            $("#" + subgrid_id).html("<table id='" + subgrid_table_id + "' class='scroll'></table><div id='" + pager_id + "' class='scroll'></div>");
            jQuery("#" + subgrid_table_id).jqGrid({
                mtype: 'POST',
                postData: { entityIndex: function () { return row_id; } },
                datatype: function (pdata) { getactionData(pdata); },
                height: "100%",
                colNames: ['Event ID', 'Priority', 'Deadline', 'From Date', 'Title', 'Status', 'Hours', 'Contact From', 'Contact To'],
                colModel: [
                    { name: 'Event ID', index: 'Event ID' },
                    { name: 'Priority', index: 'IssueCode' },
                    { name: 'Deadline', index: 'IssueTitle' },
                    { name: 'From Date', index: 'From Date' },
                    { name: 'Title', index: 'Title' },
                    { name: 'Status', index: 'Status' },
                    { name: 'Hours', index: 'Hours' },
                    { name: 'Contact From', index: 'Contact From' },
                    { name: 'Contact To', index: 'Contact To' }],
                caption: "Action Details",
                rowNum: 10,
                pager: '#actionpager',
                rowList: [10, 20, 30, 50],
                sortname: 'Event ID',
                sortorder: "desc",
                loadtext: "Loading....",
                shrinkToFit: true,
                emptyrecords: "No records to view",
                rownumbers: true,
                ondblClickRow: function (rowid) { }
            });
            jQuery("#actiongrid").jqGrid('navGrid', '#actionpager', { edit: false, add: false, del: false, search: false });
        }
    });
    jQuery("#projgrid").jqGrid('navGrid', '#proj_pager', { edit: false, add: false, del: false, excel: true, search: false });
});

function getactionData(pdata) {
    var project_id = pdata.entityIndex();
    var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
    var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
    var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
    var checkOnlyContact = document.getElementById('ctl00_ContentPlaceHolder2_chkOnlyContact').checked;
    var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
    var nrows = pdata.rows;
    var npage = pdata.page;
    var sortindex = pdata.sidx;
    var sortdir = pdata.sord;
    var path = "project_brow.aspx/GetActionDetails";
    $.ajax({
        type: "POST",
        url: path,
        data: "{'project_id': '" + project_id + "','ChannelContact': '" + ChannelContact + "','HideCompleted': '" + HideCompleted + "','Scm': '" + Scm + "','checkOnlyContact': '" + checkOnlyContact + "','MerchantId': '" + MerchantId + "','nrows': '" + nrows + "','npage': '" + npage + "','sortindex': '" + sortindex + "','sortdir': '" + sortdir + "'}",
        contentType: "application/json; charset=utf-8",
        success: function (data, textStatus) {
            if (textStatus == "success")
                obj = jQuery.parseJSON(data.d);
            ReceivedData(obj);
        },
        error: function (data, textStatus) {
            alert('An error has occurred retrieving data!');
        }
    });
}

function ReceivedData(data) {
    var thegrid = jQuery("#actiongrid")[0];
    thegrid.addJSONData(data);
}

function getData(pData) {
    var dtDateFrom = document.getElementById('ctl00_ContentPlaceHolder2_dtDateFrom_textBox').value;
    var dtDateTo = document.getElementById('ctl00_ContentPlaceHolder2_dtDateTo_textBox').value;
    var Status = document.getElementById('ctl00_ContentPlaceHolder2_ddlStatus').value;
    var Type = document.getElementById('ctl00_ContentPlaceHolder2_ddlType').value;
    var Channel = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannel').value;
    var ChannelContact = document.getElementById('ctl00_ContentPlaceHolder2_ddlChannelContact').value;
    var Customers = document.getElementById('ctl00_ContentPlaceHolder2_txtCustomers').value;
    var KeywordSearch = document.getElementById('ctl00_ContentPlaceHolder2_txtKeywordSearch').value;
    var Scm = document.getElementById('ctl00_ContentPlaceHolder2_chkScm').checked;
    var HideCompleted = document.getElementById('ctl00_ContentPlaceHolder2_chkHideCompleted').checked;
    var SelectedCustomerId = document.getElementById("<%=hdnSelectedCustomerId.ClientID %>").value;
    var MerchantId = document.getElementById('ctl00_ContentPlaceHolder2_ucProjectDetail_hidden_MerchantId').value;
    var nrows = pData.rows;
    var npage = pData.page;
    var sortindex = pData.sidx;
    var sortdir = pData.sord;
    PageMethods.GetProjectDetails(SelectedCustomerId, Customers, KeywordSearch, MerchantId, Channel, Status, Type, dtDateTo, dtDateFrom, ChannelContact, HideCompleted, Scm, nrows, npage, sortindex, sortdir, AjaxSucceeded, AjaxFailed);
}

function AjaxSucceeded(data) {
    var obj = jQuery.parseJSON(data);
    if (obj != null) {
        if (obj.records != "") {
            ReceivedClientData(obj);
        } else {
            alert('No Data Available to Display');
        }
    }
}

function AjaxFailed(data) {
    alert('An error has occurred retrieving data!');
}

function ReceivedClientData(data) {
    var thegrid = jQuery("#projgrid")[0];
    thegrid.addJSONData(data);
}
</script>

As you can see, projgrid is my parent grid and actiongrid is the subgrid, shown on clicking the '+' symbol. The parent projgrid is bound and displayed, but for the subgrid, although I am able to get the data, the problem comes at the time of binding the data to the subgrid, which is done in the function named ReceivedData:

function ReceivedData(data) {
    var thegrid = jQuery("#actiongrid")[0];
    thegrid.addJSONData(data);
}

"data" is exactly what I want, but it cannot be bound to actiongrid, which is the subgrid. Thanks in advance for your help.

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
    I recently sent round a list of broken builds at SSW and asked for them to be fixed or deleted if they are not being used. My colleague Peter came back with a couple of questions, which I love, as it tells me that at least one person reads my email. I think first we need to answer a couple of other questions related to builds in general.

Why do we want the build to pass? Any developer can pick up a project and build it; standards can be enforced; constant quality is maintained; problems in code are identified early.

What could a failed build signify? Developers have not built and tested their code properly before checking in. Something added depends on a local resource that is not under version control or does not exist on the target computer. Developers are not writing tests to cover common problems. There are not enough tests to cover problems.

Now we know why; let's answer Peter's questions.

Where is this list? (Can we see it somehow?) You can normally only see the builds listed for each project. But you have a little application called "Build Notifications" on your computer. It is installed when you install Visual Studio 2010.

Figure: Starting the build notification application on Windows 7.

Once you have it open (it may disappear into your system tray) you should click "Options" and select all the projects you are involved in. This application only lists projects that have builds, so don't worry if a project is not listed. That just means you are about to set up a build, right? I just selected ALL projects that have builds.

Figure: All builds are listed here.

In addition to seeing the list, you will also get toast notifications of build failures.

How can we get more info on what broke the build? (Who is interesting too, to point the finger, but more important is what.) The only thing worse than breaking the build is continuing to develop on a broken build!

Figure: I have highlighted the users who either are bad for breaking the build, or very bad for not fixing it.

To find out what is wrong with a build you need to open the build definition. You can open a web version by double clicking the build in the image above, or you can open it from "Team Explorer". Just connect to your project and expand the "Builds" tree.

Figure: Opening a build is easy; just double click it and then open a build run from the list.

Figure: Good example, the build and tests have passed.

Figure: Bad example, there are 133 errors preventing POK from being built on the build server.

For identifying failures see: Solution: Getting Silverlight to build on Team Build 2010 RC; Solution: Testing Web Services with MSTest on Team Build; Finding the problem on a partially succeeded build.

So, Peter asked about blame; let's have a look and see:

Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too).

The rest of the history is lost in the sands of time: there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Builds should be protected by the team that uses them, and the only way to do that is to have the team own them. It is fine for me to go in and set up a build, but the ownership of a build should always reside with the person who broke it last.

Conclusion

This is an example of a pointless build. Let's be honest: if you have a system like TFS in place and builds are constantly left broken, or not added to projects, then your developers don't yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent developers from checking in without passing that basic quality gate of "your code builds on another computer", it makes them look more closely at why they can't check in. I have had builds fail because one developer had a "d" drive, but the build server did not. That is exactly what gated builds are there to catch.

If you want to know what builds to create and why, I wrote a post on "Do you know the minimum builds to create on any branch?"

Technorati Tags: TFS2010, Gated Check-in, Builds, Build Failure, Broken Build

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP's streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that.

For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with the maximum amount, total amount and total number of transactions over the past 30 days.

Application Input

Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache of all credit card transactions. The cache is configured in the EPN as shown below:

<wlevs:caching-system id="CohCacheSystem" provider="coherence"/>
<wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent"
             key-properties="cardID, transactionTime"
             caching-system="CohCacheSystem">
</wlevs:cache>

Application Output

The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. The source for the cardHistory property can be seen here.

<wlevs:event-type type-name="FraudWarningEvent">
    <wlevs:properties type="tuple">
        <wlevs:property name="cardID" type="CHAR"/>
        <wlevs:property name="transactionTime" type="BIGINT"/>
        <wlevs:property name="transactionAmount" type="DOUBLE"/>
        <wlevs:property name="cardHistory" type="OBJECT"/>
    </wlevs:properties>
</wlevs:event-type>

Cache Data Aggregation using a Java Cartridge

In the output warning event, the cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a Java cartridge method. This method uses Coherence's query API on the credit card transactions cache to get the required information. Therefore, the Java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below:

<bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator">
    <property name="cache" ref="CCTransactionsCache"/>
</bean>

This is used by the Java class to set a static property:

public void setCache(Map cache)
{
    s_cache = (NamedCache) cache;
}

The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. The rest of the information required by the CardHistoryData object is calculated in a similar manner. The complete source of this class can be found here. To find out more about using Coherence's API to query a cache, please refer to the Coherence Developer's Guide.

public static CardHistoryData execute(String cardID)
{
    …
    Filter filter = QueryHelper.createFilter(
        "cardID = :cardID and transactionTime > :transactionTime", map);
    CardHistoryData history = new CardHistoryData();
    Double sum = (Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount"));
    history.setTotalAmount(sum);
    …
    return history;
}

The Java cartridge method is used from CQL as seen below:

select cardID, transactionTime, transactionAmount,
       CCTransactionsAggregator.execute(cardID) as cardHistory
from inputChannel
where transactionAmount > 1000

This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
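The post only shows the DoubleSum aggregation. As a sketch of how the remaining CardHistoryData fields could be filled in with the same pattern: DoubleMax and Count are standard Coherence aggregators in com.tangosol.util.aggregator, but the setter names on CardHistoryData below are my assumptions, not taken from the sample's source.

// Sketch: the other two pieces of the 30-day history, computed against the
// same filter used above. Setter names on CardHistoryData are illustrative.
Double max = (Double) s_cache.aggregate(filter, new DoubleMax("getTransactionAmount"));
Integer count = (Integer) s_cache.aggregate(filter, new Count());
history.setMaxAmount(max);
history.setTotalTransactions(count.intValue());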

    Read the article

  • Issues with ILMerge, Lambda Expressions and VS2010 merging?

    - by John Blumenauer
    A little Background

For quite some time now, it's been possible to merge multiple .NET assemblies into a single assembly using ILMerge in Visual Studio 2008. This is especially helpful when writing wrapper assemblies for 3rd-party libraries, where it's desirable to minimize the number of assemblies for distribution. During the merge process, ILMerge takes a set of assemblies and merges them into a single assembly. The resulting assembly can be either an executable or a DLL and is identified as the primary assembly.

Issue

During a recent project, I discovered that using ILMerge to merge assemblies containing lambda expressions in Visual Studio 2010 results in invalid primary assemblies. The code below is not where the initial issue was identified; I will merely use it to illustrate the problem at hand. In order to describe the issue, I created a console application and a class library for calculating a few math functions utilizing lambda expressions. The code is available for download at the bottom of this blog entry.

MathLib.cs

using System;

namespace MathLib
{
    public static class MathHelpers
    {
        public static Func<double, double, double> Hypotenuse =
            (x, y) => Math.Sqrt(x * x + y * y);

        static readonly Func<int, int, bool> divisibleBy =
            (int a, int b) => a % b == 0;

        public static bool IsPrimeNumber(int x)
        {
            for (int i = 2; i <= x / 2; i++)
                if (divisibleBy(x, i))
                    return false;
            return true;
        }
    }
}

Program.cs

using System;
using MathLib;

namespace ILMergeLambdasConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            int n = 19;
            if (MathHelpers.IsPrimeNumber(n))
            {
                Console.WriteLine(n + " is prime");
            }
            else
            {
                Console.WriteLine(n + " is not prime");
            }
            Console.ReadLine();
        }
    }
}

Not surprisingly, the preceding code compiles, builds and executes without error prior to running the ILMerge tool.

ILMerge Setup

In order to utilize ILMerge, the following changes were made to the project:

1. The MathLib.dll assembly was built in release configuration and copied to the MathLib folder. The folder hierarchy used for this example is shown as a screenshot in the original post.

2. The ILMergeLambdasConsole project file was edited to add the ILMerge post-build configuration. The following lines were added near the bottom of the project file:

<Target Name="AfterBuild" Condition="'$(Configuration)' == 'Release'">
  <Exec Command="&quot;..\..\lib\ILMerge\Ilmerge.exe&quot; /ndebug /out:@(MainAssembly) &quot;@(IntermediateAssembly)&quot; @(ReferenceCopyLocalPaths->'&quot;%(FullPath)&quot;', ' ')" />
  <Delete Files="@(ReferenceCopyLocalPaths->'$(OutDir)%(DestinationSubDirectory)%(Filename)%(Extension)')" />
</Target>

3. The ILMergeLambdasConsole project was modified to reference the MathLib.dll located in the MathLib folder above.

4. ILMerge.exe and ILMerge.exe.config were copied into the ILMerge folder mentioned above. The contents of ILMerge.exe.config are:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/>
  </startup>
</configuration>

Post-ILMerge

After compiling and building, the MathLib.dll assembly will be merged into the ILMergeLambdasConsole executable. Unfortunately, executing ILMergeLambdasConsole.exe now results in a crash. The ILMerge documentation recommends using PEVerify.exe to validate assemblies after merging.
Executing PEVerify.exe against the ILMergeLambdasConsole.exe assembly reports verification errors (the PEVerify output is shown as a screenshot in the original post). Further investigation using Reflector reveals that the divisibleBy method in the MathHelpers class looks a bit questionable after the merge; prior to using ILMerge, the same divisibleBy method disassembles cleanly in Reflector (both Reflector views are shown as screenshots in the original post).

It's pretty obvious something has gone awry during the merge process. However, this only occurs when building within the Visual Studio 2010 environment; the same code and configuration built within Visual Studio 2008 executes fine. I'm still investigating the issue. If anyone has already experienced this situation and solved it, I would love to hear from you. As of right now, though, it looks like something has gone terribly wrong when executing ILMerge against assemblies containing lambdas in Visual Studio 2010.

Solution Files: ILMergeLambdaExpression
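
Until the root cause is found, one possible workaround is to express the helpers as ordinary static methods instead of cached delegate fields. This is only a sketch, assuming the corruption is confined to the compiler-generated anonymous-method state that backs the lambdas; with plain methods there is nothing of that kind for ILMerge to rewrite.

using System;

namespace MathLib
{
    public static class MathHelpers
    {
        // Same math as the lambda-based version, but as plain static methods,
        // so the compiler emits no cached delegate fields for ILMerge to rewrite.
        public static double Hypotenuse(double x, double y)
        {
            return Math.Sqrt(x * x + y * y);
        }

        static bool DivisibleBy(int a, int b)
        {
            return a % b == 0;
        }

        public static bool IsPrimeNumber(int x)
        {
            for (int i = 2; i <= x / 2; i++)
                if (DivisibleBy(x, i))
                    return false;
            return true;
        }
    }
}

Note that this changes the public shape of MathHelpers: Hypotenuse becomes a method rather than a Func field, so any caller that passed the delegate around would need to use MathHelpers.Hypotenuse as a method group instead.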

    Read the article

  • create new inbox folder and save emails

    - by kasunmit
    I am trying the code from http://www.c-sharpcorner.com/uploadfile/rambab/outlookintegration10282006032802am/outlookintegration.aspx to create a personal inbox folder and save some mails shown in my DataGridView (Outlook 2007 and VSTO 2008). I am able to create the inbox folder according to the above example, but I couldn't work out the code for saving e-mails. In that example, to save contacts they are using the following code:

if (chkVerify.Checked)
{
    OutLook._Application outlookObj = new OutLook.Application();
    MyContact cntact = new MyContact();
    cntact.CustomProperty = txtProp1.Text.Trim().ToString();

    // CREATING CONTACT ITEM OBJECT AND FINDING THE CONTACT ITEM
    OutLook.ContactItem newContact = (OutLook.ContactItem)FindContactItem(cntact, CustomFolder);

    // THE VALUES WE CAN GET FROM WEB SERVICES OR DATA BASE OR CLASS.
    // WE HAVE TO ASSIGN THE VALUES TO OUTLOOK CONTACT ITEM OBJECT.
    if (newContact != null)
    {
        newContact.FirstName = txtFirstName.Text.Trim().ToString();
        newContact.LastName = txtLastName.Text.Trim().ToString();
        newContact.Email1Address = txtEmail.Text.Trim().ToString();
        newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
        newContact.BusinessAddress = txtAddress.Text.Trim().ToString();

        if (chkAdd.Checked)
        {
            // HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION.
            if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString()))
            {
                MessageBox.Show("please add value to Your Custom Property");
                return;
            }
            newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText);
            newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString();
        }
        newContact.Save();
        this.Close();
    }
    else
    {
        // IF THE CONTACT DOES NOT EXIST WITH SAME CUSTOM PROPERTY, CREATES THE CONTACT.
        newContact = (OutLook.ContactItem)CustomFolder.Items.Add(OutLook.OlItemType.olContactItem);
        newContact.FirstName = txtFirstName.Text.Trim().ToString();
        newContact.LastName = txtLastName.Text.Trim().ToString();
        newContact.Email1Address = txtEmail.Text.Trim().ToString();
        newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
        newContact.BusinessAddress = txtAddress.Text.Trim().ToString();

        if (chkAdd.Checked)
        {
            // HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION.
            if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString()))
            {
                MessageBox.Show("please add value to Your Custom Property");
                return;
            }
            newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText);
            newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString();
        }
        newContact.Save();
        this.Close();
    }
}
else
{
    OutLook._Application outlookObj = new OutLook.Application();
    OutLook.ContactItem newContact = (OutLook.ContactItem)CustomFolder.Items.Add(OutLook.OlItemType.olContactItem);
    newContact.FirstName = txtFirstName.Text.Trim().ToString();
    newContact.LastName = txtLastName.Text.Trim().ToString();
    newContact.Email1Address = txtEmail.Text.Trim().ToString();
    newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
    newContact.BusinessAddress = txtAddress.Text.Trim().ToString();

    if (chkAdd.Checked)
    {
        // HERE WE CAN CREATE OUR OWN CUSTOM PROPERTY TO IDENTIFY OUR APPLICATION.
        if (string.IsNullOrEmpty(txtProp1.Text.Trim().ToString()))
        {
            MessageBox.Show("please add value to Your Custom Property");
            return;
        }
        newContact.UserProperties.Add("myPetName", OutLook.OlUserPropertyType.olText, true, OutLook.OlUserPropertyType.olText);
        newContact.UserProperties["myPetName"].Value = txtProp1.Text.Trim().ToString();
    }
    newContact.Save();
    this.Close();
}
}
else
{
    // CREATES THE OUTLOOK CONTACT IN DEFAULT CONTACTS FOLDER.
    OutLook._Application outlookObj = new OutLook.Application();
    OutLook.MAPIFolder fldContacts = (OutLook.MAPIFolder)outlookObj.Session.GetDefaultFolder(OutLook.OlDefaultFolders.olFolderContacts);
    OutLook.ContactItem newContact = (OutLook.ContactItem)fldContacts.Items.Add(OutLook.OlItemType.olContactItem);

    // THE VALUES WE CAN GET FROM WEB SERVICES OR DATA BASE OR CLASS.
    // WE HAVE TO ASSIGN THE VALUES TO OUTLOOK CONTACT ITEM OBJECT.
    newContact.FirstName = txtFirstName.Text.Trim().ToString();
    newContact.LastName = txtLastName.Text.Trim().ToString();
    newContact.Email1Address = txtEmail.Text.Trim().ToString();
    newContact.Business2TelephoneNumber = txtPhone.Text.Trim().ToString();
    newContact.BusinessAddress = txtAddress.Text.Trim().ToString();
    newContact.Save();
    this.Close();
}
}

/// <summary>
/// ENABLING AND DISABLING THE CUSTOM FOLDER AND PROPERTY OPTIONS.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private void rdoCustom_CheckedChanged(object sender, EventArgs e)
{
    if (rdoCustom.Checked)
    {
        txFolder.Enabled = true;
        chkAdd.Enabled = true;
        chkVerify.Enabled = true;
        txtProp1.Enabled = true;
    }
    else
    {
        txFolder.Enabled = false;
        chkAdd.Enabled = false;
        chkVerify.Enabled = false;
        txtProp1.Enabled = false;
    }
}

I don't have an idea how to convert this to save e-mails instead. The DataGridView I am mentioning here contains details (sender address, subject, etc.) of unread mails, and what I did was apply some filtering to those mails as follows:

string senderMailAddress = txtMailAddress.Text.ToLower();
// NOTE: the generic type arguments were lost in the post's formatting;
// "MailInfo" below is a placeholder for the poster's row class.
List<MailInfo> list = (List<MailInfo>)dgvUnreadMails.DataSource;
List<MailInfo> myUnreadMailList;
List<MailInfo> filteredList = (from ci in list
                               where ci.SenderAddress.StartsWith(senderMailAddress)
                               select ci).ToList();
dgvUnreadMails.DataSource = filteredList;

That worked successfully. Now I need to save those filtered e-mails to the personal inbox folder I already created for that. Please give me some help. My issue is how to assign the Outlook object, just like they assign it for contacts (name, address, e-mail, etc.), because for e-mails I couldn't find the equivalent.
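
Not from the linked article, but as a rough sketch of one way to approach it: unlike contacts, you don't build a mail by assigning properties one by one; you resolve the original MailItem and move (or copy) it into your custom folder. The sketch below assumes each filtered grid row keeps the mail's Outlook EntryID; SaveFilteredMails and the entryIds list are assumed names, not code from the article.

using System;
using System.Collections.Generic;
using OutLook = Microsoft.Office.Interop.Outlook;

// Assumption: entryIds holds the EntryID of each mail left in the filtered grid.
void SaveFilteredMails(IEnumerable<string> entryIds, OutLook.MAPIFolder customFolder)
{
    OutLook.Application outlookObj = new OutLook.Application();
    OutLook.NameSpace session = outlookObj.Session;

    foreach (string entryId in entryIds)
    {
        // Resolve the original item from its EntryID, then move it into
        // the custom folder; Move plays the role Save played for contacts.
        OutLook.MailItem mail = session.GetItemFromID(entryId, Type.Missing) as OutLook.MailItem;
        if (mail != null)
        {
            mail.Move(customFolder);
        }
    }
}

If you need to keep the original in the default Inbox, call mail.Copy() first and move the copy instead.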

    Read the article

  • VB.NET - using textfile as source for menus and textboxes

    - by Kenny Bones
    Hi, this is probably a bit intense and I'm not sure if it is possible at all. Basically, I'm trying to create a small application which contains a lot of PowerShell code which I want to run in an easy manner. I've managed to create everything myself and it does work, but all of the PowerShell code is manually hardcoded, which gives me a huge disadvantage. What I was thinking was creating some sort of dynamic structure where I can read a couple of text files (possibly numerous text files) and use these as the source for both the comboboxes and the richtextbox which provides the string that is run in PowerShell. I was thinking something like this:

Combobox - "Choose cmdlet" - use "menucode.txt" as source
Richtextbox - use "code.txt" as source

But the thing is, PowerShell snippets need a few arguments in order to work, so I've got a couple of comboboxes and a few textboxes which provide the input for these arguments. This is done manually as it is right now. So rewriting this small application should also involve searching the text file for some keywords and having the comboboxes and textboxes replace those keywords. And I'm not sure how to do this. Would this require a whole lot of text files? Or could I use one text file and separate each PowerShell cmdlet snippet with something, like some sort of a header?

Right now, I've got this code in the event handler (ComboBox_SelectedIndexChanged):

If ComboBoxFunksjon.Text = "Set attribute" Then
    TxtBoxUsername.Visible = True
End If
If chkBoxTextfile.Checked = True Then
    If txtboxBrowse.Text = "" Then
        MsgBox("You haven't chosen a textfile as input for usernames")
    End If
    LabelAttribute.Visible = True
    LabelUsername.Visible = False
    ComboBoxAttribute.Visible = True
    TxtBoxUsername.Visible = False
    txtBoxCode.Text = "$users = Get-Content " & txtboxBrowse.Text & vbCrLf & "foreach ($a in $users)" & vbCrLf & "{" & vbCrLf & "Set-QADUser -Identity $a -ObjectAttributes @{" & ComboBoxAttribute.SelectedItem & "='" & TxtBoxValue.Text & "'}" & vbCrLf & "}"
    If ComboBoxAttribute.SelectedItem = "Outlook WebAccess" Then
        TxtBoxValue.Visible = False
        CheckBoxValue.Visible = True
        CheckBoxValue.Text = "OWA Enabled?"
        txtBoxCode.Text = "$users = Get-Content " & txtboxBrowse.Text & vbCrLf & "foreach ($a in $users)" & vbCrLf & "{" & vbCrLf & "Set-CASMailbox -Identity $a -OWAEnabled" & " " & "$" & CheckBoxValue.Checked & " '}" & vbCrLf & "}"
    End If
    If ComboBoxAttribute.SelectedItem = "MobileSync" Then
        TxtBoxValue.Visible = False
        CheckBoxValue.Visible = True
        CheckBoxValue.Text = "MobileSync Enabled?"
        Dim value
        If CheckBoxValue.Checked = True Then
            value = "0"
        Else
            value = "7"
        End If
        txtBoxCode.Text = "$users = Get-Content " & txtboxBrowse.Text & vbCrLf & "foreach ($a in $users)" & vbCrLf & "{" & vbCrLf & "Set-QADUser -Identity $a -ObjectAttributes @{msExchOmaAdminWirelessEnable='" & value & " '}" & vbCrLf & "}"
    End If
Else
    LabelAttribute.Visible = True
    LabelUsername.Visible = True
    ComboBoxAttribute.Visible = True
    txtBoxCode.Text = "Set-QADUser -Identity " & TxtBoxUsername.Text & " -ObjectAttributes @{" & ComboBoxAttribute.SelectedItem & "='" & TxtBoxValue.Text & " '}"
    If ComboBoxAttribute.SelectedItem = "Outlook WebAccess" Then
        TxtBoxValue.Visible = False
        CheckBoxValue.Visible = True
        CheckBoxValue.Text = "OWA Enabled?"
        txtBoxCode.Text = "Set-CASMailbox " & TxtBoxUsername.Text & " -OWAEnabled " & "$" & CheckBoxValue.Checked
    End If
    If ComboBoxAttribute.SelectedItem = "MobileSync" Then
        TxtBoxValue.Visible = False
        CheckBoxValue.Visible = True
        CheckBoxValue.Text = "MobileSync Enabled?"
        Dim value
        If CheckBoxValue.Checked = True Then
            value = "0"
        Else
            value = "7"
        End If
        txtBoxCode.Text = "Set-QADUser " & TxtBoxUsername.Text & " -ObjectAttributes @{msExchOmaAdminWirelessEnable='" & value & "'}"
    End If
End If

Now, this snippet above lets me use a text file as the source for each username used in the PowerShell snippet. Just so you know :) And I know, this is probably coded as stupidly as it gets. But it does work! :)

    Read the article
