Search Results

Search found 17187 results on 688 pages for 'give me chicken'.


  • How do I improve my performance with this singly linked list struct within my program?

    - by Jesus
    Hey guys, I have a program that does operations of sets of strings. We have to implement functions such as addition and subtraction of two sets of strings. We are suppose to get it down to the point where performance if of O(N+M), where N,M are sets of strings. Right now, I believe my performance is at O(N*M), since I for each element of N, I go through every element of M. I'm particularly focused on getting the subtraction to the proper performance, as if I can get that down to proper performance, I believe I can carry that knowledge over to the rest of things I have to implement. The '-' operator is suppose to work like this, for example. Declare set1 to be an empty set. Declare set2 to be a set with { a b c } elements Declare set3 to be a set with ( b c d } elements set1 = set2 - set3 And now set1 is suppose to equal { a }. So basically, just remove any element from set3, that is also in set2. For the addition implementation (overloaded '+' operator), I also do the sorting of the strings (since we have to). All the functions work right now btw. So I was wondering if anyone could a) Confirm that currently I'm doing O(N*M) performance b) Give me some ideas/implementations on how to improve the performance to O(N+M) Note: I cannot add any member variables or functions to the class strSet or to the node structure. The implementation of the main program isn't very important, but I will post the code for my class definition and the implementation of the member functions: strSet2.h (Implementation of my class and struct) // Class to implement sets of strings // Implements operators for union, intersection, subtraction, // etc. for sets of strings // V1.1 15 Feb 2011 Added guard (#ifndef), deleted using namespace RCH #ifndef _STRSET_ #define _STRSET_ #include <iostream> #include <vector> #include <string> // Deleted: using namespace std; 15 Feb 2011 RCH struct node { std::string s1; node * next; }; class strSet { private: node * first; public: strSet (); // Create empty set strSet (std::string s); // Create singleton set strSet (const strSet &copy); // Copy constructor ~strSet (); // Destructor int SIZE() const; bool isMember (std::string s) const; strSet operator + (const strSet& rtSide); // Union strSet operator - (const strSet& rtSide); // Set subtraction strSet& operator = (const strSet& rtSide); // Assignment }; // End of strSet class #endif // _STRSET_ strSet2.cpp (implementation of member functions) #include <iostream> #include <vector> #include <string> #include "strset2.h" using namespace std; strSet::strSet() { first = NULL; } strSet::strSet(string s) { node *temp; temp = new node; temp->s1 = s; temp->next = NULL; first = temp; } strSet::strSet(const strSet& copy) { if(copy.first == NULL) { first = NULL; } else { node *n = copy.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } } strSet::~strSet() { if(first != NULL) { while(first->next != NULL) { node *nextNode = first->next; first->next = nextNode->next; delete nextNode; } } } int strSet::SIZE() const { int size = 0; node *temp = first; while(temp!=NULL) { size++; temp=temp->next; } return size; } bool strSet::isMember(string s) const { node *temp = first; while(temp != NULL) { if(temp->s1 == s) { return true; } temp = temp->next; } return false; } strSet strSet::operator + (const strSet& rtSide) { strSet newSet; newSet = *this; node *temp = rtSide.first; while(temp != NULL) { 
string newEle = temp->s1; if(!isMember(newEle)) { if(newSet.first==NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; newSet.first = newNode; } else if(newSet.SIZE() == 1) { if(newEle < newSet.first->s1) { node *tempNext = newSet.first; node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = tempNext; newSet.first = newNode; } else { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; newSet.first->next = newNode; } } else { node *prev = NULL; node *curr = newSet.first; while(curr != NULL) { if(newEle < curr->s1) { if(prev == NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = curr; newSet.first = newNode; break; } else { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = curr; prev->next = newNode; break; } } if(curr->next == NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; curr->next = newNode; break; } prev = curr; curr = curr->next; } } } temp = temp->next; } return newSet; } strSet strSet::operator - (const strSet& rtSide) { strSet newSet; newSet = *this; node *temp = rtSide.first; while(temp != NULL) { string element = temp->s1; node *prev = NULL; node *curr = newSet.first; while(curr != NULL) { if( element < curr->s1 ) break; if( curr->s1 == element ) { if( prev == NULL) { node *duplicate = curr; newSet.first = newSet.first->next; delete duplicate; break; } else { node *duplicate = curr; prev->next = curr->next; delete duplicate; break; } } prev = curr; curr = curr->next; } temp = temp->next; } return newSet; } strSet& strSet::operator = (const strSet& rtSide) { if(this != &rtSide) { if(first != NULL) { while(first->next != NULL) { node *nextNode = first->next; first->next = nextNode->next; delete nextNode; } } if(rtSide.first == NULL) { first = NULL; } else { node *n = rtSide.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } } return *this; }
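    A hedged sketch of the usual O(N+M) approach: since operator+ already keeps each set sorted, subtraction can walk the two lists in lockstep (a merge-style pass) instead of rescanning one list for every element of the other. This is a standalone illustration with its own simplified node type and a free function, not a drop-in strSet member.

        #include <cstddef>
        #include <string>

        struct node {
            std::string s1;
            node * next;
        };

        // Builds lhs - rhs for two lists sorted in ascending order.
        // Both pointers only ever move forward, so the pass is O(N + M).
        node * subtractSorted(const node * lhs, const node * rhs)
        {
            node * resultFirst = NULL;
            node * resultLast  = NULL;
            while (lhs != NULL) {
                // Skip rhs elements smaller than the current lhs element.
                while (rhs != NULL && rhs->s1 < lhs->s1)
                    rhs = rhs->next;
                if (rhs != NULL && rhs->s1 == lhs->s1) {
                    lhs = lhs->next;                 // in both sets: drop it
                    continue;
                }
                node * copy = new node;              // only in lhs: keep it
                copy->s1 = lhs->s1;
                copy->next = NULL;
                if (resultLast != NULL) resultLast->next = copy;
                else                    resultFirst = copy;
                resultLast = copy;
                lhs = lhs->next;
            }
            return resultFirst;
        }

    Inside operator- this corresponds to walking newSet and rtSide together once rather than restarting the inner scan of newSet for each element of rtSide; the same two-pointer idea yields an O(N+M) union for operator+.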


  • Segmentation fault in my C program

    - by user233542
    I don't understand why this would give me a seg fault. Any ideas? This is the function that returns the signal to stop the program (plus the other function that is called within this): double bisect(double A0,double A1,double Sol[N],double tol,double c) { double Amid,shot; while (A1-A0 > tol) { Amid = 0.5*(A0+A1); shot = shoot(Sol, Amid, c); if (shot==2.*Pi) { return Amid; } if (shot > 2.*Pi){ A1 = Amid; } else if (shot < 2.*Pi){ A0 = Amid; } } return 0.5*(A1+A0); } double shoot(double Sol[N],double A,double c) { int i,j; /*Initial Conditions*/ for (i=0;i<buff;i++) { Sol[i] = 0.; } for (i=buff+l;i<N;i++) { Sol[i] = 2.*Pi; } Sol[buff]= 0; Sol[buff+1]= A*exp(sqrt(1+3*c)*dx); for (i=buff+2;i<buff+l;i++) { Sol[i] = (dx*dx)*( sin(Sol[i-1]) + c*sin(3.*(Sol[i-1])) ) - Sol[i-2] + 2.*Sol[i-1]; } return Sol[i-1]; } The values buff, l, N are defined using a #define statement. l = 401, buff = 50, N = 2000 Here is the full code: #include <stdio.h> #include <stdlib.h> #include <math.h> #define w 10 /*characteristic width of a soliton*/ #define dx 0.05 /*distance between lattice sites*/ #define s (2*w)/dx /*size of soliton shape*/ #define l (int)(s+1) /*array length for soliton*/ #define N (int)2000 /*length of field array--lattice sites*/ #define Pi (double)4*atan(1) #define buff (int)50 double shoot(double Sol[N],double A,double c); double bisect(double A0,double A1,double Sol[N],double tol,double c); void super_pos(double antiSol[N],double Sol[N],double phi[][N]); void vel_ver(double phi[][N],double v,double c,int tsteps,double dt); int main(int argc, char **argv) { double c,Sol[N],antiSol[N],A,A0,A1,tol,v,dt; int tsteps,i; FILE *fp1,*fp2,*fp3; fp1 = fopen("soliton.dat","w"); fp2 = fopen("final-phi.dat","w"); fp3 = fopen("energy.dat","w"); printf("Please input the number of time steps:"); scanf("%d",&tsteps); printf("Also, enter the time step size:"); scanf("%lf",&dt); do{ printf("Please input the parameter c in the interval [-1/3,1]:"); scanf("%lf",&c);} while(c < (-1./3.) 
|| c > 1.); printf("Please input the inital speed of eiter soliton:"); scanf("%lf",&v); double phi[tsteps+1][N]; tol = 0.0000001; A0 = 0.; A1 = 2.*Pi; A = bisect(A0,A1,Sol,tol,c); shoot(Sol,A,c); for (i=0;i<N;i++) { fprintf(fp1,"%d\t",i); fprintf(fp1,"%lf\n",Sol[i]); } fclose(fp1); super_pos(antiSol,Sol,phi); /*vel_ver(phi,v,c,tsteps,dt); for (i=0;i<N;i++){ fprintf(fp2,"%d\t",i); fprintf(fp2,"%lf\n",phi[tsteps][i]); }*/ } double shoot(double Sol[N],double A,double c) { int i,j; /*Initial Conditions*/ for (i=0;i<buff;i++) { Sol[i] = 0.; } for (i=buff+l;i<N;i++) { Sol[i] = 2.*Pi; } Sol[buff]= 0; Sol[buff+1]= A*exp(sqrt(1+3*c)*dx); for (i=buff+2;i<buff+l;i++) { Sol[i] = (dx*dx)*( sin(Sol[i-1]) + c*sin(3.*(Sol[i-1])) ) - Sol[i-2] + 2.*Sol[i-1]; } return Sol[i-1]; } double bisect(double A0,double A1,double Sol[N],double tol,double c) { double Amid,shot; while (A1-A0 > tol) { Amid = 0.5*(A0+A1); shot = shoot(Sol, Amid, c); if (shot==2.*Pi) { return Amid; } if (shot > 2.*Pi){ A1 = Amid; } else if (shot < 2.*Pi){ A0 = Amid; } } return 0.5*(A1+A0); } void super_pos(double antiSol[N],double Sol[N],double phi[][N]) { int i; /*for (i=0;i<N;i++) { phi[i]=0; } for (i=buffer+s;i<1950-s;i++) { phi[i]=2*Pi; }*/ for (i=0;i<N;i++) { antiSol[i] = Sol[N-i]; } /*for (i=0;i<s+1;i++) { phi[buffer+j] = Sol[j]; phi[1549+j] = antiSol[j]; }*/ for (i=0;i<N;i++) { phi[0][i] = antiSol[i] + Sol[i] - 2.*Pi; } } /* This funciton will set the 2nd input array to the derivative at the time t, for all points x in the lattice */ void deriv2(double phi[][N],double DphiDx2[][N],int t) { //double SolDer2[s+1]; int x; for (x=0;x<N;x++) { DphiDx2[t][x] = (phi[buff+x+1][t] + phi[buff+x-1][t] - 2.*phi[x][t])/(dx*dx); } /*for (i=0;i<N;i++) { ptr[i] = &SolDer2[i]; }*/ //return DphiDx2[x]; } void vel_ver(double phi[][N],double v,double c,int tsteps,double dt) { int t,x; double d1,d2,dp,DphiDx1[tsteps+1][N],DphiDx2[tsteps+1][N],dpdt[tsteps+1][N],p[tsteps+1][N]; for (t=0;t<tsteps;t++){ if (t==0){ for (x=0;x<N;x++){//inital conditions deriv2(phi,DphiDx2,t); dpdt[t][x] = DphiDx2[t][x] - sin(phi[t][x]) - sin(3.*phi[t][x]); DphiDx1[t][x] = (phi[t][x+1] - phi[t][x])/dx; p[t][x] = -v*DphiDx1[t][x]; } } for (x=0;x<N;x++){//velocity-verlet phi[t+1][x] = phi[t][x] + dt*p[t][x] + (dt*dt/2)*dpdt[t][x]; p[t+1][x] = p[t][x] + (dt/2)*dpdt[t][x]; deriv2(phi,DphiDx2,t+1); dpdt[t][x] = DphiDx2[t][x] - sin(phi[t+1][x]) - sin(3.*phi[t+1][x]); p[t+1][x] += (dt/2)*dpdt[t+1][x]; } } } So, this really isn't due to my overwriting the end of the Sol array. I've commented out both functions that I suspected of causing the problem (bisect or shoot) and inserted a print function. Two things happen. When I have code like below: double A,Pi,B,c; c=0; Pi = 4.*atan(1.); A = Pi; B = 1./4.; printf("%lf",B); B = shoot(Sol,A,c); printf("%lf",B); I get a segfault from the function, shoot. However, if I take away the shoot function so that I have: double A,Pi,B,c; c=0; Pi = 4.*atan(1.); A = Pi; B = 1./4.; printf("%lf",B); it gives me a segfault at the printf... Why!?
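    Two things in the posted code are worth checking; the hedged sketch below shows both fixes in isolation. First, double phi[tsteps+1][N] is a variable-length array on the stack, and with N = 2000 doubles per row a user-supplied tsteps can easily exhaust the stack (which would also explain a crash that moves around when unrelated lines change). Second, antiSol[i] = Sol[N-i] reads Sol[N] on the first iteration, one element past the end of the array. The names mirror the question; the intended index of N-1-i is an assumption.

        #include <stdio.h>
        #include <stdlib.h>

        #define N 2000

        int main(void)
        {
            int tsteps = 100000;            /* user-supplied in the real program */
            double Sol[N], antiSol[N];
            int i;

            /* Allocate phi on the heap instead of as a stack VLA: (tsteps+1) rows
               of N doubles can be hundreds of megabytes, far beyond a typical stack. */
            double (*phi)[N] = malloc((size_t)(tsteps + 1) * sizeof *phi);
            if (phi == NULL) {
                fprintf(stderr, "out of memory\n");
                return 1;
            }

            for (i = 0; i < N; i++)
                Sol[i] = (double)i;

            /* Reversed copy without reading past the end: Sol[N-1-i], not Sol[N-i]. */
            for (i = 0; i < N; i++)
                antiSol[i] = Sol[N - 1 - i];

            phi[0][0] = antiSol[0];
            printf("%f\n", phi[0][0]);

            free(phi);
            return 0;
        }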


  • Please help me to rectify my code.

    - by Parth
    Please help me to rectify my code. Here I have described the code what and why I am using... and finally whatI am getting at the end, but the end output is not the way what I want... Please help and tell how I can rectify it... $count = substr_count($row4['ACTION_STATEMENT'], " IF (NEW.");/*here i will get how many time i will get " IF (NEW." in my string.*/ $exp1 = explode("(NEW.",$row4['ACTION_STATEMENT']);/*I exploded it from "NEW."*/ /*echo "<pre>"; print_r($exp1);*/ for($i=1;$i<count($exp1);$i++)/*Loop for values in $exp1*/ { //echo $exp1[$i]; $exp2[] = explode(" !=",$exp1[$i]);/*exploded by " !="*/ }//print_r($exp2); $flag = true; if($flag == true) { $column = mysqli_query($link,"SELECT * FROM COLUMNS WHERE TABLE_SCHEMA = '".$row3['TABLE_SCHEMA']."' and TABLE_NAME = '".$row3['TABLE_NAME']."'"); /*This query will give me 21 values*/ while ($row5 = mysqli_fetch_array($column)) {/*echo "<pre>pd"; print_r($row5);*/ foreach($exp2 as $fieldsarr)/*loop used for further comaprison of $exp2 with above query values*/ { echo "<br>"; //print_r($fieldsarr); if($fieldsarr[0] == $row5['COLUMN_NAME'] )/*Comparison of values*/ { if($fieldsarr[0]!='id') {//echo $fieldsarr[0]; mysqli_select_db($link,'pranav_test'); $aud = mysqli_query($link,"SELECT * FROM `jos_menuaudit`") or die("DEAD".mysqli_error()); while($audit = mysqli_fetch_array($aud)) { echo "<pre>"; echo $fieldsarr[0].$row5['COLUMN_NAME']; print_r($audit); /*Values displayed according to query written above after true comparsion of conditions*/ } } } } mysqli_select_db($link,'information_schema'); } } Now from above code The output I am getting is namenameArray ( [0] => 1 [id] => 1 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_PSD_111 [oldvalue] => test_PSD_111 [4] => test_SPD_111 [newvalue] => test_SPD_111 [5] => 2010-03-24 11:42:26 [changedone] => 2010-03-24 11:42:26 ) namenameArray ( [0] => 2 [id] => 2 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_SPD_111 [oldvalue] => test_SPD_111 [4] => test_SD_111 [newvalue] => test_SD_111 [5] => 2010-03-24 11:44:22 [changedone] => 2010-03-24 11:44:22 ) namenameArray ( [0] => 3 [id] => 3 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_SD_111 [oldvalue] => test_SD_111 [4] => test_PSD_111 [newvalue] => test_PSD_111 [5] => 2010-03-24 11:46:28 [changedone] => 2010-03-24 11:46:28 ) namenameArray ( [0] => 4 [id] => 4 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_PSD_111 [oldvalue] => test_PSD_111 [4] => test_PD_111 [newvalue] => test_PD_111 [5] => 2010-03-24 11:47:30 [changedone] => 2010-03-24 11:47:30 ) namenameArray ( [0] => 5 [id] => 5 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_PD_111 [oldvalue] => test_PD_111 [4] => test_P_111 [newvalue] => test_P_111 [5] => 2010-03-24 11:49:25 [changedone] => 2010-03-24 11:49:25 ) aliasaliasArray ( [0] => 1 [id] => 1 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_PSD_111 [oldvalue] => test_PSD_111 [4] => test_SPD_111 [newvalue] => test_SPD_111 [5] => 2010-03-24 11:42:26 [changedone] => 2010-03-24 11:42:26 ) aliasaliasArray ( [0] => 2 [id] => 2 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_SPD_111 [oldvalue] => test_SPD_111 [4] => test_SD_111 [newvalue] => test_SD_111 [5] => 2010-03-24 11:44:22 [changedone] => 2010-03-24 11:44:22 ) aliasaliasArray ( [0] => 3 [id] => 3 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_SD_111 [oldvalue] => test_SD_111 [4] => test_PSD_111 [newvalue] => test_PSD_111 [5] => 2010-03-24 11:46:28 [changedone] => 
2010-03-24 11:46:28 ) aliasaliasArray ( [0] => 4 [id] => 4 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_PSD_111 [oldvalue] => test_PSD_111 [4] => test_PD_111 [newvalue] => test_PD_111 [5] => 2010-03-24 11:47:30 [changedone] => 2010-03-24 11:47:30 ) aliasaliasArray ( [0] => 5 [id] => 5 [1] => 0 [menuid] => 0 [2] => name [field] => name [3] => test_PD_111 [oldvalue] => test_PD_111 [4] => test_P_111 [newvalue] => test_P_111 [5] => 2010-03-24 11:49:25 [changedone] => 2010-03-24 11:49:25 ) thatis, all the five values from query are displayed on every single comaprison getting true. Now here What I want is after the completion of every comparison I want to display the final query result Only Once... How to accomplish this..Please help....
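    One way to get the result printed only once is to let the comparison loops merely record whether (and for which columns) a match occurred, and to move the jos_menuaudit query and its output outside the loops. A hedged sketch of that restructuring, assuming $link, $column and $exp2 are set up exactly as in the code above:

        <?php
        $matchedFields = array();

        // First pass: only remember which columns matched; no audit query yet.
        while ($row5 = mysqli_fetch_array($column)) {
            foreach ($exp2 as $fieldsarr) {
                if ($fieldsarr[0] == $row5['COLUMN_NAME'] && $fieldsarr[0] != 'id') {
                    $matchedFields[] = $fieldsarr[0];
                }
            }
        }

        // Second pass: one query and one display, after all comparisons are done.
        if (!empty($matchedFields)) {
            mysqli_select_db($link, 'pranav_test');
            $aud = mysqli_query($link, "SELECT * FROM `jos_menuaudit`")
                   or die("DEAD" . mysqli_error($link));
            while ($audit = mysqli_fetch_array($aud)) {
                echo "<pre>";
                print_r($audit);
                echo "</pre>";
            }
            mysqli_select_db($link, 'information_schema');
        }
        ?>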


  • While I scroll within the layout it takes too long before I can scroll between the galleries' pictures. Is there any way to reduce this time?

    - by Mateo
    Hello, this is my first question here, though I've being reading this forum for quite a while. Most of the answers to my doubts are from here :) Getting back on topic. I'm developing an Android application. I'm drawing a dynamic layout that are basically Galleries, inside a LinearLayout, inside a ScrollView, inside a RelativeLayout. The ScrollView is a must, because I'm drawing a dynamic amount of galleries that most probably will not fit on the screen. When I scroll inside the layout, I have to wait 3/4 seconds until the ScrollView "deactivates" to be able to scroll inside the galleries. What I want to do is to reduce this time to a minimum. Preferably I would like to be able to scroll inside the galleries as soon as I lift my finger from the screen, though anything lower than 2 seconds would be great as well. I've being googling around for a solution but all I could find until now where layout tutorials that didn't tackle this particular issue. I was hoping someone here knows if this is possible and if so to give me some hints on how to do so. I would prefer not to do my own ScrollView to solve this. But if that is the only way I would appreciate some help because I'm not really sure how would I solve this issue by doing that. this is my layout: public class PicturesL extends Activity implements OnClickListener, OnItemClickListener, OnItemLongClickListener { private ArrayList<ImageView> imageView = new ArrayList<ImageView>(); private StringBuilder PicsDate = new StringBuilder(); private CaWaApplication application; private long ListID; private ArrayList<Gallery> gallery = new ArrayList<Gallery>(); private ArrayList<Bitmap> Thumbails = new ArrayList<Bitmap>(); private String idioma; private ArrayList<Long> Days = new ArrayList<Long>(); private long oldDay; private long oldThumbsLoaded; private ArrayList<Long> ThumbailsDays = new ArrayList<Long>(); private ArrayList<ArrayList<Long>> IDs = new ArrayList<ArrayList<Long>>(); @Override public void onCreate(Bundle savedInstancedState) { super.onCreate(savedInstancedState); RelativeLayout layout = new RelativeLayout(this); ScrollView scroll = new ScrollView(this); LinearLayout realLayout = new LinearLayout(this); ArrayList<TextView> texts = new ArrayList<TextView>(); Button TakePic = new Button(this); idioma = com.mateloft.cawa.prefs.getLang(this); if (idioma.equals("en")) { TakePic.setText("Take Picture"); } else if (idioma.equals("es")) { TakePic.setText("Sacar Foto"); } RelativeLayout.LayoutParams scrollLP = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.FILL_PARENT, RelativeLayout.LayoutParams.FILL_PARENT); layout.addView(scroll, scrollLP); realLayout.setOrientation(LinearLayout.VERTICAL); realLayout.setLayoutParams(new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT)); scroll.addView(realLayout); TakePic.setId(67); TakePic.setOnClickListener(this); application = (CaWaApplication) getApplication(); ListID = getIntent().getExtras().getLong("listid"); getAllThumbailsOfID(); LinearLayout.LayoutParams TakeLP = new LinearLayout.LayoutParams(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); realLayout.addView(TakePic); oldThumbsLoaded = 0; int galler = 100; for (int z = 0; z < Days.size(); z++) { ThumbailsManager croppedThumbs = new ThumbailsManager(Thumbails, oldThumbsLoaded, ThumbailsDays.get(z)); oldThumbsLoaded = ThumbailsDays.get(z); texts.add(new TextView(this)); texts.get(z).setText("Day " + Days.get(z).toString()); gallery.add(new Gallery(this)); gallery.get(z).setAdapter(new 
ImageAdapter(this, croppedThumbs.getGallery(), 250, 175, true, ListID)); gallery.get(z).setOnItemClickListener(this); gallery.get(z).setOnItemLongClickListener(this); gallery.get(z).setId(galler); galler++; realLayout.addView(texts.get(z)); realLayout.addView(gallery.get(z)); } Log.d("PicturesL", "ListID: " + ListID); setContentView(layout); } private void getAllThumbailsOfID() { ArrayList<ModelPics> Pictures = new ArrayList<ModelPics>(); ArrayList<String> ThumbailsPath = new ArrayList<String>(); Pictures = application.dataManager.selectAllPics(); long thumbpathloaded = 0; int currentID = 0; for (int x = 0; x < Pictures.size(); x++) { if (Pictures.get(x).walkname == ListID) { if (Days.size() == 0) { Days.add(Pictures.get(x).day); oldDay = Pictures.get(x).day; IDs.add(new ArrayList<Long>()); currentID = 0; } if (oldDay != Pictures.get(x).day) { oldDay = Pictures.get(x).day; ThumbailsDays.add(thumbpathloaded); Days.add(Pictures.get(x).day); IDs.add(new ArrayList<Long>()); currentID++; } StringBuilder tpath = new StringBuilder(); tpath.append(Pictures.get(x).path.substring(0, Pictures.get(x).path.length() - 4)); tpath.append("-t.jpg"); IDs.get(currentID).add(Pictures.get(x).id); ThumbailsPath.add(tpath.toString()); thumbpathloaded++; if (x == Pictures.size() - 1) { Log.d("PicturesL", "El ultimo de los arrays, tamaño: " + Days.size()); ThumbailsDays.add(thumbpathloaded); } } } for (int y = 0; y < ThumbailsPath.size(); y++) { Thumbails.add(BitmapFactory.decodeFile(ThumbailsPath.get(y))); } } I had a memory leak on another activity when screen orientation changed that was making it slower, now it is working better. The scroller is not locking up. But sometimes, when it stops scrolling, it takes a few seconds (2/3) to disable itself. I just want it to be a little more dynamic, is there any way to override the listener and make it stop scrolling ON_ACTION_UP or something like that? I don't want to use the listview because I want to have each gallery separated by other views, now I just have text, but I will probably separate them with images with a different size than the galleries. I'm not really sure if this is possible with a listadapter and a listview, I assumed that a view can only handle only one type of object, so I'm using a scrollview of a layout, if I'm wrong please correct me :) Also this activity works as a preview or selecting the pictures you want to view in full size and manage their values. So its working only with thumbnails. Each one weights 40 kb. Guessing that is very unlikely that a user gets more than 1000~1500 pictures in this view, i thought that the activity wouldn't use more than 40~50 mb of ram in this case, adding 10 more if I open the fullsized view. So I guessed as well most devices are able to display this view in full size. If it doesn't work on low-end devices my plan was to add an option in the app preferences to let user chop this view according to some database values. And a last reason is that during most of this activity "life-cycle" (the app has pics that are relevant to the view, when it ends the value that selects which pictures are displayed has to change and no more pictures are added inside this instance of this activity); the view will be unpopulated, so most of the time showing everything wont cost much, just at the end of its cycle That was more or less what I thought at the time i created this layout. 
    I'm open to any sort of suggestion or opinion. I just created this layout a few days ago and I'm trying to see whether it can work right, because it suits my app's needs. Though if there is a better way I would love to hear it. Thanks, Mateo
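    One thing that often helps when horizontally scrolling views sit inside a ScrollView is to stop the ScrollView from intercepting touches while a finger is on a gallery, so horizontal drags reach the Gallery immediately instead of waiting for the outer scroller to settle. A hedged fragment; it assumes the scroll and gallery variables from onCreate above and would go inside the loop that creates each gallery:

        // While a finger is down on this Gallery, ask the enclosing ScrollView
        // not to intercept the gesture, so horizontal drags are handled at once.
        final ScrollView outerScroll = scroll;
        gallery.get(z).setOnTouchListener(new View.OnTouchListener() {
            public boolean onTouch(View v, MotionEvent event) {
                switch (event.getAction()) {
                    case MotionEvent.ACTION_DOWN:
                    case MotionEvent.ACTION_MOVE:
                        outerScroll.requestDisallowInterceptTouchEvent(true);
                        break;
                    case MotionEvent.ACTION_UP:
                    case MotionEvent.ACTION_CANCEL:
                        outerScroll.requestDisallowInterceptTouchEvent(false);
                        break;
                }
                return false; // let the Gallery process the event as usual
            }
        });

    Whether this removes the whole delay depends on the rest of the layout, but it makes the hand-off between the two scrollers explicit.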


  • C++ queue template

    - by Dalton Conley
    ALright, pardon my messy code please. Below is my queue class. #include <iostream> using namespace std; #ifndef QUEUE #define QUEUE /*---------------------------------------------------------------------------- Student Class # Methods # Student() // default constructor Student(string, int) // constructor display() // out puts a student # Data Members # Name // string name Id // int id ----------------------------------------------------------------------------*/ class Student { public: Student() { } Student(string iname, int iid) { name = iname; id = iid; } void display(ostream &out) const { out << "Student Name: " << name << "\tStudent Id: " << id << "\tAddress: " << this << endl; } private: string name; int id; }; // define a typedef of a pointer to a student. typedef Student * StudentPointer; template <typename T> class Queue { public: /*------------------------------------------------------------------------ Queue Default Constructor Preconditions: none Postconditions: assigns default values for front and back to 0 description: constructs a default empty Queue. ------------------------------------------------------------------------*/ Queue() : myFront(0), myBack(0) {} /*------------------------------------------------------------------------ Copy Constructor Preconditions: requres a reference to a value for which you are copying Postconditions: assigns a copy to the parent Queue. description: Copys a queue and assigns it to the parent Queue. ------------------------------------------------------------------------*/ Queue(const T & q) { myFront = myBack = 0; if(!q.empty()) { // copy the first node myFront = myBack = new Node(q.front()); NodePointer qPtr = q.myFront->next; while(qPtr != NULL) { myBack->next = new Node(qPtr->data); myBack = myBack->next; qPtr = qPtr->next; } } } /*------------------------------------------------------------------------ Destructor Preconditions: none Postconditions: deallocates the dynamic memory for the Queue description: deletes the memory stored for a Queue. ------------------------------------------------------------------------*/ ~Queue() { NodePointer prev = myFront, ptr; while(prev != NULL) { ptr = prev->next; delete prev; prev = ptr; } } /*------------------------------------------------------------------------ Empty() Preconditions: none Postconditions: returns a boolean value. description: returns true/false based on if the queue is empty or full. ------------------------------------------------------------------------*/ bool empty() const { return (myFront == NULL); } /*------------------------------------------------------------------------ Enqueue Preconditions: requires a constant reference Postconditions: allocates memory and appends a value at the end of a queue description: ------------------------------------------------------------------------*/ void enqueue(const T & value) { NodePointer newNodePtr = new Node(value); if(empty()) { myFront = myBack = newNodePtr; newNodePtr->next = NULL; } else { myBack->next = newNodePtr; myBack = newNodePtr; newNodePtr->next = NULL; } } /*------------------------------------------------------------------------ Display Preconditions: requires a reference of type ostream Postconditions: returns the ostream value (for chaining) description: outputs the contents of a queue. 
------------------------------------------------------------------------*/ void display(ostream & out) const { NodePointer ptr; ptr = myFront; while(ptr != NULL) { out << ptr->data << " "; ptr = ptr->next; } out << endl; } /*------------------------------------------------------------------------ Front Preconditions: none Postconditions: returns a value of type T description: returns the first value in the parent Queue. ------------------------------------------------------------------------*/ T front() const { if ( !empty() ) return (myFront->data); else { cerr << "*** Queue is empty -- returning garbage value ***\n"; T * temp = new(T); T garbage = * temp; delete temp; return garbage; } } /*------------------------------------------------------------------------ Dequeue Preconditions: none Postconditions: removes the first value in a queue ------------------------------------------------------------------------*/ void dequeue() { if ( !empty() ) { NodePointer ptr = myFront; myFront = myFront->next; delete ptr; if(myFront == NULL) myBack = NULL; } else { cerr << "*** Queue is empty -- " "can't remove a value ***\n"; exit(1); } } /*------------------------------------------------------------------------ pverloaded = operator Preconditions: requires a constant reference Postconditions: returns a const type T description: this allows assigning of queues to queues ------------------------------------------------------------------------*/ Queue<T> & operator=(const T &q) { // make sure we arent reassigning ourself // e.g. thisQueue = thisQueue. if(this != &q) { this->~Queue(); if(q.empty()) { myFront = myBack = NULL; } else { myFront = myBack = new Node(q.front()); NodePointer qPtr = q.myFront->next; while(qPtr != NULL) { myBack->next = new Node(qPtr->data); myBack = myBack->next; qPtr = qPtr->next; } } } return *this; } private: class Node { public: T data; Node * next; Node(T value, Node * first = 0) : data(value), next(first) {} }; typedef Node * NodePointer; NodePointer myFront, myBack, queueSize; }; /*------------------------------------------------------------------------ join Preconditions: requires 2 queue values Postconditions: appends queue2 to the end of queue1 description: this function joins 2 queues into 1. ------------------------------------------------------------------------*/ template <typename T> Queue<T> join(Queue<T> q1, Queue<T> q2) { Queue<T> q1Copy(q1), q2Copy(q2); Queue<T> jQueue; while(!q1Copy.empty()) { jQueue.enqueue(q1Copy.front()); q1Copy.dequeue(); } while(!q2Copy.empty()) { jQueue.enqueue(q2Copy.front()); q2Copy.dequeue(); } cout << jQueue << endl; return jQueue; } /*---------------------------------------------------------------------------- Overloaded << operator Preconditions: requires a constant reference and a Queue of type T Postconditions: returns the ostream (for chaining) description: this function is overloaded for outputing a queue with << ----------------------------------------------------------------------------*/ template <typename T> ostream & operator<<(ostream &out, Queue<T> &s) { s.display(out); return out; } /*---------------------------------------------------------------------------- Overloaded << operator Preconditions: requires a constant reference and a reference of type Student Postconditions: none description: this function is overloaded for outputing an object of type Student. 
----------------------------------------------------------------------------*/ ostream & operator<<(ostream &out, Student &s) { s.display(out); } /*---------------------------------------------------------------------------- Overloaded << operator Preconditions: requires a constant reference and a reference of a pointer to a Student object. Postconditions: none description: this function is overloaded for outputing pointers to Students ----------------------------------------------------------------------------*/ ostream & operator<<(ostream &out, StudentPointer &s) { s->display(out); } #endif Now I'm having some issues with it. For one, when I add 0 to a queue and then I output the queue like so.. Queue<double> qdub; qdub.enqueue(0); cout << qdub << endl; That works, it will output 0. But for example, if I modify that queue in any way.. like.. assign it to a different queue.. Queue<double> qdub1; Queue<double> qdub2; qdub1.enqueue(0; qdub2 = qdub1; cout << qdub2 << endl; It will give me weird values for 0 like.. 7.86914e-316. Help on this would be much appreciated!
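    A likely cause of the garbage values: both the copy constructor and the assignment operator are declared to take const T & (the element type) rather than const Queue<T> &, so a statement like qdub2 = qdub1; never calls them. The compiler-generated shallow copy runs instead, the two queues end up sharing the same nodes, and as soon as either queue deletes them the other reads freed memory, which is where values like 7.86914e-316 tend to come from. (The posted bodies already use q.empty(), q.front() and q.myFront, which only make sense if q is a Queue.) A hedged sketch of the corrected signatures, with the node-deletion and node-copying logic factored into private helpers instead of calling this->~Queue() by hand:

        template <typename T>
        class Queue {
        public:
            Queue() : myFront(0), myBack(0) {}

            // Copy constructor: must take a Queue, not an element, otherwise the
            // compiler's memberwise (shallow) copy is used for Queue-to-Queue copies.
            Queue(const Queue<T> & q) : myFront(0), myBack(0) { copyNodes(q); }

            ~Queue() { clear(); }

            // Copy assignment: same signature fix; deep-copies the node chain.
            Queue<T> & operator=(const Queue<T> & q) {
                if (this != &q) { clear(); copyNodes(q); }
                return *this;
            }

            // ... enqueue, dequeue, front, display as in the question ...

        private:
            struct Node {
                T data;
                Node * next;
                Node(T value, Node * first = 0) : data(value), next(first) {}
            };

            Node * myFront;
            Node * myBack;

            void clear() {
                while (myFront) { Node * p = myFront; myFront = myFront->next; delete p; }
                myBack = 0;
            }
            void copyNodes(const Queue<T> & q) {
                for (Node * p = q.myFront; p != 0; p = p->next) {
                    Node * n = new Node(p->data);
                    if (myBack) myBack->next = n; else myFront = n;
                    myBack = n;
                }
            }
        };

    The join() helper, which copies and returns queues by value, only behaves once these deep-copy operations exist; the same cleanup would also remove the queueSize member that is currently (probably unintentionally) declared as a NodePointer.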


  • How to execute two threads simultaneously in Java Swing?

    - by jcrshankar
    My aim is to select all the files named with MANI.txt which is present in their respective folders and then load path of the MANI.txt files different location in table. After I load the path in the table,I used to select needed path and modifiying those. To load the MANI.txt files taking more time,because it may present more than 30 times in my workspace or etc. until load the files I want to give alarm to the user with help of ProgessBar.Once the list size has been populated I need to disable ProgressBar. Could anyone please help me out on this? import java.awt.*; import javax.swing.*; import javax.swing.table.*; import java.awt.event.*; import java.io.BufferedReader; import java.io.File; import java.io.FileNotFoundException; import java.io.FileReader; import java.io.FileWriter; import java.io.IOException; import java.io.PrintWriter; public class JTableHeaderCheckBox extends JFrame implements ActionListener { Object colNames[] = {"", "Path"}; Object[][] data = {}; DefaultTableModel dtm; JTable table; JButton but; java.util.List list; public void buildGUI() { dtm = new DefaultTableModel(data,colNames); table = new JTable(dtm); table.setAutoResizeMode(JTable.AUTO_RESIZE_OFF); int vColIndex = 0; TableColumn col = table.getColumnModel().getColumn(vColIndex); int width = 10; col.setPreferredWidth(width); int vColIndex1 = 1; TableColumn col1 = table.getColumnModel().getColumn(vColIndex1); int width1 = 500; col1.setPreferredWidth(width1); JFileChooser chooser = new JFileChooser(); //chooser.setCurrentDirectory(new java.io.File(".")); chooser.setDialogTitle("Choose workSpace Path"); chooser.setFileSelectionMode(JFileChooser.DIRECTORIES_ONLY); chooser.setAcceptAllFileFilterUsed(false); if (chooser.showOpenDialog(this) == JFileChooser.APPROVE_OPTION){ System.out.println("getCurrentDirectory(): " + chooser.getCurrentDirectory()); System.out.println("getSelectedFile() : " + chooser.getSelectedFile().getAbsolutePath()); } String path= chooser.getSelectedFile().getAbsolutePath(); File folder = new File(path); Here I need progress bar GatheringFiles ob = new GatheringFiles(); list=ob.returnlist(folder); for(int x = 0; x < list.size(); x++) { dtm.addRow(new Object[]{new Boolean(false),list.get(x).toString()}); } JPanel pan = new JPanel(); JScrollPane sp = new JScrollPane(table); TableColumn tc = table.getColumnModel().getColumn(0); tc.setCellEditor(table.getDefaultEditor(Boolean.class)); tc.setCellRenderer(table.getDefaultRenderer(Boolean.class)); tc.setHeaderRenderer(new CheckBoxHeader(new MyItemListener())); but = new JButton("REMOVE"); JFrame f = new JFrame(); pan.add(sp); but.move(650, 50); but.addActionListener(this); pan.add(but); f.add(pan); f.setSize(700, 100); f.pack(); f.setLocationRelativeTo(null); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.setVisible(true); } class MyItemListener implements ItemListener { public void itemStateChanged(ItemEvent e) { Object source = e.getSource(); if (source instanceof AbstractButton == false) return; boolean checked = e.getStateChange() == ItemEvent.SELECTED; for(int x = 0, y = table.getRowCount(); x < y; x++) { table.setValueAt(new Boolean(checked),x,0); } } } public static void main (String[] args) { SwingUtilities.invokeLater(new Runnable(){ public void run(){ new JTableHeaderCheckBox().buildGUI(); } }); } public void actionPerformed(ActionEvent e) { // TODO Auto-generated method stub if(e.getSource()==but) { System.err.println("table.getRowCount()"+table.getRowCount()); for(int x = 0, y = table.getRowCount(); x < y; x++) { 
if("true".equals(table.getValueAt(x, 0).toString())) { System.err.println(table.getValueAt(x, 0)); System.err.println(list.get(x).toString()); delete(list.get(x).toString()); } } } } public void delete(String a) { String delete = "C:"; System.err.println(a); try { File inFile = new File(a); if (!inFile.isFile()) { System.out.println("Parameter is not an existing file"); return; } //Construct the new file that will later be renamed to the original filename. File tempFile = new File(inFile.getAbsolutePath() + ".tmp"); BufferedReader br = new BufferedReader(new FileReader(inFile)); PrintWriter pw = new PrintWriter(new FileWriter(tempFile)); String line = null; //Read from the original file and write to the new //unless content matches data to be removed. while ((line = br.readLine()) != null) { System.err.println(line); line = line.replace(delete, " "); pw.println(line); pw.flush(); } pw.close(); br.close(); //Delete the original file if (!inFile.delete()) { System.out.println("Could not delete file"); return; } //Rename the new file to the filename the original file had. if (!tempFile.renameTo(inFile)) System.out.println("Could not rename file"); } catch (FileNotFoundException ex) { ex.printStackTrace(); } catch (IOException ex) { ex.printStackTrace(); } } } class CheckBoxHeader extends JCheckBox implements TableCellRenderer, MouseListener { protected CheckBoxHeader rendererComponent; protected int column; protected boolean mousePressed = false; public CheckBoxHeader(ItemListener itemListener) { rendererComponent = this; rendererComponent.addItemListener(itemListener); } public Component getTableCellRendererComponent( JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) { if (table != null) { JTableHeader header = table.getTableHeader(); if (header != null) { rendererComponent.setForeground(header.getForeground()); rendererComponent.setBackground(header.getBackground()); rendererComponent.setFont(header.getFont()); header.addMouseListener(rendererComponent); } } setColumn(column); rendererComponent.setText("Check All"); setBorder(UIManager.getBorder("TableHeader.cellBorder")); return rendererComponent; } protected void setColumn(int column) { this.column = column; } public int getColumn() { return column; } protected void handleClickEvent(MouseEvent e) { if (mousePressed) { mousePressed=false; JTableHeader header = (JTableHeader)(e.getSource()); JTable tableView = header.getTable(); TableColumnModel columnModel = tableView.getColumnModel(); int viewColumn = columnModel.getColumnIndexAtX(e.getX()); int column = tableView.convertColumnIndexToModel(viewColumn); if (viewColumn == this.column && e.getClickCount() == 1 && column != -1) { doClick(); } } } public void mouseClicked(MouseEvent e) { handleClickEvent(e); ((JTableHeader)e.getSource()).repaint(); } public void mousePressed(MouseEvent e) { mousePressed = true; } public void mouseReleased(MouseEvent e) { } public void mouseEntered(MouseEvent e) { } public void mouseExited(MouseEvent e) { } } ****************** import java.io.File; import java.util.*; public class GatheringFiles { public static List returnlist(File folder) { List<File> list = new ArrayList<File>(); List<File> list1 = new ArrayList<File>(); getFiles(folder, list); return list; } private static void getFiles(File folder, List<File> list) { folder.setReadOnly(); File[] files = folder.listFiles(); for(int j = 0; j < files.length; j++) { if( "MANI.txt".equals(files[j].getName())) { list.add(files[j]); } if(files[j].isDirectory()) 
getFiles(files[j], list); } } }
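    A common way to keep the UI responsive here is a SwingWorker: the slow directory walk runs on a background thread, an indeterminate JProgressBar is shown while it runs, and the table rows are added back on the Event Dispatch Thread in done(). A hedged fragment that could replace the direct returnlist()/addRow calls in buildGUI(); it assumes folder is declared final so the anonymous class can capture it, and reuses the dtm and list fields from the question:

        // Show a busy indicator and scan for MANI.txt files off the EDT.
        final JDialog progressDialog = new JDialog(this, "Searching for MANI.txt...", false);
        JProgressBar bar = new JProgressBar();
        bar.setIndeterminate(true);
        progressDialog.add(bar);
        progressDialog.pack();
        progressDialog.setLocationRelativeTo(this);

        SwingWorker<java.util.List, Void> worker = new SwingWorker<java.util.List, Void>() {
            protected java.util.List doInBackground() {
                // Worker thread: the UI stays responsive while this runs.
                return GatheringFiles.returnlist(folder);
            }
            protected void done() {
                // Back on the EDT: safe to touch Swing components here.
                progressDialog.dispose();
                try {
                    list = get();
                    for (int x = 0; x < list.size(); x++) {
                        dtm.addRow(new Object[]{Boolean.FALSE, list.get(x).toString()});
                    }
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        };
        worker.execute();
        progressDialog.setVisible(true); // non-modal, so this returns immediately

    The table and the rest of the frame can be built right away; only the rows arrive once the scan finishes.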


  • Am I just not understanding TDD unit testing (ASP.NET MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my Asp.net MVC project. When I started on this project I bought the Pro ASP.Net MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and the fact that I work as a software engineer in QA in my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gun-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering: I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way as in my code was broken and I had to fix it so the unit test pass. I mean that abstracting out the database to a mock database is impossible due to the use of linq for data retrieval (using the generic repository pattern). The reason is that with linq-sql or linq-entities you can do joins just by doing: var objs = select p from _container.Projects select p.Objects; However, if you mock the database layer out, in order to have that linq pass the unit test you must change the linq to be var objs = select p from _container.Projects join o in _container.Objects on o.ProjectId equals p.Id select o; Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of advantages using an ORM has in the first place. Furthermore, since a lot of the IDs for my models are database generated, I proved to have to write additional code to handle the non-database tests since IDs were never generated and I had to still handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing. Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this occurred my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial json. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far). So finally I was left with testing the Service layer (BLL). Right now I am using EF4, however I had this issue with linq-sql as well. I chose to do the EF4 model-first approach because to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate it into the sql backend). This was fine at the beginning but now it is becoming cumbersome due to relationships. For example say I have Project, User, and Object entities. One Object must be associated to a project, and a project must be associated to a user. This is not only a database specific rule, these are my business rules as well. However, say I want to do a unit test that I am able to save an object (for a simple example). 
I now have to do the following code just to make sure the save worked: User usr = new User { Name = "Me" }; _userService.SaveUser(usr); Project prj = new Project { Name = "Test Project", Owner = usr }; _projectService.SaveProject(prj); Object obj = new Object { Name = "Test Object" }; _objectService.SaveObject(obj); // Perform verifications There are many issues with having to do all this just to perform one unit test. There are several issues with this. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I then have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method it will cause all project, and object unit tests to fail and it the cause won't be immediately noticeable without having to dig through the exceptions. Thus I have removed all service layer unit tests from my project. I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control so I can get them back if I need but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/Unit tests give bad arguments claiming they are more efficient debugging by hand through the IDE, or that their coding skills are amazing that they don't need it. I recognize that both of those arguments are utter bullocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask, am I just not understanding how to use TDD and automatic unit tests?
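    One mitigation for the setup explosion described above is a test-data builder (sometimes called an object mother): the knowledge of how to assemble a valid, saved Project lives in one place, so a new mandatory relationship such as a Category means one change in the builder rather than edits to every test that happens to need a Project. A hedged C# sketch; the entity and service names come from the question, while the builder type and the IUserService/IProjectService interfaces are hypothetical:

        // Hypothetical test-data builder: centralizes "a valid, saved Project".
        public class ProjectBuilder
        {
            private string _name = "Test Project";
            private User _owner = new User { Name = "Me" };

            public ProjectBuilder Named(string name) { _name = name; return this; }
            public ProjectBuilder OwnedBy(User owner) { _owner = owner; return this; }

            // If a Category (or any other mandatory relation) is added later,
            // only this method changes, not every test that needs a Project.
            public Project BuildAndSave(IUserService userService, IProjectService projectService)
            {
                userService.SaveUser(_owner);
                var project = new Project { Name = _name, Owner = _owner };
                projectService.SaveProject(project);
                return project;
            }
        }

        // In a test:
        //   var project = new ProjectBuilder().BuildAndSave(_userService, _projectService);
        //   var obj = new Object { Name = "Test Object" };
        //   _objectService.SaveObject(obj);

    This does not settle the larger question of whether the tests are worth keeping, but it removes one of the specific pain points: a single business-rule change no longer fans out across unrelated tests.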


  • PHP Facebook Cronjob with offline access

    - by Mohamed Salem
    1:the code to greet the user, ask for his permission and store his session data so that we can use a cronjob with his session data afterwards. <?php $db_server = "localhost"; $db_username = "username"; $db_password = "password"; $db_name = "databasename"; #go to line 85, the script actually starts there mysql_connect($db_server,$db_username,$db_password); mysql_select_db($db_name); #you have to create a database to store session values. #if you do not know what columns there should be look at line 76 to see column names. #make them all varchars # Now lets load the FB GRAPH API require './facebook.php'; // Create our Application instance. global $facebook; $facebook = new Facebook(array( 'appId' => '121036530138', 'secret' => '9bbec378147064', 'cookie' => false,)); # Lets set up the permissions we need and set the login url in case we need it. $par['req_perms'] = "friends_about_me,friends_education_history,friends_likes, friends_interests,friends_location,friends_religion_politics, friends_work_history,publish_stream,friends_activities, friends_events, friends_hometown,friends_location ,user_interests,user_likes,user_events, user_about_me,user_status,user_work_history,read_requests, read_stream,offline_access,user_religion_politics,email,user_groups"; $loginUrl = $facebook->getLoginUrl($par); function save_session($session){ global $facebook; # OK lets go to the database and see if we have a session stored $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$session['uid']); $session_id=mysql_fetch_row($sid); if (is_array($session_id)) { # We have a stored session, but is it valid? echo " We have a session, but is it valid?"; try { $attachment = array('access_token' => $session_id[0]); $ret_code=$facebook->api('/me', 'GET', $attachment); } catch (Exception $e) { # We don't have a good session so echo " our old session is not valid, let's delete saved invalid session data "; $res = mysql_query("delete from facebook_user WHERE uid =".$session['uid']); #save new good session #to see what is our session data: print_r($session); if (is_array($session)) { $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','". $session['expires']."','". $session['secret'] ."','" . $session['access_token']."','". $session['sig']."');"; $res = mysql_query($sql); return $session['access_token']; } # this should never ever happen echo " Something is terribly wrong: Our old session was bad, and now we cannot get the new session"; return; } echo " Our old stored session is valid "; return $session_id[0]; } else { echo " no stored session, this means the user never subscribed to our application before. "; # let's store the session $session = $facebook->getSession(); if (is_array($session)) { # Yes we have a session! so lets store it! $sql="insert into facebook_user (session_key,uid,expires,secret,access_token,sig) VALUES ('".$session['session_key']."','".$session['uid']."','". $session['expires']."','". $session['secret'] ."','". $session['access_token']."','". $session['sig']."');"; $res = mysql_query($sql); return $session['access_token']; } } } #this is the first meaningful line of this script. $session = $facebook->getSession(); # Is the user already subscribed to our application? 
if ( is_null($session) ) { # no he is not #send him to permissions page header( "Location: $loginUrl" ); } else { #yes, he is already subscribed, or subscribed just now #in case he just subscribed now, save his session information $access_token=save_session($session); echo " everything is ok"; # write your code here to do something afterwards } ?> error Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/indexx.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49 Fatal error: Call to undefined method Facebook::getSession() in /home/content/28/9687528/html/ss/src/indexx.php on line 86 2:A cronjob template that reads the stored session of a user from database, uses his session data to work on his behalf, like reading status posts or publishing posts etc. <?php $db_server = "localhost"; $db_username = "username"; $db_password = "pass"; $db_name = "database"; # Lets connect to the Database and set up the table $link = mysql_connect($db_server,$db_username,$db_password); mysql_select_db($db_name); # Now lets load the FB GRAPH API require './facebook.php'; // Create our Application instance. global $facebook; $facebook = new Facebook(array( 'appId' => 'appid', 'secret' => 'secret', 'cookie' => false, )); function get_check_session($uidCheck){ global $facebook; # This function basically checks for a stored session and if we have one it returns it # OK lets go to the database and see if we have a session stored $sid=mysql_query("Select access_token from facebook_user WHERE uid =".$uidCheck); $session_id=mysql_fetch_row($sid); if (is_array($session_id)) { # We have a session # but, is it valid? try { $attachment = array('access_token' => $session_id[0],); $ret_code=$facebook->api('/me', 'GET', $attachment); } catch (Exception $e) { # We don't have a good session so echo " User ".$uidCheck." removed the application, or there is some other access problem. "; # let's delete stored data $res = mysql_query("delete from facebook_user where WHERE uid =".$uidCheck); return; } return $session_id[0]; } else { # "no stored session"; echo " error:newsFeedcrontab.php No stored sessions. This should not have happened "; } } # get all users that have given us offline access $users = getUsers(); foreach($users as $user){ # now for each user, check if they are still subscribed to our application echo " Checking user".$user; $access_token=get_check_session($user); # If we've not got an access_token we actually need to login. # but in the crontab, we just log the error, there is no way we can find the user to give us permission here. if ( is_null($access_token) ) { echo " error: newsFeedcrontab.php There is no access token for the user ".$user." "; } else { #we are going to read the newsfeed of user. There are user's friends' posts in this newsfeed try{ $attachment = array('access_token' => $access_token); $result=$facebook->api('/me/home', 'GET', $attachment); }catch(Exception $e){ echo " error: newsfeedcrontab.php, cannot get feed of ".$user.$e; } #do something with the result here #but what does the result look like? #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "home" link under connections #we can also read the home of user. Home is the wall of the user who has given us offline access. 
try{ $attachment = array('access_token' => $access_token); $result=$facebook->api('/me/feed', 'GET', $attachment); }catch(Exception $e){ echo " error: newsfeedcrontab.php, cannot get wall of ".$user.$e; } #do something with the result here # #but what does the result look like? #go to http://developers.facebook.com/docs/reference/api/user/ and click on the "feed" link under connections } } function getUsers(){ $sql = "SELECT distinct(uid) from facebook_user Where 1"; $result = mysql_query($sql); while($row = mysql_fetch_array($result)){ $rows [] = $row['uid']; } print_r($rows); return $rows; } mysql_close($link); ?> error Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/28/9687528/html/ss/src/cron.php:1) in /home/content/28/9687528/html/ss/src/facebook.php on line 49 Warning: mysql_fetch_array(): supplied argument is not a valid MySQL result resource in /home/content/28/9687528/html/ss/src/cron.php on line 110 Warning: Invalid argument supplied for foreach() in /home/content/28/9687528/html/ss/src/cron.php on line 64
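    The two cron.php warnings point at getUsers(): when mysql_query() fails (or simply returns no rows) the function hands back an uninitialized $rows, mysql_fetch_array() is given a boolean, and the later foreach receives a non-array. A hedged, more defensive version, keeping the table and column names from the question:

        <?php
        function getUsers()
        {
            $rows = array();   // always return an array, even with no users

            $result = mysql_query("SELECT DISTINCT(uid) FROM facebook_user");
            if ($result === false) {
                // Query failed: log it instead of feeding false to mysql_fetch_array().
                echo " error: newsFeedcrontab.php getUsers() query failed: " . mysql_error();
                return $rows;
            }

            while ($row = mysql_fetch_array($result)) {
                $rows[] = $row['uid'];
            }
            return $rows;
        }
        ?>

    The fatal "Call to undefined method Facebook::getSession()" in the first script suggests the bundled facebook.php is a newer SDK than the code was written for (later SDK versions dropped getSession() in favour of getUser() and getAccessToken()), and the "headers already sent" notice usually means something such as a BOM, leading whitespace, or the echo calls produces output before facebook.php calls session_start().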


  • While uploading an image I get errors in logcat

    - by al0ne evenings
    I am uploading image and also sending some parameters with it. I am getting errors in my logcat while doing so. Here is my code public class Camera extends Activity { ImageView ivUserImage; Button bUpload; Intent i; int CameraResult = 0; Bitmap bmp; public String globalUID; UserFunctions userFunctions; String photoName; InputStream is; int serverResponseCode = 0; ProgressDialog dialog = null; @Override protected void onCreate(Bundle savedInstanceState) { // TODO Auto-generated method stub super.onCreate(savedInstanceState); setContentView(R.layout.camera); userFunctions = new UserFunctions(); globalUID = getIntent().getExtras().getString("globalUID"); Toast.makeText(getApplicationContext(), globalUID, Toast.LENGTH_LONG).show(); ivUserImage = (ImageView)findViewById(R.id.ivUserImage); bUpload = (Button)findViewById(R.id.bUpload); openCamera(); } private void openCamera() { i = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE); startActivityForResult(i, CameraResult); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if(resultCode == RESULT_OK) { //set image taken from camera on ImageView Bundle extras = data.getExtras(); bmp = (Bitmap) extras.get("data"); ivUserImage.setImageBitmap(bmp); //Create new Cursor to obtain the file Path for the large image String[] largeFileProjection = { MediaStore.Images.ImageColumns._ID, MediaStore.Images.ImageColumns.DATA }; String largeFileSort = MediaStore.Images.ImageColumns._ID + " DESC"; //final String photoName = MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME; Cursor myCursor = this.managedQuery(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, largeFileProjection, null, null, largeFileSort); try { myCursor.moveToFirst(); //This will actually give you the file path location of the image. 
final String largeImagePath = myCursor.getString(myCursor.getColumnIndexOrThrow(MediaStore.Images.ImageColumns.DATA)); //final String photoName = myCursor.getString(myCursor.getColumnIndexOrThrow(MediaStore.Images.ImageColumns.BUCKET_DISPLAY_NAME)); //Log.e("URI", largeImagePath); File f = new File("" + largeImagePath); photoName = f.getName(); bUpload.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { dialog = ProgressDialog.show(Camera.this, "", "Uploading file...", true); new Thread(new Runnable() { public void run() { runOnUiThread(new Runnable() { public void run() { //tv.setText("uploading started....."); Toast.makeText(getBaseContext(), "File is uploading...", Toast.LENGTH_LONG).show(); } }); if(imageUpload(globalUID, largeImagePath)) { Toast.makeText(getApplicationContext(), "Image uploaded", Toast.LENGTH_LONG).show(); } else { Toast.makeText(getApplicationContext(), "Error uploading image", Toast.LENGTH_LONG).show(); } //Log.e("Response: ", response); //System.out.println("RES : " + response); } }).start(); } }); } finally { myCursor.close(); } //Uri uriLargeImage = Uri.withAppendedPath(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, String.valueOf(imageId)); //String imageUrl = getRealPathFromURI(uriLargeImage); } } public boolean imageUpload(String uid, String imagepath) { boolean success = false; //Bitmap bitmapOrg = BitmapFactory.decodeResource(getResources(), R.drawable.ic_launcher); Bitmap bitmapOrg = BitmapFactory.decodeFile(imagepath); ByteArrayOutputStream bao = new ByteArrayOutputStream(); bitmapOrg.compress(Bitmap.CompressFormat.JPEG, 90, bao); byte [] ba = bao.toByteArray(); String ba1=Base64.encodeToString(ba, 0); ArrayList<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(); nameValuePairs.add(new BasicNameValuePair("image",ba1)); nameValuePairs.add(new BasicNameValuePair("uid", uid)); try { HttpClient httpclient = new DefaultHttpClient(); HttpPost httppost = new HttpPost("http://www.example.info/androidfileupload/index.php"); httppost.setEntity(new UrlEncodedFormEntity(nameValuePairs)); HttpResponse response = httpclient.execute(httppost); HttpEntity entity = response.getEntity(); if(response != null) { success = true; } is = entity.getContent(); } catch(Exception e) { Log.e("log_tag", "Error in http connection "+e.toString()); } dialog.dismiss(); return success; } Here is my logcat 06-23 11:54:48.555: E/AndroidRuntime(30601): FATAL EXCEPTION: Thread-11 06-23 11:54:48.555: E/AndroidRuntime(30601): java.lang.OutOfMemoryError 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.String.<init>(String.java:513) 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.AbstractStringBuilder.toString(AbstractStringBuilder.java:650) 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.StringBuilder.toString(StringBuilder.java:664) 06-23 11:54:48.555: E/AndroidRuntime(30601): at org.apache.http.client.utils.URLEncodedUtils.format(URLEncodedUtils.java:170) 06-23 11:54:48.555: E/AndroidRuntime(30601): at org.apache.http.client.entity.UrlEncodedFormEntity.<init>(UrlEncodedFormEntity.java:71) 06-23 11:54:48.555: E/AndroidRuntime(30601): at com.zafar.login.Camera.imageUpload(Camera.java:155) 06-23 11:54:48.555: E/AndroidRuntime(30601): at com.zafar.login.Camera$1$1.run(Camera.java:119) 06-23 11:54:48.555: E/AndroidRuntime(30601): at java.lang.Thread.run(Thread.java:1019) 06-23 11:54:53.960: E/WindowManager(30601): Activity com.zafar.login.DashboardActivity has leaked window 
com.android.internal.policy.impl.PhoneWindow$DecorView@405db540 that was originally added here 06-23 11:54:53.960: E/WindowManager(30601): android.view.WindowLeaked: Activity com.zafar.login.DashboardActivity has leaked window com.android.internal.policy.impl.PhoneWindow$DecorView@405db540 that was originally added here 06-23 11:54:53.960: E/WindowManager(30601): at android.view.ViewRoot.<init>(ViewRoot.java:266) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:174) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.WindowManagerImpl.addView(WindowManagerImpl.java:117) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.Window$LocalWindowManager.addView(Window.java:424) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.Dialog.show(Dialog.java:241) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.ProgressDialog.show(ProgressDialog.java:107) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.ProgressDialog.show(ProgressDialog.java:90) 06-23 11:54:53.960: E/WindowManager(30601): at com.zafar.login.Camera$1.onClick(Camera.java:110) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.View.performClick(View.java:2538) 06-23 11:54:53.960: E/WindowManager(30601): at android.view.View$PerformClick.run(View.java:9152) 06-23 11:54:53.960: E/WindowManager(30601): at android.os.Handler.handleCallback(Handler.java:587) 06-23 11:54:53.960: E/WindowManager(30601): at android.os.Handler.dispatchMessage(Handler.java:92) 06-23 11:54:53.960: E/WindowManager(30601): at android.os.Looper.loop(Looper.java:130) 06-23 11:54:53.960: E/WindowManager(30601): at android.app.ActivityThread.main(ActivityThread.java:3691) 06-23 11:54:53.960: E/WindowManager(30601): at java.lang.reflect.Method.invokeNative(Native Method) 06-23 11:54:53.960: E/WindowManager(30601): at java.lang.reflect.Method.invoke(Method.java:507) 06-23 11:54:53.960: E/WindowManager(30601): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907) 06-23 11:54:53.960: E/WindowManager(30601): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665) 06-23 11:54:53.960: E/WindowManager(30601): at dalvik.system.NativeStart.main(Native Method) 06-23 11:54:55.190: I/Process(30601): Sending signal. PID: 30601 SIG: 9
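    A likely source of the OutOfMemoryError in the log above is decoding the full-resolution camera file and then Base64-encoding the whole thing into a single form field. Below is a minimal sketch (assuming the usual android.graphics imports; the helper name and target size are illustrative, not taken from the original code) of downsampling the image before it is compressed and encoded:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Decode a file at reduced resolution so the in-memory bitmap stays small.
        // An inSampleSize of 4 loads roughly 1/16th of the pixels of the original image.
        static Bitmap decodeScaled(String path, int maxDimension) {
            BitmapFactory.Options bounds = new BitmapFactory.Options();
            bounds.inJustDecodeBounds = true;          // read width/height only, no pixel allocation
            BitmapFactory.decodeFile(path, bounds);

            int largerSide = Math.max(bounds.outWidth, bounds.outHeight);
            int sample = 1;
            while (largerSide / (sample * 2) >= maxDimension) {
                sample *= 2;                           // powers of two are honored exactly by the decoder
            }

            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inSampleSize = sample;
            return BitmapFactory.decodeFile(path, options);
        }

    Calling such a helper in imageUpload() instead of BitmapFactory.decodeFile(imagepath) keeps the Base64 payload far smaller; moving the success/failure Toast and dialog.dismiss() calls back onto the UI thread (for example via runOnUiThread) would also avoid the follow-on threading problems hinted at by the leaked-window warning.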

    Read the article

  • Matlab code works with one version but not the other

    - by user1325655
    I have a code that works in Matlab version R2010a but shows errors in matlab R2008a. I am trying to implement a self organizing fuzzy neural network with extended kalman filter. I have the code running but it only works in matlab version R2010a. It doesn't work with other versions. Any help? Code attach function [ c, sigma , W_output ] = SOFNN( X, d, Kd ) %SOFNN Self-Organizing Fuzzy Neural Networks %Input Parameters % X(r,n) - rth traning data from nth observation % d(n) - the desired output of the network (must be a row vector) % Kd(r) - predefined distance threshold for the rth input %Output Parameters % c(IndexInputVariable,IndexNeuron) % sigma(IndexInputVariable,IndexNeuron) % W_output is a vector %Setting up Parameters for SOFNN SigmaZero=4; delta=0.12; threshold=0.1354; k_sigma=1.12; %For more accurate results uncomment the following %format long; %Implementation of a SOFNN model [size_R,size_N]=size(X); %size_R - the number of input variables c=[]; sigma=[]; W_output=[]; u=0; % the number of neurons in the structure Q=[]; O=[]; Psi=[]; for n=1:size_N x=X(:,n); if u==0 % No neuron in the structure? c=x; sigma=SigmaZero*ones(size_R,1); u=1; Psi=GetMePsi(X,c,sigma); [Q,O] = UpdateStructure(X,Psi,d); pT_n=GetMeGreatPsi(x,Psi(n,:))'; else [Q,O,pT_n] = UpdateStructureRecursively(X,Psi,Q,O,d,n); end; KeepSpinning=true; while KeepSpinning %Calculate the error and if-part criteria ae=abs(d(n)-pT_n*O); %approximation error [phi,~]=GetMePhi(x,c,sigma); [maxphi,maxindex]=max(phi); % maxindex refers to the neuron's index if ae>delta if maxphi<threshold %enlarge width [minsigma,minindex]=min(sigma(:,maxindex)); sigma(minindex,maxindex)=k_sigma*minsigma; Psi=GetMePsi(X,c,sigma); [Q,O] = UpdateStructure(X,Psi,d); pT_n=GetMeGreatPsi(x,Psi(n,:))'; else %Add a new neuron and update structure ctemp=[]; sigmatemp=[]; dist=0; for r=1:size_R dist=abs(x(r)-c(r,1)); distIndex=1; for j=2:u if abs(x(r)-c(r,j))<dist distIndex=j; dist=abs(x(r)-c(r,j)); end; end; if dist<=Kd(r) ctemp=[ctemp; c(r,distIndex)]; sigmatemp=[sigmatemp ; sigma(r,distIndex)]; else ctemp=[ctemp; x(r)]; sigmatemp=[sigmatemp ; dist]; end; end; c=[c ctemp]; sigma=[sigma sigmatemp]; Psi=GetMePsi(X,c,sigma); [Q,O] = UpdateStructure(X,Psi,d); KeepSpinning=false; u=u+1; end; else if maxphi<threshold %enlarge width [minsigma,minindex]=min(sigma(:,maxindex)); sigma(minindex,maxindex)=k_sigma*minsigma; Psi=GetMePsi(X,c,sigma); [Q,O] = UpdateStructure(X,Psi,d); pT_n=GetMeGreatPsi(x,Psi(n,:))'; else %Do nothing and exit the while KeepSpinning=false; end; end; end; end; W_output=O; end function [Q_next, O_next,pT_n] = UpdateStructureRecursively(X,Psi,Q,O,d,n) %O=O(t-1) O_next=O(t) p_n=GetMeGreatPsi(X(:,n),Psi(n,:)); pT_n=p_n'; ee=abs(d(n)-pT_n*O); %|e(t)| temp=1+pT_n*Q*p_n; ae=abs(ee/temp); if ee>=ae L=Q*p_n*(temp)^(-1); Q_next=(eye(length(Q))-L*pT_n)*Q; O_next=O + L*ee; else Q_next=eye(length(Q))*Q; O_next=O; end; end function [ Q , O ] = UpdateStructure(X,Psi,d) GreatPsiBig = GetMeGreatPsi(X,Psi); %M=u*(r+1) %n - the number of observations [M,~]=size(GreatPsiBig); %Others Ways of getting Q=[P^T(t)*P(t)]^-1 %************************************************************************** %opts.SYM = true; %Q = linsolve(GreatPsiBig*GreatPsiBig',eye(M),opts); % %Q = inv(GreatPsiBig*GreatPsiBig'); %Q = pinv(GreatPsiBig*GreatPsiBig'); %************************************************************************** Y=GreatPsiBig\eye(M); Q=GreatPsiBig'\Y; O=Q*GreatPsiBig*d'; end %This function works too with x % (X=X and Psi is a Matrix) - Gets you the whole 
GreatPsi % (X=x and Psi is the row related to x) - Gets you just the column related with the observation function [GreatPsi] = GetMeGreatPsi(X,Psi) %Psi - In a row you go through the neurons and in a column you go through number of %observations **** Psi(#obs,IndexNeuron) **** GreatPsi=[]; [N,U]=size(Psi); for n=1:N x=X(:,n); GreatPsiCol=[]; for u=1:U GreatPsiCol=[ GreatPsiCol ; Psi(n,u)*[1; x] ]; end; GreatPsi=[GreatPsi GreatPsiCol]; end; end function [phi, SumPhi]=GetMePhi(x,c,sigma) [r,u]=size(c); %u - the number of neurons in the structure %r - the number of input variables phi=[]; SumPhi=0; for j=1:u % moving through the neurons S=0; for i=1:r % moving through the input variables S = S + ((x(i) - c(i,j))^2) / (2*sigma(i,j)^2); end; phi = [phi exp(-S)]; SumPhi = SumPhi + phi(j); %phi(u)=exp(-S) end; end %This function works too with x, it will give you the row related to x function [Psi] = GetMePsi(X,c,sigma) [~,u]=size(c); [~,size_N]=size(X); %u - the number of neurons in the structure %size_N - the number of observations Psi=[]; for n=1:size_N [phi, SumPhi]=GetMePhi(X(:,n),c,sigma); PsiTemp=[]; for j=1:u %PsiTemp is a row vector ex: [1 2 3] PsiTemp(j)=phi(j)/SumPhi; end; Psi=[Psi; PsiTemp]; %Psi - In a row you go through the neurons and in a column you go through number of %observations **** Psi(#obs,IndexNeuron) **** end; end

    Read the article

  • MediaWiki authentication replacement showing "Login Required" instead of signing user into wiki

    - by arcdegree
    I'm fairly new to MediaWiki and needed a way to automatically log users in after they authenticate to a central server (which creates a session and cookie for applications to use). I wrote a custom authentication extension based on the LDAP Authentication extension and a few others. The extension simply needs to read some session data to create or update a user and then log them in automatically. All the authentication is handled externally; a user cannot even access the wiki website without logging in externally. This extension was placed into production, replacing the old standard MediaWiki authentication system. I also merged user accounts to prepare for the change. By default, a user must be logged in to view, edit, or otherwise do anything in the wiki. My problem: if a user had previously used the built-in MediaWiki authentication system and returned to the wiki, my extension would attempt to auto-log them in, but they would see a "Login Required" page instead of the page they requested, as if they were an anonymous user. If the user then refreshed the page, they could navigate, edit, etc. From what I can tell, this issue resolves itself after the UserID cookie is reset or created fresh (but it has been known to come up again occasionally). To replicate: if there is an old user ID in the "UserID" cookie, the user is shown the "Login Required" page, which is a poor user experience. Another way to trigger this page is to remove the user account from the database and refresh the wiki page; the user will again see the "Login Required" page. Does anyone know how I can use debugging to find out why MediaWiki thinks the user is not signed in when the cookies are set properly and all it takes is a page refresh? 
Here is my extension (simplified a little for this post): <?php $wgExtensionCredits['parserhook'][] = array ( 'name' => 'MyExtension', 'author' => '', ); if (!class_exists('AuthPlugin')) { require_once ( 'AuthPlugin.php' ); } class MyExtensionPlugin extends AuthPlugin { function userExists($username) { return true; } function authenticate($username, $password) { $id = $_SESSION['id']; if($username = $id) { return true; } else { return false; } } function updateUser(& $user) { $name = $user->getName(); $user->load(); $user->mPassword = ''; $user->mNewpassword = ''; $user->mNewpassTime = null; $user->setRealName($_SESSION['name']); $user->setEmail($_SESSION['email']); $user->mEmailAuthenticated = wfTimestampNow(); $user->saveSettings(); return true; } function modifyUITemplate(& $template) { $template->set('useemail', false); $template->set('remember', false); $template->set('create', false); $template->set('domain', false); $template->set('usedomain', false); } function autoCreate() { return true; } function disallowPrefsEditByUser() { return array ( 'wpRealName' => true, 'wpUserEmail' => true, 'wpNick' => true ); } function allowPasswordChange() { return false; } function setPassword( $user, $password ) { return false; } function strict() { return true; } function initUser( & $user ) { } function updateExternalDB( $user ) { return false; } function canCreateAccounts() { return false; } function addUser( $user, $password ) { return false; } function getCanonicalName( $username ) { return $username; } } function SetupAuthMyExtension() { global $wgHooks; global $wgAuth; $wgHooks['UserLoadFromSession'][] = 'Auth_MyExtension_autologin_hook'; $wgHooks['UserLogoutComplete'][] = 'Auth_MyExtension_UserLogoutComplete'; $wgHooks['PersonalUrls'][] = 'Auth_MyExtension_personalURL_hook'; $wgAuth = new MyExtensionPlugin(); } function Auth_MyExtension_autologin_hook($user, &$return_user ) { global $wgUser; global $wgAuth; global $wgContLang; wfSetupSession(); // Give us a user, see if we're around $tmpuser = new User() ; $rc = $tmpuser->newFromSession(); $rc = $tmpuser->load(); if( $rc && $rc->isLoggedIn() ) { if ( $rc->authenticate($rc->getName(), '') ) { return true; } else { $rc->logout(); } } $id = trim($_SESSION['id']); $name = ucfirst(trim($_SESSION['name'])); if (empty($dsid)) { $result = false; // Deny access return true; } $user = User::newFromName($dsid); if (0 == $user->getID() ) { // we have a new user to add... $user->setName( $id); $user->addToDatabase(); $user->setToken(); $user->saveSettings(); $ssUpdate = new SiteStatsUpdate( 0, 0, 0, 0, 1 ); $ssUpdate->doUpdate(); } else { $user->saveToCache(); } // update email, real name, etc. $wgAuth->updateUser( $user ); $result = true; // Go ahead and log 'em in $user->setToken(); $user->saveSettings(); $user->setupSession(); $user->setCookies(); return true; } function Auth_MyExtension_personalURL_hook(& $personal_urls, & $title) { global $wgUser; unset( $personal_urls['mytalk'] ); unset($personal_urls['Userlogin']); $personal_urls['userpage']['text'] = $wgUser->getRealName(); foreach (array('login', 'anonlogin') as $k) { if (array_key_exists($k, $personal_urls)) { unset($personal_urls[$k]); } } return true; } function Auth_MyExtension_UserLogoutComplete(&$user, &$inject_html, $old_name) { setcookie( $GLOBALS['wgCookiePrefix'] . '_session', '', time() - 3600, $GLOBALS['wgCookiePath']); setcookie( $GLOBALS['wgCookiePrefix'] . 'UserName', '', time() - 3600, $GLOBALS['wgCookiePath']); setcookie( $GLOBALS['wgCookiePrefix'] . 
'UserID', '', time() - 3600, $GLOBALS['wgCookiePath']); setcookie( $GLOBALS['wgCookiePrefix'] . 'Token', '', time() - 3600, $GLOBALS['wgCookiePath']); return true; } ?> Here is part of my LocalSettings.php file: ############################# # Disallow Anonymous Access ############################# $wgGroupPermissions['*']['read'] = false; $wgGroupPermissions['*']['edit'] = false; $wgGroupPermissions['*']['createpage'] = false; $wgGroupPermissions['*']['createtalk'] = false; $wgGroupPermissions['*']['createaccount'] = false; $wgShowIPinHeader = false; # For non-logged in users ############################# # Extension: MyExtension ############################# require_once("$IP/extensions/MyExtension.php"); $wgAutoLogin = true; SetupAuthMyExtension(); $wgDisableCookieCheck = true;

    Read the article

  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to developing a web application for currency traders. The application allows traders to enter or upload information about their trades and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database so I am attempting to use couchdb. Those two problems are: 1) Primarily, I have a companion desktop application that users will be able to work with and replicate to the site using couchdb's awesome replication feature and 2) I would like to allow users to be able to define their own custom things to track about trades and generate results based off of what they enter. The schema less nature of couch seems perfect here, but it may end up being harder than it sounds. (I already know couch requires you to define views in advance and such so I was just planning on sticking all the custom attributes in an array and then emitting the array in the view and further processing from there.) What I Am Doing: Right now I am just emitting each trade in couch keyed by each user's system and querying with the key of the system to get an array of trades per system. Simple. I am not using a reduce function currently to calculate any stats because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of rows that are getting emitted from couch: {"total_rows":134,"offset":0,"rows":[ {"id":"5b1dcd47221e160d8721feee4ccc64be", "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"], null, "doc":{ "_id":"5b1dcd47221e160d8721feee4ccc64be", "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b", "couchrest-type":"Trade", "system":"80e40ba2fa43589d57ec3f1d19db41e6", "pair":"EUR/USD", "direction":"Buy", "entry":12600, "exit":12700, "stop_loss":12500, "profit_target":12700, "status":"Closed", "slug":"101332132375", "custom_tracking": [{"name":"signal", "value":"Pin Bar"}] "updated_at":"2010/05/14 04:32:37 +0000", "created_at":"2010/05/14 04:32:37 +0000", "result":100}} ]} In my rails 3 controller I am basically just populating an array of trades such as the one above and then extracting out the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page that I want to display the stats and all the trades: def show @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now ]) @trades.each do |trade| if trade.result > 0 @winning_trades << trade.result elsif trade.result < 0 @losing_trades << trade.result else @breakeven_trades << trade.result end if trade.direction == "Buy" @long_trades << trade.result else @short_trades << trade.result end if trade["custom_tracking"] != nil @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]} end end end I am omitting some other stuff that is going on, but that is the gist of what I am doing. 
Then I am calculating stuff in the view layer to produce some results: <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %> <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %> <ul> <li>Total Trades: <%= @trades.count %></li> <li>Winners: <%= @winning_trades.size %></li> <li>Biggest Winner (Pips): <%= @winning_trades.max %></li> <li>Average Win(Pips): <%= @winning_trades.sum/@winning_trades.size %></li> <li>Losers: <%= @losing_trades.size %></li> <li>Biggest Loser (Pips): <%= @losing_trades.min %></li> <li>Average Loss(Pips): <%= @losing_trades.sum/@losing_trades.size %></li> <li>Breakeven Trades: <%= @breakeven_trades.size %></li> <li>Long Trades: <%= @long_trades.size %></li> <li>Winning Long Trades: <%= winning_long_trades.size %></li> <li>Short Trades: <%= @short_trades.size %></li> <li>Winning Short Trades: <%= winning_short_trades.size %></li> <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li> <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li> </ul> This produces the following results, which aside from a few things is exactly what I want: Total Trades: 134 Winners: 70 Biggest Winner (Pips): 1488 Average Win(Pips): 440 Losers: 58 Biggest Loser (Pips): -516 Average Loss(Pips): -225 Breakeven Trades: 6 Long Trades: 125 Winning Long Trades: 67 Short Trades: 9 Winning Short Trades: 3 Total Pips: 17819 Win Rate (%): 52.23880597014925 What I Am Wondering- Finally The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will only have somewhere under 200 per year, but some users may have a couple thousand trades per year. Probably no more than 5,000 per year. It seems to work ok now, but the page load times are already getting a tad high for my tastes. (About 800ms to generate the page according to rails logs with about a 250ms of that spent in the view layer.) I will end up caching this page I am sure, but I still need the regenerate the page each time a trade is updated and I can't afford to have this be too slow. Sooo..... Is doing something similar here possible with a straight couchdb reduce function? I am assuming handing this off to couch would possibly help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful. Could I use a list function if a reduce was not available due to reduce constraints? Are couchdb list functions suitable for this type of calculations? Anyone have any idea of whether or not list functions perform well? Any hints what one would look like for the type of calculations I am trying to achieve? I thought about other options such as running the calculations at the time each trade was saved or nightly if I had to and saving the results to a statistics doc that I could then query so that all the processing was done ahead of time. I would like this to be the last resort because then I can't really filter out trades by time periods dynamically like I would really like to. (I want to have a slider that a user can slide to only show trades from that time period using the startkey and endkey in couchdb if I can.) If I should continue running the calculations inside the rails app at the time of the page view, what can I do to improve my current implementation. I am new to rails, couch and programming in general. I am sure that I could be doing something better here. 
Do I need to create an array for each stat or is there a better way to do that. I guess I just would really like some advice on how to tackle this problem. I want to keep the page generation time minimal since I anticipate these being some of the highest trafficked pages. My gut is that I will need to offload the statistics calculation to either couch or run the stats in advance of when they are called, but I am not sure. Lastly: Like I mentioned above, one of the primary reasons for using couch is to allow users to define their own things to track per trade. Getting the data into couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades for each named tracking attribute. If anyone can give me any hints to the possibility of doing this that would be great. Thanks a bunch. Would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on stack overflow or not.)

    Read the article

  • Simple Physics Simulation in Java not working

    - by Static Void Main
    Dear experts, I wanted to implement ball physics and as i m newbie, i adapt the code in tutorial http://adam21.web.officelive.com/Documents/JavaPhysicsTutorial.pdf . i try to follow that as i much as i can, but i m not able to apply all physical phenomenon in code, can somebody please tell me, where i m mistaken or i m still doing some silly programming mistake. The balls are moving when i m not calling bounce method and i m unable to avail the bounce method and ball are moving towards left side instead of falling/ending on floor**, Can some body recommend me some better way or similar easy compact way to accomplish this task of applying physics on two ball or more balls with interactivity. here is code ; import java.awt.*; public class AdobeBall { protected int radius = 20; protected Color color; // ... Constants final static int DIAMETER = 40; // ... Instance variables private int m_x; // x and y coordinates upper left private int m_y; private double dx = 3.0; // delta x and y private double dy = 6.0; private double m_velocityX; // Pixels to move each time move() is called. private double m_velocityY; private int m_rightBound; // Maximum permissible x, y values. private int m_bottomBound; public AdobeBall(int x, int y, double velocityX, double velocityY, Color color1) { super(); m_x = x; m_y = y; m_velocityX = velocityX; m_velocityY = velocityY; color = color1; } public double getSpeed() { return Math.sqrt((m_x + m_velocityX - m_x) * (m_x + m_velocityX - m_x) + (m_y + m_velocityY - m_y) * (m_y + m_velocityY - m_y)); } public void setSpeed(double speed) { double currentSpeed = Math.sqrt(dx * dx + dy * dy); dx = dx * speed / currentSpeed; dy = dy * speed / currentSpeed; } public void setDirection(double direction) { m_velocityX = (int) (Math.cos(direction) * getSpeed()); m_velocityY = (int) (Math.sin(direction) * getSpeed()); } public double getDirection() { double h = ((m_x + dx - m_x) * (m_x + dx - m_x)) + ((m_y + dy - m_y) * (m_y + dy - m_y)); double a = (m_x + dx - m_x) / h; return a; } // ======================================================== setBounds public void setBounds(int width, int height) { m_rightBound = width - DIAMETER; m_bottomBound = height - DIAMETER; } // ============================================================== move public void move() { double gravAmount = 0.02; double gravDir = 90; // The direction for the gravity to be in. // ... Move the ball at the give velocity. m_x += m_velocityX; m_y += m_velocityY; // ... Bounce the ball off the walls if necessary. if (m_x < 0) { // If at or beyond left side m_x = 0; // Place against edge and m_velocityX = -m_velocityX; } else if (m_x > m_rightBound) { // If at or beyond right side m_x = m_rightBound; // Place against right edge. 
m_velocityX = -m_velocityX; } if (m_y < 0) { // if we're at top m_y = 0; m_velocityY = -m_velocityY; } else if (m_y > m_bottomBound) { // if we're at bottom m_y = m_bottomBound; m_velocityY = -m_velocityY; } // double speed = Math.sqrt((m_velocityX * m_velocityX) // + (m_velocityY * m_velocityY)); // ...Friction stuff double fricMax = 0.02; // You can use any number, preferably less than 1 double friction = getSpeed(); if (friction > fricMax) friction = fricMax; if (m_velocityX >= 0) { m_velocityX -= friction; } if (m_velocityX <= 0) { m_velocityX += friction; } if (m_velocityY >= 0) { m_velocityY -= friction; } if (m_velocityY <= 0) { m_velocityY += friction; } // ...Gravity stuff m_velocityX += Math.cos(gravDir) * gravAmount; m_velocityY += Math.sin(gravDir) * gravAmount; } public Color getColor() { return color; } public void setColor(Color newColor) { color = newColor; } // ============================================= getDiameter, getX, getY public int getDiameter() { return DIAMETER; } public double getRadius() { return radius; // radius should be a local variable in Ball. } public int getX() { return m_x; } public int getY() { return m_y; } } using adobeBall: import java.awt.*; import java.awt.event.*; import javax.swing.*; public class AdobeBallImplementation implements Runnable { private static final long serialVersionUID = 1L; private volatile boolean Play; private long mFrameDelay; private JFrame frame; private MyKeyListener pit; /** true means mouse was pressed in ball and still in panel. */ private boolean _canDrag = false; private static final int MAX_BALLS = 50; // max number allowed private int currentNumBalls = 2; // number currently active private AdobeBall[] ball = new AdobeBall[MAX_BALLS]; public AdobeBallImplementation(Color ballColor) { frame = new JFrame("simple gaming loop in java"); frame.setSize(400, 400); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pit = new MyKeyListener(); pit.setPreferredSize(new Dimension(400, 400)); frame.setContentPane(pit); ball[0] = new AdobeBall(34, 150, 7, 2, Color.YELLOW); ball[1] = new AdobeBall(50, 50, 5, 3, Color.BLUE); frame.pack(); frame.setVisible(true); frame.setBackground(Color.white); start(); frame.addMouseListener(pit); frame.addMouseMotionListener(pit); } public void start() { Play = true; Thread t = new Thread(this); t.start(); } public void stop() { Play = false; } public void run() { while (Play == true) { // bounce(ball[0],ball[1]); runball(); pit.repaint(); try { Thread.sleep(mFrameDelay); } catch (InterruptedException ie) { stop(); } } } public void drawworld(Graphics g) { for (int i = 0; i < currentNumBalls; i++) { g.setColor(ball[i].getColor()); g.fillOval(ball[i].getX(), ball[i].getY(), 40, 40); } } public double pointDistance (double x1, double y1, double x2, double y2) { return Math.sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1)); } public void runball() { while (Play == true) { try { for (int i = 0; i < currentNumBalls; i++) { for (int j = 0; j < currentNumBalls; j++) { if (pointDistance(ball[i].getX(), ball[i].getY(), ball[j].getX(), ball[j].getY()) < ball[i] .getRadius() + ball[j].getRadius() + 2) { // bounce(ball[i],ball[j]); ball[i].setBounds(pit.getWidth(), pit.getHeight()); ball[i].move(); pit.repaint(); } } } try { Thread.sleep(50); } catch (Exception e) { System.exit(0); } } catch (Exception e) { e.printStackTrace(); } } } public static double pointDirection(int x1, int y1, int x2, int y2) { double H = Math.sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1)); // The // hypotenuse double x = 
x2 - x1; // The opposite double y = y2 - y1; // The adjacent double angle = Math.acos(x / H); angle = angle * 57.2960285258; if (y < 0) { angle = 360 - angle; } return angle; } public static void bounce(AdobeBall b1, AdobeBall b2) { if (b2.getSpeed() == 0 && b1.getSpeed() == 0) { // Both balls are stopped. b1.setDirection(pointDirection(b1.getX(), b1.getY(), b2.getX(), b2 .getY())); b2.setDirection(pointDirection(b2.getX(), b2.getY(), b1.getX(), b1 .getY())); b1.setSpeed(1); b2.setSpeed(1); } else if (b2.getSpeed() == 0 && b1.getSpeed() != 0) { // B1 is moving. B2 is stationary. double angle = pointDirection(b1.getX(), b1.getY(), b2.getX(), b2 .getY()); b2.setSpeed(b1.getSpeed()); b2.setDirection(angle); b1.setDirection(angle - 90); } else if (b1.getSpeed() == 0 && b2.getSpeed() != 0) { // B1 is moving. B2 is stationary. double angle = pointDirection(b2.getX(), b2.getY(), b1.getX(), b1 .getY()); b1.setSpeed(b2.getSpeed()); b1.setDirection(angle); b2.setDirection(angle - 90); } else { // Both balls are moving. AdobeBall tmp = b1; double angle = pointDirection(b2.getX(), b2.getY(), b1.getX(), b1 .getY()); double origangle = b1.getDirection(); b1.setDirection(angle + origangle); angle = pointDirection(tmp.getX(), tmp.getY(), b2.getX(), b2.getY()); origangle = b2.getDirection(); b2.setDirection(angle + origangle); } } public static void main(String[] args) { javax.swing.SwingUtilities.invokeLater(new Runnable() { public void run() { new AdobeBallImplementation(Color.red); } }); } } *EDIT:*ok splitting the code using new approach for gravity from this forum: this code also not working the ball is not coming on floor: public void mymove() { m_x += m_velocityX; m_y += m_velocityY; if (m_y + m_bottomBound > 400) { m_velocityY *= -0.981; // setY(400 - m_bottomBound); m_y = 400 - m_bottomBound; } // ... Bounce the ball off the walls if necessary. if (m_x < 0) { // If at or beyond left side m_x = 0; // Place against edge and m_velocityX = -m_velocityX; } else if (m_x > m_rightBound) { // If at or beyond right side m_x = m_rightBound - 20; // Place against right edge. m_velocityX = -m_velocityX; } if (m_y < 0) { // if we're at top m_y = 1; m_velocityY = -m_velocityY; } else if (m_y > m_bottomBound) { // if we're at bottom m_y = m_bottomBound - 20; m_velocityY = -m_velocityY; } } thanks a lot for any correction and help. jibby
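    One detail worth checking in move() above: Math.cos and Math.sin take radians, but gravDir = 90 is written as degrees, so the gravity term nudges the x velocity left every frame (cos of 90 radians is about -0.45) while only partially pulling the ball down, which matches the balls drifting left instead of falling. A minimal standalone sketch of a per-frame update that applies gravity straight down and damps each bounce (field names loosely mirror AdobeBall; the constants are illustrative, not taken from the original code):

        // Self-contained sketch of one integration step with gravity and a damped floor bounce.
        class FallingBall {
            double x, y;                              // position in pixels
            double vx, vy;                            // velocity in pixels per frame
            static final double GRAVITY = 0.5;        // illustrative: added to vy every frame
            static final double RESTITUTION = 0.8;    // fraction of speed kept after a bounce

            void step(double rightWall, double floorY) {
                vy += GRAVITY;                        // gravity acts straight down (+y on screen)
                x += vx;
                y += vy;

                if (y > floorY) {                     // hit the floor: clamp, reflect, and damp
                    y = floorY;
                    vy = -vy * RESTITUTION;
                }
                if (x < 0 || x > rightWall) {         // bounce off the side walls
                    vx = -vx;
                    x = Math.max(0, Math.min(x, rightWall));
                }
            }
        }

    If the gravity direction really does need to be expressed as an angle in degrees, converting it first with Math.toRadians(gravDir) before calling Math.cos/Math.sin removes the unit mismatch.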

    Read the article

  • Erroneous/Incorrect C2248 error using Visual Studio 2010

    - by Dylan Bourque
    I'm seeing what I believe to be an erroneous/incorrect compiler error using the Visual Studio 2010 compiler. I'm in the process of up-porting our codebase from Visual Studio 2005 and I ran across a construct that was building correctly before but now generates a C2248 compiler error. Obviously, the code snippet below has been generic-ized, but it is a compilable example of the scenario. The ObjectPtr<T> C++ template comes from our codebase and is the source of the error in question. What appears to be happening is that the compiler is generating a call to the copy constructor for ObjectPtr<T> when it shouldn't (see my comment block in the SomeContainer::Foo() method below). For this code construct, there is a public cast operator for SomeUsefulData * on ObjectPtr<SomeUsefulData> but it is not being chosen inside the true expression if the ?: operator. Instead, I get the two errors in the block quote below. Based on my knowledge of C++, this code should compile. Has anyone else seen this behavior? If not, can someone point me to a clarification of the compiler resolution rules that would explain why it's attempting to generate a copy of the object in this case? Thanks in advance, Dylan Bourque Visual Studio build output: c:\projects\objectptrtest\objectptrtest.cpp(177): error C2248: 'ObjectPtr::ObjectPtr' : cannot access private member declared in class 'ObjectPtr' with [ T=SomeUsefulData ] c:\projects\objectptrtest\objectptrtest.cpp(25) : see declaration of 'ObjectPtr::ObjectPtr' with [ T=SomeUsefulData ] c:\projects\objectptrtest\objectptrtest.cpp(177): error C2248: 'ObjectPtr::ObjectPtr' : cannot access private member declared in class 'ObjectPtr' with [ T=SomeUsefulData ] c:\projects\objectptrtest\objectptrtest.cpp(25) : see declaration of 'ObjectPtr::ObjectPtr' with [ T=SomeUsefulData ] Below is a minimal, compilable example of the scenario: #include <stdio.h> #include <tchar.h> template<class T> class ObjectPtr { public: ObjectPtr<T> (T* pObj = NULL, bool bShared = false) : m_pObject(pObj), m_bObjectShared(bShared) {} ~ObjectPtr<T> () { Detach(); } private: // private, unimplemented copy constructor and assignment operator // to guarantee that ObjectPtr<T> objects are not copied ObjectPtr<T> (const ObjectPtr<T>&); ObjectPtr<T>& operator = (const ObjectPtr<T>&); public: T * GetObject () { return m_pObject; } const T * GetObject () const { return m_pObject; } bool HasObject () const { return (GetObject()!=NULL); } bool IsObjectShared () const { return m_bObjectShared; } void ObjectShared (bool bShared) { m_bObjectShared = bShared; } bool IsNull () const { return !HasObject(); } void Attach (T* pObj, bool bShared = false) { Detach(); if (pObj != NULL) { m_pObject = pObj; m_bObjectShared = bShared; } } void Detach (T** ppObject = NULL) { if (ppObject != NULL) { *ppObject = m_pObject; m_pObject = NULL; m_bObjectShared = false; } else { if (HasObject()) { if (!IsObjectShared()) delete m_pObject; m_pObject = NULL; m_bObjectShared = false; } } } void Detach (bool bDeleteIfNotShared) { if (HasObject()) { if (bDeleteIfNotShared && !IsObjectShared()) delete m_pObject; m_pObject = NULL; m_bObjectShared = false; } } bool IsEqualTo (const T * pOther) const { return (GetObject() == pOther); } public: T * operator -> () { ASSERT(HasObject()); return m_pObject; } const T * operator -> () const { ASSERT(HasObject()); return m_pObject; } T & operator * () { ASSERT(HasObject()); return *m_pObject; } const T & operator * () const { ASSERT(HasObject()); return (const C &)(*m_pObject); } operator T * () 
{ return m_pObject; } operator const T * () const { return m_pObject; } operator bool() const { return (m_pObject!=NULL); } ObjectPtr<T>& operator = (T * pObj) { Attach(pObj, false); return *this; } bool operator == (const T * pOther) const { return IsEqualTo(pOther); } bool operator == (T * pOther) const { return IsEqualTo(pOther); } bool operator != (const T * pOther) const { return !IsEqualTo(pOther); } bool operator != (T * pOther) const { return !IsEqualTo(pOther); } bool operator == (const ObjectPtr<T>& other) const { return IsEqualTo(other.GetObject()); } bool operator != (const ObjectPtr<T>& other) const { return !IsEqualTo(other.GetObject()); } bool operator == (int pv) const { return (pv==NULL)? IsNull() : (LPVOID(m_pObject)==LPVOID(pv)); } bool operator != (int pv) const { return !(*this == pv); } private: T * m_pObject; bool m_bObjectShared; }; // Some concrete type that holds useful data class SomeUsefulData { public: SomeUsefulData () {} ~SomeUsefulData () {} }; // Some concrete type that holds a heap-allocated instance of // SomeUsefulData class SomeContainer { public: SomeContainer (SomeUsefulData* pUsefulData) { m_pData = pUsefulData; } ~SomeContainer () { // nothing to do here } public: bool EvaluateSomeCondition () { // fake condition check to give us an expression // to use in ?: operator below return true; } SomeUsefulData* Foo () { // this usage of the ?: operator generates a C2248 // error b/c it's attempting to call the copy // constructor on ObjectPtr<T> return EvaluateSomeCondition() ? m_pData : NULL; /**********[ DISCUSSION ]********** The following equivalent constructs compile w/out error and behave correctly: (1) explicit cast to SomeUsefulData* as a comiler hint return EvaluateSomeCondition() ? (SomeUsefulData *)m_pData : NULL; (2) if/else instead of ?: if (EvaluateSomeCondition()) return m_pData; else return NULL; (3) skip the condition check and return m_pData as a SomeUsefulData* directly return m_pData; **********[ END DISCUSSION ]**********/ } private: ObjectPtr<SomeUsefulData> m_pData; }; int _tmain(int argc, _TCHAR* argv[]) { return 0; }

    Read the article

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

    Read the article

  • Syncing (uploading) multiple data items to the remote server from iPhone

    - by Ajay
    i am try to sync or upload the data on the remote server from iphone but not getting it.I try this from 1 week but didn't success how can solve this .I am using the NSURLConnection methods or any one give idea on ASIHTTPRequest method but i am new for *ASIHTTPReques*t .I need this method only .For this code like this -(void)sendRequestforContent { //this for finding the date of sync on the server NSDate* date = [NSDate date]; //Create the dateformatter object NSDateFormatter* formatter = [[[NSDateFormatter alloc] init] autorelease]; //Set the required date format [formatter setDateFormat:@"dd-MMM-yyyy"]; //Get the string date NSString* str = [formatter stringFromDate:date]; NSError *error = nil; NSHTTPURLResponse *response = nil; NSMutableData *postBody = [NSMutableData data]; NSURL *url = [NSURL URLWithString:@"http://www.google.com"]; NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url]; NSString *boundary = @"-------------------a9d8vyb89089dy70"; NSString *contentType = [NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary]; [request setHTTPMethod:@"POST"]; [request setValue:contentType forHTTPHeaderField:@"Content-Type"]; [postBody appendData:[[NSString stringWithFormat:@"--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this is for TOKEN_API [postBody appendData:[@"Content-disposition: form-data; name=\"Token\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[tokenapi dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the CONTENT_ID [postBody appendData:[@"Content-disposition: form-data; name=\"contentID\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[content_id dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the CONTENTTYPE_ID [postBody appendData:[@"Content-disposition: form-data; name=\"contentTypeID\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; NSString *ContentTypeString = [NSString stringWithFormat:@"%d",content_type]; [postBody appendData:[ContentTypeString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the CONTENT_Location_Id [postBody appendData:[@"Content-disposition: form-data; name=\"contentLocationID\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[contenLocation_id dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this is for the User_Caption [postBody appendData:[@"Content-disposition: form-data; name=\"userCaption\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[user_caption dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this is for the User_Comment [postBody appendData:[@"Content-disposition: form-data; name=\"userComment\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[user_comment dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the Tags [postBody 
appendData:[@"Content-disposition: form-data; name=\"tags\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[tag dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the Date_Record [postBody appendData:[@"Content-disposition: form-data; name=\"dateRecorded\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[date_recorded dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the image_data [postBody appendData:[[NSString stringWithFormat:@"Content-disposition: form-data; name=\"image_file\"; filename=\"%@\"\r\n",@"image.jpg"] dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[@"Content-Type: image/jpg\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:image]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n",boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the Share_type [postBody appendData:[@"Content-disposition: form-data; name=\"shareType\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; NSString *ShareString = [NSString stringWithFormat:@"%d",share_type]; [postBody appendData:[ShareString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the Views [postBody appendData:[@"Content-disposition: form-data; name=\"views\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; NSString *ViewsString = [NSString stringWithFormat:@"%d",views]; [postBody appendData:[ViewsString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the PLAY_time [postBody appendData:[@"Content-disposition: form-data; name=\"playTime\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; NSString *TimeString = [NSString stringWithFormat:@"%d",play_time]; [postBody appendData:[TimeString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; //this for the Posted_By [postBody appendData:[@"Content-disposition: form-data; name=\"postedBy\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[postred_by dataUsingEncoding:NSUTF8StringEncoding]]; //this for the AVG_Rating [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[@"Content-disposition: form-data; name=\"avgRating\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; NSString *AvgString = [NSString stringWithFormat:@"%d",avg_rating]; [postBody appendData:[AvgString dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[@"Content-disposition: form-data; name=\"LastSyncDate\"\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[str dataUsingEncoding:NSUTF8StringEncoding]]; [postBody appendData:[[NSString stringWithFormat:@"\r\n--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]]; [request setHTTPBody:postBody]; NSData *shoutData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; NSString 
*returnString = [[NSString alloc] initWithData:shoutData encoding:NSUTF8StringEncoding]; NSLog(@"%@",returnString); } it is not going into the this mthods -(void)connectionDidFinishLoading:(NSURLConnection *)connection { loginStatus = [[NSString alloc] initWithBytes: [webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding]; NSLog(@"%@",loginStatus); } It show me html page on console I hope people help me to solve it out

    Read the article

  • Log in and send SMS with Java

    - by noobed
    I'm trying to log into a site and afterwards to send a SMS (you can do that for free by the site - it's nothing more than just enter some text into some fields and 'submit'). I've used wireshark to track some of the post/get requests that my machine has been exchanging with the server - when using the browser. I'd like to paste some of my Java code: URL url; String urlP = "maccount=myRawUserName7&" + "mpassword=myRawPassword&" + "redirect_http=http&" + "submit=........"; String urlParameters = URLEncoder.encode(urlP, "CP1251"); HttpURLConnection connection = null; // Create connection url = new URL("http://www.mtel.bg/1/mm/smscenter/mc/sendsms/ma/index/mo/1"); connection = (HttpURLConnection) url.openConnection(); connection.setRequestMethod("POST"); //I'm not really sure if these RequestProperties are necessary //so I'll leave them as a comment // connection.setRequestProperty("Content-Type", // "application/x-www-form-urlencoded"); // connection.setRequestProperty("Accept-Charset", "CP1251"); // connection.setRequestProperty("Content-Length", // "" + Integer.toString(urlParameters.getBytes().length)); // connection.setRequestProperty("Content-Language", "en-US"); connection.setUseCaches(false); connection.setDoInput(true); connection.setDoOutput(true); // Send request DataOutputStream wr = new DataOutputStream( connection.getOutputStream()); wr.writeBytes(urlParameters); wr.flush(); wr.close(); String headerName[] = new String[10]; int count = 0; for (int i = 1; (headerName[count] = connection.getHeaderFieldKey(i)) != null; i++) { if (headerName[count].equals("Set-Cookie")) { headerName[count++] = connection.getHeaderField(i); } } //I'm not sure if I have to close the connection here or not if (connection != null) { connection.disconnect(); } //the code above should be the login part //----------------------------------------- //this is copy-pasted from wireshark's info. String smsParam="from=men&" + "sender=0&" + "msisdn=359886737498&" + "tophone=0&" + "smstext=tova+e+proba%21+1.&" + "id=&" + "sendaction=&" + "direction=&" + "msgLen=84"; url = new URL("http://www.mtel.bg/moyat-profil-sms-tsentar_3004/" + "mm/smscenter/mc/sendsms/ma/index"); connection = (HttpURLConnection) url.openConnection(); connection.setRequestMethod("POST"); connection.setRequestProperty("Cookie", headerName[0]); connection.setRequestProperty("Cookie", headerName[1]); //conn urlParameters = URLEncoder.encode(urlP, "CP1251"); connection.setUseCaches(false); connection.setDoInput(true); connection.setDoOutput(true); wr = new DataOutputStream( connection.getOutputStream()); wr.writeBytes(urlParameters); wr.flush(); wr.close(); //I'm not rly sure what exactly to do with this response. // Get Response InputStream is = connection.getInputStream(); BufferedReader rd = new BufferedReader(new InputStreamReader(is, "CP1251")); String line; StringBuffer response = new StringBuffer(); while ((line = rd.readLine()) != null) { response.append(line); response.append('\r'); } rd.close(); System.out.println(response.toString()); if (connection != null) { connection.disconnect(); } so that's my code so far. When I execute it ... I don't receive any text on my phone - so it clearly doesn't work as supposed to. I would appreciate any guidance or remarks. Is my cookie handling wrong? Is my login method wrong? Do I pass the right URLs. Do I encode and send the parameter string correctly? Is there any addition valuable data from these POSTs I should take? P.S. 
just in any case let me tell you that the username and password is not real. For security reasons I don't want to give valid ones. (I think this is appropriate approach) Here are the POST requests: POST /1/mm/auth/mc/auth/ma/index/mo/1 HTTP/1.1 Host: www.mtel.bg User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Referer: http://www.mtel.bg/1/mm/smscenter/mc/sendsms/ma/index/mo/1 Cookie: __utma=209782857.541729286.1349267381.1349270269.1349274374.3; __utmc=209782857; __utmz=209782857.1349267381.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __atuvc=28%7C40; PHPSESSID=q0mage2usmv34slcv3dmd6t057; __utmb=209782857.3.10.1349274374 Content-Type: multipart/form-data; boundary=---------------------------151901450223722 Content-Length: 475 -----------------------------151901450223722 Content-Disposition: form-data; name="maccount" myRawUserName -----------------------------151901450223722 Content-Disposition: form-data; name="mpassword" myRawPassword -----------------------------151901450223722 Content-Disposition: form-data; name="redirect_https" http -----------------------------151901450223722 Content-Disposition: form-data; name="submit" ........ -----------------------------151901450223722-- HTTP/1.1 302 Found Server: nginx Date: Wed, 03 Oct 2012 14:26:40 GMT Content-Type: text/html; charset=Utf-8 Connection: close Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: /moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index Content-Length: 0 The above text is vied with wireshark's follow tcp stream when pressing the log in button. POST /moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index HTTP/1.1 *same as the above ones* Referer: http://www.mtel.bg/moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index Cookie: __utma=209782857.541729286.1349267381.1349270269.1349274374.3; __utmc=209782857; __utmz=209782857.1349267381.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __atuvc=29%7C40; PHPSESSID=q0mage2usmv34slcv3dmd6t057; __utmb=209782857.4.10.1349274374 Content-Type: application/x-www-form-urlencoded Content-Length: 147 from=men&sender=0&msisdn=35988888888&tophone=0&smstext=this+is+some+FREE+SMS+text%21+100+char+per+sms+only%21&id=&sendaction=&direction=&msgLen=50 HTTP/1.1 302 Found Server: nginx Date: Wed, 03 Oct 2012 14:31:38 GMT Content-Type: text/html; charset=Utf-8 Connection: close Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: /moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/success/s/1 Content-Length: 0 The above text is when you press the send button.
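    A few things stand out when the code is compared against the captured requests: URLEncoder.encode() is applied to the whole "key=value&key=value" string, which also encodes the '=' and '&' separators and corrupts the form body; the second POST writes urlParameters (still built from the login string urlP) instead of smsParam; and calling setRequestProperty("Cookie", ...) twice replaces the first value instead of adding a second one, so the PHPSESSID cookie can easily get lost. Below is a minimal sketch of the same two POSTs that lets java.net.CookieManager carry the session cookie between requests. The URLs and field values are copied from the question; everything else is illustrative, and since the capture shows the login endpoint expecting multipart/form-data, the urlencoded login body may still need adjusting.
    import java.io.DataOutputStream;
    import java.net.CookieHandler;
    import java.net.CookieManager;
    import java.net.CookiePolicy;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class MtelSmsSketch {

        // Encode each form value separately; encoding the whole query string
        // also encodes '=' and '&' and breaks the request body.
        static String param(String name, String value) throws Exception {
            return URLEncoder.encode(name, "CP1251") + "=" + URLEncoder.encode(value, "CP1251");
        }

        static void post(String address, String body) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(address).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            DataOutputStream out = new DataOutputStream(conn.getOutputStream());
            out.writeBytes(body);
            out.flush();
            out.close();
            // Reading the response code forces the request and lets the
            // default CookieManager pick up any Set-Cookie headers.
            System.out.println(address + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
        }

        public static void main(String[] args) throws Exception {
            // Keep and replay cookies (PHPSESSID) between the two POSTs.
            CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));

            String login = param("maccount", "myRawUserName") + "&"
                    + param("mpassword", "myRawPassword") + "&"
                    + param("redirect_http", "http") + "&"
                    + param("submit", "........");
            post("http://www.mtel.bg/1/mm/smscenter/mc/sendsms/ma/index/mo/1", login);

            String sms = param("from", "men") + "&" + param("sender", "0") + "&"
                    + param("msisdn", "359886737498") + "&" + param("tophone", "0") + "&"
                    + param("smstext", "tova e proba! 1.") + "&" + param("id", "") + "&"
                    + param("sendaction", "") + "&" + param("direction", "") + "&"
                    + param("msgLen", "84");
            post("http://www.mtel.bg/moyat-profil-sms-tsentar_3004/mm/smscenter/mc/sendsms/ma/index", sms);
        }
    }
    If the server still rejects the request, comparing the headers this sketch sends against the wireshark capture (Content-Type, Referer, the multipart boundary) is the next thing to check.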

    Read the article

  • Session is working in Localhost Properly but not Online (Cpanel)

    - by nando pandi
    Hello guys, sorry for asking again about my question from yesterday - it is still not solved, even after the advice you gave me. I have removed all of the spaces but the problem is still showing up: it works perfectly on localhost but not on cPanel. Here are the errors it gives:
    Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/scalepro/public_html/Admin Panel/Remote Employee/main.php:1) in /home/scalepro/public_html/Admin Panel/Remote Employee/main.php on line 1
    Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/scalepro/public_html/Admin Panel/Remote Employee/main.php:1) in /home/scalepro/public_html/Admin Panel/Remote Employee/main.php on line 1
    Warning: Cannot modify header information - headers already sent by (output started at /home/scalepro/public_html/Admin Panel/Remote Employee/main.php:1) in /home/scalepro/public_html/Admin Panel/Remote Employee/main.php on line 13
    Warning: Unknown: Your script possibly relies on a session side-effect which existed until PHP 4.2.3. Please be advised that the session extension does not consider global variables as a source of data, unless register_globals is enabled. You can disable this functionality and this warning by setting session.bug_compat_42 or session.bug_compat_warn to off, respectively in Unknown on line 0
    Can anyone please help? Here is my code:
    <?php
    session_start();
    require_once('../../Admin Panel/db.php');
    if(isset($_POST['email']) && !empty($_POST['email']) && isset($_POST['password']) && !empty($_POST['password']))
    {
        $email = $_POST['email'];
        $password = $_POST['password'];
        $query="SELECT RemoteEmployeeFullName, RemoteEmployeeEmail, RemoteEmployeePassword FROM remoteemployees WHERE RemoteEmployeeEmail='".$email."' AND RemoteEmployeePassword='".$password."'";
        $queryrun=$connection->query($query);
        if($queryrun->num_rows > 0)
        {
            $_SESSION['email']=$RemoteEmployeeFullName;
            header("Location: /home/scalepro/public_html/Admin Panel/Remote Employee/REPLists.php");
        }
        else
        {
            echo 'Email: <b>'.$email. '</b> or Password <b>'. $password.'</b> Is Not Typed Correctly Try Again Please!.';
            header( "refresh:5;url= /home/scalepro/public_html/spd/myaccount.php" );
        }
    }
    else
    {
        header( "refresh:5;url= /home/scalepro/public_html/spd/myaccount.php" );
    }
    ?>
    If the condition is true, the user is redirected to a page named REPLists.php. Here is that page:
<?php session_start(); require_once('../../Admin Panel/db.php'); ?> <html> <head> <style> .wrapper { width:1250px; height:auto; border:solid 1px #000; margin:0 auto; padding:5px; border-radius:5px; -webkit-border-radius:5px; -moz-border-radius:5px; -ms-border-radius:5px; } .wrapper .header { width:1250px; height:20px; border-bottom:solid 1px #f0eeee; margin:auto 0; margin-bottom:12px; } .wrapper .header div { text-decoration:none; color:#F60; } .wrapper .header div a { text-decoration:none; color:#F60; } .wrapper .Labelcon { width:1250px; height:29px; border-bottom:solid 1px #ccc; } .wrapper .Labelcon .Label { width:125px; height:20px; float:left; text-align:center; border-left:1px solid #f0eeee; font:Verdana, Geneva, sans-serif; font-size:14.3px; font-weight:bold; } .wrapper .Valuecon { width:1250px; height:29px; border-bottom:solid 1px #ccc; color:#F60; text-decoration:none; } .wrapper .Valuecon .Value { width:125px; height:20px; float:left; text-align:center; border-left:1px solid #f0eeee; font-size:14px; } </style> </head> <body> <div class="wrapper"> <div class="header"> <div style="float:left;"><font color="#000000">Email: </font> <?php if(isset($_SESSION['email'])) { echo $_SESSION['email']; } ?> </div> <div style="float:right;"> <a href="#">My Profile</a> | <a href="logout.php">Logout</a></div> </div> <div class="Labelcon"> <div class="Label">Property ID</div> <div class="Label">Property Type</div> <div class="Label">Property Deal Type</div> <div class="Label">Property Owner</div> <div class="Label">Proposted Price</div> </div> <?php if(!isset($_SESSION['email'])) { header('Location:../../spd/myaccount.php'); } else { $query = "SELECT properties.PropertyID, properties.PropertyType, properties.PropertyDealType, properties.Status, properties.PropostedPrice, remoteemployees.RemoteEmployeeFullName, propertyowners.PropertyOwnerName, propertydealers.PropertyDealerName FROM remoteemployees, propertyowners, propertydealers, properties WHERE properties.PropertyOwnerID=propertyowners.PropertyOwnerID AND properties.PropertyDealerID=propertydealers.PropertyDealerID AND remoteemployees.RemoteEmployeeID=properties.RemoteEmployeeID ORDER BY properties.PropertyID "; $query_run = $connection->query($query); if( $connection->error ) exit( $connection->error ); while($row=$query_run->fetch_assoc()) { ?> <div class="Valuecon"> <div class="Value"><?php echo $row['PropertyID'] ?></div> <div class="Value"><?php echo $row['PropertyType'] ?></div> <div class="Value"><?php echo $row['PropertyDealType']?></div> <div class="Value"><?php echo $row['PropertyOwnerName'] ?></div> <div class="Value"><?php echo $row['PropostedPrice'];?></div> </div> <?php } }?> </div> </body> </html>
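    All three "headers already sent" warnings point at main.php line 1, meaning something is output before session_start() ever runs; when a script works locally but fails after uploading to cPanel, the usual culprit is a UTF-8 byte-order mark (or a stray space/newline) saved at the very top of the file rather than anything in the code itself. Two further problems are visible in the script: $_SESSION['email'] is assigned from $RemoteEmployeeFullName, which is never defined, and header("Location: ...") is given a filesystem path under /home/scalepro/... instead of a URL. A rough sketch of the top of main.php, assuming the same query; the redirect path is a placeholder to replace with the page's real URL:
    <?php
    // Save this file as UTF-8 *without* BOM and make sure nothing - not even a
    // space or blank line - is output before session_start().
    session_start();
    require_once('../../Admin Panel/db.php');

    if (!empty($_POST['email']) && !empty($_POST['password'])) {
        $email    = $_POST['email'];
        $password = $_POST['password'];

        $query = "SELECT RemoteEmployeeFullName FROM remoteemployees
                  WHERE RemoteEmployeeEmail='" . $email . "'
                  AND RemoteEmployeePassword='" . $password . "'";
        $queryrun = $connection->query($query);

        if ($queryrun->num_rows > 0) {
            $row = $queryrun->fetch_assoc();
            // Store the value fetched from the database, not the
            // undefined $RemoteEmployeeFullName variable.
            $_SESSION['email'] = $row['RemoteEmployeeFullName'];

            // Redirect with a URL path relative to the document root
            // (placeholder), never the server's filesystem path.
            header('Location: /path/to/REPLists.php');
            exit;
        }
    }
    ?>
    The query also interpolates raw POST values into SQL; escaping them with $connection->real_escape_string() (or switching to prepared statements) is worth doing at the same time.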

    Read the article

  • Inheritance issue

    - by VenkateshGudipati
    hi Friends i am facing a issue in Inheritance i have a interface called Irewhizz interface irewhzz { void object save(object obj); void object getdata(object obj); } i write definition in different class like public user:irewhzz { public object save(object obj); { ....... } public object getdata(object obj); { ....... } } this is antoher class public client:irewhzz { public object save(object obj); { ....... } public object getdata(object obj); { ....... } } now i have different classes like public partial class RwUser { #region variables IRewhizzDataHelper irewhizz; IRewhizzRelationDataHelper irewhizzrelation; private string _firstName; private string _lastName; private string _middleName; private string _email; private string _website; private int _addressId; private string _city; private string _zipcode; private string _phone; private string _fax; //private string _location; private string _aboutMe; private string _username; private string _password; private string _securityQuestion; private string _securityQAnswer; private Guid _user_Id; private long _rwuserid; private byte[] _image; private bool _changepassword; private string _mobilephone; private int _role; #endregion //IRewhizz is the interface and its functions are implimented by UserDataHelper class //RwUser Class is inheriting the UserDataHelper Properties and functions. //Here UserDataHelper functions are called with Irewhizz Interface Object but not with the //UserDataHelper class Object It will resolves the unit testing conflict. #region Constructors public RwUser() : this(new UserDataHelper(), new RewhizzRelationalDataHelper()) { } public RwUser(IRewhizzDataHelper repositary, IRewhizzRelationDataHelper relationrepositary) { irewhizz = repositary; irewhizzrelation = relationrepositary; } #endregion #region Properties public int Role { get { return _role; } set { _role = value; } } public string MobilePhone { get { return _mobilephone; } set { _mobilephone = value; } } public bool ChangePassword { get { return _changepassword; } set { _changepassword = value; } } public byte[] Image { get { return _image; } set { _image = value; } } public string FirstName { get { return _firstName; } set { _firstName = value; } } public string LastName { get { return _lastName; } set { _lastName = value; } } public string MiddleName { get { return _middleName; } set { _middleName = value; } } public string Email { get { return _email; } set { _email = value; } } public string Website { get { return _website; } set { _website = value; } } public int AddressId { get { return _addressId; } set { _addressId = value; } } public string City { get { return _city; } set { _city = value; } } public string Zipcode { get { return _zipcode; } set { _zipcode = value; } } public string Phone { get { return _phone; } set { _phone = value; } } public string Fax { get { return _fax; } set { _fax = value; } } //public string Location //{ // get // { // return _location; // } // set // { // _location = value; // } //} public string AboutMe { get { return _aboutMe; } set { _aboutMe = value; } } public string username { get { return _username; } set { _username = value; } } public string password { get { return _password; } set { _password = value; } } public string SecurityQuestion { get { return _securityQuestion; } set { _securityQuestion = value; } } public string SecurityQAnswer { get { return _securityQAnswer; } set { _securityQAnswer = value; } } public Guid UserID { get { return _user_Id; } set { _user_Id = value; } } public long RwUserID { get { 
return _rwuserid; } set { _rwuserid = value; } } #endregion #region MemberFunctions // DataHelperDataContext db = new DataHelperDataContext(); // RewhizzDataHelper rwdh=new RewhizzDataHelper(); //It saves user information entered by user and returns the id of that user public object saveUserInfo(RwUser userObj) { userObj.UserID = irewhizzrelation.GetUserId(username); var res = irewhizz.saveData(userObj); return res; } //It returns the security questions for user registration } public class Agent : RwUser { IRewhizzDataHelper irewhizz; IRewhizzRelationDataHelper irewhizzrelation; private int _roleid; private int _speclisationid; private int[] _language; private string _brokaragecompany; private int _loctionType_lk; private string _rolename; private int[] _specialization; private string _agentID; private string _expDate; private string _regstates; private string _selLangs; private string _selSpels; private string _locations; public string Locations { get { return _locations; } set { _locations = value; } } public string SelectedLanguages { get { return _selLangs; } set { _selLangs = value; } } public string SelectedSpecialization { get { return _selSpels; } set { _selSpels = value; } } public string RegisteredStates { get { return _regstates; } set { _regstates = value; } } //private string _registeredStates; public string AgentID { get { return _agentID; } set { _agentID = value; } } public string ExpDate { get { return _expDate; } set { _expDate = value; } } private int[] _registeredStates; public SelectList RegisterStates { set; get; } public SelectList Languages { set; get; } public SelectList Specializations { set; get; } public int[] RegisterdStates { get { return _registeredStates; } set { _registeredStates = value; } } //public string RegisterdStates //{ // get // { // return _registeredStates; // } // set // { // _registeredStates = value; // } //} public int RoleId { get { return _roleid; } set { _roleid = value; } } public int SpeclisationId { get { return _speclisationid; } set { _speclisationid = value; } } public int[] Language { get { return _language; } set { _language = value; } } public int LocationTypeId { get { return _loctionType_lk; } set { _loctionType_lk = value; } } public string BrokarageCompany { get { return _brokaragecompany; } set { _brokaragecompany = value; } } public string Rolename { get { return _rolename; } set { _rolename = value; } } public int[] Specialization { get { return _specialization; } set { _specialization = value; } } public Agent() : this(new AgentDataHelper(), new RewhizzRelationalDataHelper()) { } public Agent(IRewhizzDataHelper repositary, IRewhizzRelationDataHelper relationrepositary) { irewhizz = repositary; irewhizzrelation = relationrepositary; } public void inviteclient() { //Code related to mailing } //DataHelperDataContext dataObj = new DataHelperDataContext(); //#region IRewhizzFactory Members //public List<object> getAgentInfo(string username) //{ // var res=dataObj.GetCompleteUserDetails(username); // return res.ToList(); // throw new NotImplementedException(); //} //public List<object> GetRegisterAgentData(string username) //{ // var res= dataObj.RegisteredUserdetails(username); // return res.ToList(); //} //public void saveAgentInfo(string username, string password, string firstname, string lastname, string middlename, string securityquestion, string securityQanswer) //{ // User userobj=new User(); // var result = dataObj.rw_Users_InsertUserInfo(firstname, middlename, lastname, dataObj.GetUserId(username), securityquestion, 
securityquestionanswer); // throw new NotImplementedException(); //} //#endregion public Agent updateData(Agent objectId) { objectId.UserID = irewhizzrelation.GetUserId(objectId.username); objectId = (Agent)irewhizz.updateData(objectId); return objectId; } public Agent GetAgentData(Agent agentodj) { agentodj.UserID = irewhizzrelation.GetUserId(agentodj.username); agentodj = (Agent)irewhizz.getData(agentodj); if (agentodj.RoleId != 0) agentodj.Rolename = (string)(string)irewhizzrelation.getValue(agentodj.RoleId); if (agentodj.RegisterdStates.Count() != 0) { List<SelectListItem> list = new List<SelectListItem>(); string regstates = ""; foreach (int i in agentodj.RegisterdStates) { SelectListItem listitem = new SelectListItem(); listitem.Value = i.ToString(); listitem.Text = (string)irewhizzrelation.getValue(i); list.Add(listitem); regstates += (string)irewhizzrelation.getValue(i) + ","; } SelectList selectlist = new SelectList(list, "Value", "Text"); agentodj.RegisterStates = selectlist; if(regstates!=null) agentodj.RegisteredStates = regstates.Remove(regstates.Length - 1); } if (agentodj.Language.Count() != 0) { List<SelectListItem> list = new List<SelectListItem>(); string selectedlang = ""; foreach (int i in agentodj.Language) { SelectListItem listitem = new SelectListItem(); listitem.Value = i.ToString(); listitem.Text = (string)irewhizzrelation.getValue(i); list.Add(listitem); selectedlang += (string)irewhizzrelation.getValue(i) + ","; } SelectList selectlist = new SelectList(list, "Value", "Text"); agentodj.Languages = selectlist; // agentodj.SelectedLanguages = selectedlang; } if (agentodj.Specialization.Count() != 0) { List<SelectListItem> list = new List<SelectListItem>(); string selectedspel = ""; foreach (int i in agentodj.Specialization) { SelectListItem listitem = new SelectListItem(); listitem.Value = i.ToString(); listitem.Text = (string)irewhizzrelation.getValue(i); list.Add(listitem); selectedspel += (string)irewhizzrelation.getValue(i) + ","; } SelectList selectlist = new SelectList(list, "Value", "Text"); agentodj.Specializations = selectlist; //agentodj.SelectedSpecialization = selectedspel; } return agentodj; } public void SaveImage(byte[] pic, String username) { irewhizzrelation.SaveImage(pic, username); } } now the issue is when ever i am calling agent class it is given error like null reference exception for rwuser class can any body give the solution thanks in advance
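    Without the stack trace it is hard to be certain, but two things in the code shown commonly produce exactly this kind of NullReferenceException: the Agent class re-declares its own private irewhizz / irewhizzrelation fields, so the base RwUser members and the Agent members end up talking to different helper instances, and the int[] properties (RegisterdStates, Language, Specialization) stay null until something assigns them, so calls such as agentodj.RegisterdStates.Count() in GetAgentData() dereference a null array. A stripped-down sketch of safer constructor chaining is below; the helper types are stubbed out purely so it compiles, the names come from the question, and the bodies are placeholders:
    // Stubs standing in for the real data-helper types from the project.
    public interface IRewhizzDataHelper { object getData(object obj); }
    public interface IRewhizzRelationDataHelper { System.Guid GetUserId(string username); }
    public class UserDataHelper : IRewhizzDataHelper { public object getData(object obj) { return obj; } }
    public class AgentDataHelper : IRewhizzDataHelper { public object getData(object obj) { return obj; } }
    public class RewhizzRelationalDataHelper : IRewhizzRelationDataHelper { public System.Guid GetUserId(string u) { return System.Guid.Empty; } }

    public class RwUser
    {
        // Declared once, protected, so derived classes reuse these instead of
        // hiding them behind their own private copies.
        protected IRewhizzDataHelper irewhizz;
        protected IRewhizzRelationDataHelper irewhizzrelation;

        public RwUser() : this(new UserDataHelper(), new RewhizzRelationalDataHelper()) { }

        public RwUser(IRewhizzDataHelper repositary, IRewhizzRelationDataHelper relationrepositary)
        {
            irewhizz = repositary;
            irewhizzrelation = relationrepositary;
        }
    }

    public class Agent : RwUser
    {
        public int[] RegisterdStates { get; set; }
        public int[] Language { get; set; }
        public int[] Specialization { get; set; }

        public Agent() : this(new AgentDataHelper(), new RewhizzRelationalDataHelper()) { }

        // Pass the dependencies up to the base constructor instead of assigning
        // them to a second, hidden pair of fields inside Agent.
        public Agent(IRewhizzDataHelper repositary, IRewhizzRelationDataHelper relationrepositary)
            : base(repositary, relationrepositary)
        {
            // Initialise the array members so later .Count() calls never hit null.
            RegisterdStates = new int[0];
            Language = new int[0];
            Specialization = new int[0];
        }
    }
    With the fields protected in RwUser, methods like saveUserInfo() and GetAgentData() all go through the same pair of helpers, which makes a remaining null reference much easier to track down.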

    Read the article

  • PHP mail with multiple attachments and message in HTML format [closed]

    - by Jason
    I am new to PHP, so please don't mind if my question is silly. I would to like to make a PHP to send email with numerous attachments and the message of the email will be in HTML format. <html> <body> <form action="mail.php" method="post"> <table> <tr> <td><label>Name:</label></td> <td><input type="text" name="name" /></td> </tr> <tr> <td><label>Your Email:</label></td> <td><input type="text" name="email" /></td> </tr> <tr> <td><label>Attachment:</label></td> <td><input type="file" name="Attach" /></td> </tr> <tr> <td><input type="submit" value="Submit" /></td> </tr> </table> </form> <script language="javascript"> $(document).ready(function() { $("form").submit(function(){ $.ajax({ type: "POST", url: 'mail.php', dataType: 'json', data: { name: $('#name').val(), email: $('#email').val(), Attach: $('#Attach').val(), }, success: function(json){ $(".error, .success").remove(); if (json['error']){ $("form").after(json['error']); } if (json['success']){ $("form").remove(); $(".leftColWrap").append(json['success']); } } }); return false; }); }); </script> </body> </html> This is my HTML for filing in the information. And below is the mail.php <?php session_cache_limiter('nocache'); header('Expires: ' . gmdate('r', 0)); header('Content-type: application/json'); $timeout = time()+60*60*24*30; setcookie(Form, date("F jS - g:i a"), $timeout); $name=$_POST['name']; $email=$_POST['email']; $to="[email protected]"; //*** Uniqid Session ***// $Sid = md5(uniqid(time())); $headers = ""; $headers .= "From: $email \n"; $headers .= "MIME-Version: 1.0\n"; $headers .= "Content-Type: multipart/mixed; boundary=\"".$Sid."\"\n\n"; $headers .= "This is a multi-part message in MIME format.\n"; $headers .= "--".$Sid."\n"; $headers .= "Content-type: text/html; charset=utf-8\n"; $headers .= "Content-Transfer-Encoding: 7bit\n\n"; //*** Attachment ***// if($_FILES["fileAttach"]["name"] != "") { $FilesName = $_FILES["fileAttach"]["name"]; $Content = chunk_split(base64_encode(file_get_contents($_FILES["fileAttach"]["tmp_name"]))); $headers .= "--".$Sid."\n"; $headers .= "Content-Type: application/octet-eam; name=\"".$FilesName."\"\n"; $headers .= "Content-Transfer-Encoding: base64\n"; $headers .= "Content-Disposition: attachment; filename=\"".$FilesName."\"\n\n"; $headers .= $Content."\n\n"; } $message_to=" <html><body> <table class='page-head' align='center' width='100%'> <tr> <td class='left'> <h1>ABC</h1></td> <td class='right' width='63'> <img src='http://xxx/images/logo.png' /></td> </tr> </table><br /><br /> $name ($email) has just sent you an e-mail. </body></html>"; $message_from="<html><body> <table class='page-head' align='center' width='100%'> <tr> <td class='left'> <h1>ABC</h1></td> <td class='right' width='63'> <img src='http://xxx/images/logo.png' /></td> </tr> </table><br /><br /> Thanks for sending the email. </body></html>"; if ($name == "" || $email == "") { $error = "<font color=\"red\">Please fill in all the required fields.</font>"; } elseif (isset($_COOKIE['Form'])) { $error = "You have already sent the email. Please try again later."; } else { mail($to,"A new email from: $name",$message_to,$headers); mail($email,"Thank you for send the email",$message_from,$headers); $success = "Emai sent successfully!"; } $json = array('error' => $error, 'success' => $success); print(json_encode($json)); ?> May someone give some advises on the code? Thanks a lot.
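    A quick note on the structure above: the attachment part (and the first MIME part marker) is appended to $headers, while $message_to / $message_from are passed to mail() as bodies that never mention the boundary, so the multipart layout the headers announce never matches what is actually sent. The form also posts the file as name="Attach" without enctype="multipart/form-data", yet the script reads $_FILES["fileAttach"], so no file ever reaches PHP (and jQuery's .val() cannot submit a file over a plain $.ajax post anyway). A rough sketch of assembling one multipart/mixed message, reusing the field names from the question - illustrative only, not a drop-in replacement:
    <?php
    // Outer headers only: the sender and the fact that the body is multipart/mixed.
    $boundary = md5(uniqid(time()));
    $headers  = "From: $email\r\n";
    $headers .= "MIME-Version: 1.0\r\n";
    $headers .= "Content-Type: multipart/mixed; boundary=\"$boundary\"\r\n";

    // First part of the *body*: the HTML message.
    $body  = "--$boundary\r\n";
    $body .= "Content-Type: text/html; charset=utf-8\r\n";
    $body .= "Content-Transfer-Encoding: 7bit\r\n\r\n";
    $body .= $message_to . "\r\n";

    // One more body part per attachment. The form needs
    // enctype="multipart/form-data" and the name here must match it ("Attach").
    if (!empty($_FILES['Attach']['name'])) {
        $fileName = $_FILES['Attach']['name'];
        $content  = chunk_split(base64_encode(file_get_contents($_FILES['Attach']['tmp_name'])));

        $body .= "--$boundary\r\n";
        $body .= "Content-Type: application/octet-stream; name=\"$fileName\"\r\n";
        $body .= "Content-Transfer-Encoding: base64\r\n";
        $body .= "Content-Disposition: attachment; filename=\"$fileName\"\r\n\r\n";
        $body .= $content . "\r\n";
    }

    $body .= "--$boundary--";

    mail($to, "A new email from: $name", $body, $headers);
    ?>
    If hand-rolling MIME gets unwieldy, a library such as PHPMailer takes care of boundaries, encodings and attachments and is usually the easier route.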

    Read the article

  • MySQL Syslog Audit Plugin

    - by jonathonc
    This post shows the construction process of the Syslog Audit plugin that was presented at MySQL Connect 2012. It is based on an environment that has the appropriate development tools enabled including gcc,g++ and cmake. It also assumes you have downloaded the MySQL source code (5.5.16 or higher) and have compiled and installed the system into the /usr/local/mysql directory ready for use.  The information provided below is designed to show the different components that make up a plugin, and specifically an audit type plugin, and how it comes together to be used within the MySQL service. The MySQL Reference Manual contains information regarding the plugin API and how it can be used, so please refer there for more detailed information. The code in this post is designed to give the simplest information necessary, so handling every return code, managing race conditions etc is not part of this example code. Let's start by looking at the most basic implementation of our plugin code as seen below: /*    Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.    Author:  Jonathon Coombes    Licence: GPL    Description: An auditing plugin that logs to syslog and                 can adjust the loglevel via the system variables. */ #include <stdio.h> #include <string.h> #include <mysql/plugin_audit.h> #include <syslog.h> There is a commented header detailing copyright/licencing and meta-data information and then the include headers. The two important include statements for our plugin are the syslog.h plugin, which gives us the structures for syslog, and the plugin_audit.h include which has details regarding the audit specific plugin api. Note that we do not need to include the general plugin header plugin.h, as this is done within the plugin_audit.h file already. To implement our plugin within the current implementation we need to add it into our source code and compile. > cd /usr/local/src/mysql-5.5.28/plugin > mkdir audit_syslog > cd audit_syslog A simple CMakeLists.txt file is created to manage the plugin compilation: MYSQL_ADD_PLUGIN(audit_syslog audit_syslog.cc MODULE_ONLY) Run the cmake  command at the top level of the source and then you can compile the plugin using the 'make' command. This results in a compiled audit_syslog.so library, but currently it is not much use to MySQL as there is no level of api defined to communicate with the MySQL service. Now we need to define the general plugin structure that enables MySQL to recognise the library as a plugin and be able to install/uninstall it and have it show up in the system. The structure is defined in the plugin.h file in the MySQL source code.  
/*   Plugin library descriptor */ mysql_declare_plugin(audit_syslog) {   MYSQL_AUDIT_PLUGIN,           /* plugin type                    */   &audit_syslog_descriptor,     /* descriptor handle               */   "audit_syslog",               /* plugin name                     */   "Author Name",                /* author                          */   "Simple Syslog Audit",        /* description                     */   PLUGIN_LICENSE_GPL,           /* licence                         */   audit_syslog_init,            /* init function     */   audit_syslog_deinit,          /* deinit function */   0x0001,                       /* plugin version                  */   NULL,                         /* status variables        */   NULL,                         /* system variables                */   NULL,                         /* no reserves                     */   0,                            /* no flags                        */ } mysql_declare_plugin_end; The general plugin descriptor above is standard for all plugin types in MySQL. The plugin type is defined along with the init/deinit functions and interface methods into the system for sharing information, and various other metadata information. The descriptors have an internally recognised version number so that plugins can be matched against the api on the running server. The other details are usually related to the type-specific methods and structures to implement the plugin. Each plugin has a type-specific descriptor as well which details how the plugin is implemented for the specific purpose of that plugin type. /*   Plugin type-specific descriptor */ static struct st_mysql_audit audit_syslog_descriptor= {   MYSQL_AUDIT_INTERFACE_VERSION,                        /* interface version    */   NULL,                                                 /* release_thd function */   audit_syslog_notify,                                  /* notify function      */   { (unsigned long) MYSQL_AUDIT_GENERAL_CLASSMASK |                     MYSQL_AUDIT_CONNECTION_CLASSMASK }  /* class mask           */ }; In this particular case, the release_thd function has not been defined as it is not required. The important method for auditing is the notify function which is activated when an event occurs on the system. The notify function is designed to activate on an event and the implementation will determine how it is handled. For the audit_syslog plugin, the use of the syslog feature sends all events to the syslog for recording. The class mask allows us to determine what type of events are being seen by the notify function. There are currently two major types of event: 1. General Events: This includes general logging, errors, status and result type events. This is the main one for tracking the queries and operations on the database. 2. Connection Events: This group is based around user logins. It monitors connections and disconnections, but also if somebody changes user while connected. With most audit plugins, the principle behind the plugin is to track changes to the system over time and counters can be an important part of this process. The next step is to define and initialise the counters that are used to track the events in the service. There are 3 counters defined in total for our plugin - the # of general events, the # of connection events and the total number of events.  
static volatile int total_number_of_calls; /* Count MYSQL_AUDIT_GENERAL_CLASS event instances */ static volatile int number_of_calls_general; /* Count MYSQL_AUDIT_CONNECTION_CLASS event instances */ static volatile int number_of_calls_connection; The init and deinit functions for the plugin are there to be called when the plugin is activated and when it is terminated. These offer the best option to initialise the counters for our plugin: /*  Initialize the plugin at server start or plugin installation. */ static int audit_syslog_init(void *arg __attribute__((unused))) {     openlog("mysql_audit:",LOG_PID|LOG_PERROR|LOG_CONS,LOG_USER);     total_number_of_calls= 0;     number_of_calls_general= 0;     number_of_calls_connection= 0;     return(0); } The init function does a call to openlog to initialise the syslog functionality. The parameters are the service to log under ("mysql_audit" in this case), the syslog flags and the facility for the logging. Then each of the counters are initialised to zero and a success is returned. If the init function is not defined, it will return success by default. /*  Terminate the plugin at server shutdown or plugin deinstallation. */ static int audit_syslog_deinit(void *arg __attribute__((unused))) {     closelog();     return(0); } The deinit function will simply close our syslog connection and return success. Note that the syslog functionality is part of the glibc libraries and does not require any external factors.  The function names are what we define in the general plugin structure, so these have to match otherwise there will be errors. The next step is to implement the event notifier function that was defined in the type specific descriptor (audit_syslog_descriptor) which is audit_syslog_notify. /* Event notifier function */ static void audit_syslog_notify(MYSQL_THD thd __attribute__((unused)), unsigned int event_class, const void *event) { total_number_of_calls++; if (event_class == MYSQL_AUDIT_GENERAL_CLASS) { const struct mysql_event_general *event_general= (const struct mysql_event_general *) event; number_of_calls_general++; syslog(audit_loglevel,"%lu: User: %s Command: %s Query: %s\n", event_general->general_thread_id, event_general->general_user, event_general->general_command, event_general->general_query ); } else if (event_class == MYSQL_AUDIT_CONNECTION_CLASS) { const struct mysql_event_connection *event_connection= (const struct mysql_event_connection *) event; number_of_calls_connection++; syslog(audit_loglevel,"%lu: User: %s@%s[%s] Event: %d Status: %d\n", event_connection->thread_id, event_connection->user, event_connection->host, event_connection->ip, event_connection->event_subclass, event_connection->status ); } }   In the case of an event, the notifier function is called. The first step is to increment the total number of events that have occurred in our database.The event argument is then cast into the appropriate event structure depending on the class type, of general event or connection event. The event type counters are incremented and details are sent via the syslog() function out to the system log. There are going to be different line formats and information returned since the general events have different data compared to the connection events, even though some of the details overlap, for example, user, thread id, host etc. On compiling the code now, there should be no errors and the resulting audit_syslog.so can be loaded into the server and ready to use. 
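    Before the INSTALL PLUGIN step that follows, the compiled library has to end up in the server's plugin directory. Assuming the layout described at the start of this post (source tree under /usr/local/src/mysql-5.5.28, server installed under /usr/local/mysql), the build-and-copy step looks roughly like this; the make target name and the plugin path are assumptions, so check SHOW VARIABLES LIKE 'plugin_dir' if your install differs:
    cd /usr/local/src/mysql-5.5.28
    cmake .
    make audit_syslog        # or simply 'make' to build everything
    cp plugin/audit_syslog/audit_syslog.so /usr/local/mysql/lib/plugin/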
Log into the server and type: mysql> INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so'; This will install the plugin and will start updating the syslog immediately. Note that the audit plugin attaches to the immediate thread and cannot be uninstalled while that thread is active. This means that you cannot run the UNISTALL command until you log into a different connection (thread) on the server. Once the plugin is loaded, the system log will show output such as the following: Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:21 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: INSTALL PLUGIN audit_syslog SONAME 'audit_syslog.so' Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: show tables Oct  8 15:33:40 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: show tables Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: (null)  Query: select * from t1 Oct  8 15:33:43 machine mysql_audit:[8337]: 87: User: root[root] @ localhost []  Command: Query  Query: select * from t1 It appears that two of each event is being shown, but in actuality, these are two separate event types - the result event and the status event. This could be refined further by changing the audit_syslog_notify function to handle the different event sub-types in a different manner.  So far, it seems that the logging is working with events showing up in the syslog output. The issue now is that the counters created earlier to track the number of events by type are not accessible when the plugin is being run. Instead there needs to be a way to expose the plugin specific information to the service and vice versa. This could be done via the information_schema plugin api, but for something as simple as counters, the obvious choice is the system status variables. This is done using the standard structure and the declaration: /*  Plugin status variables for SHOW STATUS */ static struct st_mysql_show_var audit_syslog_status[]= {   { "Audit_syslog_total_calls",     (char *) &total_number_of_calls,     SHOW_INT },   { "Audit_syslog_general_events",     (char *) &number_of_calls_general,     SHOW_INT },   { "Audit_syslog_connection_events",     (char *) &number_of_calls_connection,     SHOW_INT },   { 0, 0, SHOW_INT } };   The structure is simply the name that will be displaying in the mysql service, the address of the associated variables, and the data type being used for the counter. It is finished with a blank structure to show that there are no more variables. Remember that status variables may have the same name for variables from other plugin, so it is considered appropriate to add the plugin name at the start of the status variable name to avoid confusion. Looking at the status variables in the mysql client shows something like the following: mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 2     | | Audit_syslog_total_calls       | 3     | +--------------------------------+-------+ 3 rows in set (0.00 sec) The final connectivity piece for the plugin is to allow the interactive change of the logging level between the plugin and the system. 
This requires the ability to send changes via the mysql service through to the plugin. This is done using the system variables interface and defining a single variable to keep track of the active logging level for the facility. /* Plugin system variables for SHOW VARIABLES */ static MYSQL_SYSVAR_STR(loglevel, audit_loglevel,                         PLUGIN_VAR_RQCMDARG,                         "User can specify the log level for auditing",                         audit_loglevel_check, audit_loglevel_update, "LOG_NOTICE"); static struct st_mysql_sys_var* audit_syslog_sysvars[] = {     MYSQL_SYSVAR(loglevel),     NULL }; So now the system variable 'loglevel' is defined for the plugin and associated to the global variable 'audit_loglevel'. The check or validation function is defined to make sure that no garbage values are attempted in the update of the variable. The update function is used to save the new value to the variable. Note that the audit_syslog_sysvars structure is defined in the general plugin descriptor to associate the link between the plugin and the system and how much they interact. Next comes the implementation of the validation function and the update function for the system variable. It is worth noting that if you have a simple numeric such as integers for the variable types, the validate function is often not required as MySQL will handle the automatic check and validation of simple types. /* longest valid value */ #define MAX_LOGLEVEL_SIZE 100 /* hold the valid values */ static const char *possible_modes[]= { "LOG_ERROR", "LOG_WARNING", "LOG_NOTICE", NULL };  static int audit_loglevel_check(     THD*                        thd,    /*!< in: thread handle */     struct st_mysql_sys_var*    var,    /*!< in: pointer to system                                         variable */     void*                       save,   /*!< out: immediate result                                         for update function */     struct st_mysql_value*      value)  /*!< in: incoming string */ {     char buff[MAX_LOGLEVEL_SIZE];     const char *str;     const char **found;     int length;     length= sizeof(buff);     if (!(str= value->val_str(value, buff, &length)))         return 1;     /*         We need to return a pointer to a locally allocated value in "save".         Here we pick to search for the supplied value in an global array of         constant strings and return a pointer to one of them.         The other possiblity is to use the thd_alloc() function to allocate         a thread local buffer instead of the global constants.     */     for (found= possible_modes; *found; found++)     {         if (!strcmp(*found, str))         {             *(const char**)save= *found;             return 0;         }     }     return 1; } The validation function is simply to take the value being passed in via the SET GLOBAL VARIABLE command and check if it is one of the pre-defined values allowed  in our possible_values array. If it is found to be valid, then the value is assigned to the save variable ready for passing through to the update function. 
static void audit_loglevel_update(     THD*                        thd,        /*!< in: thread handle */     struct st_mysql_sys_var*    var,        /*!< in: system variable                                             being altered */     void*                       var_ptr,    /*!< out: pointer to                                             dynamic variable */     const void*                 save)       /*!< in: pointer to                                             temporary storage */ {     /* assign the new value so that the server can read it */     *(char **) var_ptr= *(char **) save;     /* assign the new value to the internal variable */     audit_loglevel= *(char **) save; } Since all the validation has been done already, the update function is quite simple for this plugin. The first part is to update the system variable pointer so that the server can read the value. The second part is to update our own global plugin variable for tracking the value. Notice that the save variable is passed in as a void type to allow handling of various data types, so it must be cast to the appropriate data type when assigning it to the variables. Looking at how the latest changes affect the usage of the plugin and the interaction within the server shows: mysql> show global variables like "audit%"; +-----------------------+------------+ | Variable_name         | Value      | +-----------------------+------------+ | audit_syslog_loglevel | LOG_NOTICE | +-----------------------+------------+ 1 row in set (0.00 sec) mysql> set global audit_syslog_loglevel="LOG_ERROR"; Query OK, 0 rows affected (0.00 sec) mysql> show global status like "audit%"; +--------------------------------+-------+ | Variable_name                  | Value | +--------------------------------+-------+ | Audit_syslog_connection_events | 1     | | Audit_syslog_general_events    | 11    | | Audit_syslog_total_calls       | 12    | +--------------------------------+-------+ 3 rows in set (0.00 sec) mysql> show global variables like "audit%"; +-----------------------+-----------+ | Variable_name         | Value     | +-----------------------+-----------+ | audit_syslog_loglevel | LOG_ERROR | +-----------------------+-----------+ 1 row in set (0.00 sec)   So now we have a plugin that will audit the events on the system and log the details to the system log. It allows for interaction to see the number of different events within the server details and provides a mechanism to change the logging level interactively via the standard system methods of the SET command. A more complex auditing plugin may have more detailed code, but each of the above areas is what will be involved and simply expanded on to add more functionality. With the above skeleton code, it is now possible to create your own audit plugins to implement your own auditing requirements. If, however, you are not of the coding persuasion, then you could always consider the option of the MySQL Enterprise Audit plugin that is available to purchase.
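    As a final convenience, instead of running INSTALL PLUGIN by hand the plugin can also be loaded from the configuration file, with the logging level preset at startup. Something along these lines in my.cnf should work on 5.5 (the exact option spelling is worth verifying against your version); the loose- prefix keeps mysqld starting even when the plugin library is absent:
    [mysqld]
    plugin-load=audit_syslog=audit_syslog.so
    loose-audit_syslog_loglevel=LOG_WARNING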

    Read the article

  • Silverlight Recruiting Application Part 5 - Jobs Module / View

    Now we starting getting into a more code-heavy portion of this series, thankfully though this means the groundwork is all set for the most part and after adding the modules we will have a complete application that can be provided with full source. The Jobs module will have two concerns- adding and maintaining jobs that can then be broadcast out to the website. How they are displayed on the site will be handled by our admin system (which will just poll from this common database), so we aren't too concerned with that, but rather with getting the information into the system and allowing the backend administration/HR users to keep things up to date. Since there is a fair bit of information that we want to display, we're going to move editing to a separate view so we can get all that information in an easy-to-use spot. With all the files created for this module, the project looks something like this: And now... on to the code. XAML for the Job Posting View All we really need for the Job Posting View is a RadGridView and a few buttons. This will let us both show off records and perform operations on the records without much hassle. That XAML is going to look something like this: 01.<Grid x:Name="LayoutRoot" 02.Background="White"> 03.<Grid.RowDefinitions> 04.<RowDefinition Height="30" /> 05.<RowDefinition /> 06.</Grid.RowDefinitions> 07.<StackPanel Orientation="Horizontal"> 08.<Button x:Name="xAddRecordButton" 09.Content="Add Job" 10.Width="120" 11.cal:Click.Command="{Binding AddRecord}" 12.telerik:StyleManager.Theme="Windows7" /> 13.<Button x:Name="xEditRecordButton" 14.Content="Edit Job" 15.Width="120" 16.cal:Click.Command="{Binding EditRecord}" 17.telerik:StyleManager.Theme="Windows7" /> 18.</StackPanel> 19.<telerikGrid:RadGridView x:Name="xJobsGrid" 20.Grid.Row="1" 21.IsReadOnly="True" 22.AutoGenerateColumns="False" 23.ColumnWidth="*" 24.RowDetailsVisibilityMode="VisibleWhenSelected" 25.ItemsSource="{Binding MyJobs}" 26.SelectedItem="{Binding SelectedJob, Mode=TwoWay}" 27.command:SelectedItemChangedEventClass.Command="{Binding SelectedItemChanged}"> 28.<telerikGrid:RadGridView.Columns> 29.<telerikGrid:GridViewDataColumn Header="Job Title" 30.DataMemberBinding="{Binding JobTitle}" 31.UniqueName="JobTitle" /> 32.<telerikGrid:GridViewDataColumn Header="Location" 33.DataMemberBinding="{Binding Location}" 34.UniqueName="Location" /> 35.<telerikGrid:GridViewDataColumn Header="Resume Required" 36.DataMemberBinding="{Binding NeedsResume}" 37.UniqueName="NeedsResume" /> 38.<telerikGrid:GridViewDataColumn Header="CV Required" 39.DataMemberBinding="{Binding NeedsCV}" 40.UniqueName="NeedsCV" /> 41.<telerikGrid:GridViewDataColumn Header="Overview Required" 42.DataMemberBinding="{Binding NeedsOverview}" 43.UniqueName="NeedsOverview" /> 44.<telerikGrid:GridViewDataColumn Header="Active" 45.DataMemberBinding="{Binding IsActive}" 46.UniqueName="IsActive" /> 47.</telerikGrid:RadGridView.Columns> 48.</telerikGrid:RadGridView> 49.</Grid> I'll explain what's happening here by line numbers: Lines 11 and 16: Using the same type of click commands as we saw in the Menu module, we tie the button clicks to delegate commands in the viewmodel. Line 25: The source for the jobs will be a collection in the viewmodel. Line 26: We also bind the selected item to a public property from the viewmodel for use in code. Line 27: We've turned the event into a command so we can handle it via code in the viewmodel. 
So those first three probably make sense to you as far as Silverlight/WPF binding magic is concerned, but for line 27... This actually comes from something I read onDamien Schenkelman's blog back in the day for creating an attached behavior from any event. So, any time you see me using command:Whatever.Command, the backing for it is actually something like this: SelectedItemChangedEventBehavior.cs: 01.public class SelectedItemChangedEventBehavior : CommandBehaviorBase<Telerik.Windows.Controls.DataControl> 02.{ 03.public SelectedItemChangedEventBehavior(DataControl element) 04.: base(element) 05.{ 06.element.SelectionChanged += new EventHandler<SelectionChangeEventArgs>(element_SelectionChanged); 07.} 08.void element_SelectionChanged(object sender, SelectionChangeEventArgs e) 09.{ 10.// We'll only ever allow single selection, so will only need item index 0 11.base.CommandParameter = e.AddedItems[0]; 12.base.ExecuteCommand(); 13.} 14.} SelectedItemChangedEventClass.cs: 01.public class SelectedItemChangedEventClass 02.{ 03.#region The Command Stuff 04.public static ICommand GetCommand(DependencyObject obj) 05.{ 06.return (ICommand)obj.GetValue(CommandProperty); 07.} 08.public static void SetCommand(DependencyObject obj, ICommand value) 09.{ 10.obj.SetValue(CommandProperty, value); 11.} 12.public static readonly DependencyProperty CommandProperty = 13.DependencyProperty.RegisterAttached("Command", typeof(ICommand), 14.typeof(SelectedItemChangedEventClass), new PropertyMetadata(OnSetCommandCallback)); 15.public static void OnSetCommandCallback(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e) 16.{ 17.DataControl element = dependencyObject as DataControl; 18.if (element != null) 19.{ 20.SelectedItemChangedEventBehavior behavior = GetOrCreateBehavior(element); 21.behavior.Command = e.NewValue as ICommand; 22.} 23.} 24.#endregion 25.public static SelectedItemChangedEventBehavior GetOrCreateBehavior(DataControl element) 26.{ 27.SelectedItemChangedEventBehavior behavior = element.GetValue(SelectedItemChangedEventBehaviorProperty) as SelectedItemChangedEventBehavior; 28.if (behavior == null) 29.{ 30.behavior = new SelectedItemChangedEventBehavior(element); 31.element.SetValue(SelectedItemChangedEventBehaviorProperty, behavior); 32.} 33.return behavior; 34.} 35.public static SelectedItemChangedEventBehavior GetSelectedItemChangedEventBehavior(DependencyObject obj) 36.{ 37.return (SelectedItemChangedEventBehavior)obj.GetValue(SelectedItemChangedEventBehaviorProperty); 38.} 39.public static void SetSelectedItemChangedEventBehavior(DependencyObject obj, SelectedItemChangedEventBehavior value) 40.{ 41.obj.SetValue(SelectedItemChangedEventBehaviorProperty, value); 42.} 43.public static readonly DependencyProperty SelectedItemChangedEventBehaviorProperty = 44.DependencyProperty.RegisterAttached("SelectedItemChangedEventBehavior", 45.typeof(SelectedItemChangedEventBehavior), typeof(SelectedItemChangedEventClass), null); 46.} These end up looking very similar from command to command, but in a nutshell you create a command based on any event, determine what the parameter for it will be, then execute. It attaches via XAML and ties to a DelegateCommand in the viewmodel, so you get the full event experience (since some controls get a bit event-rich for added functionality). Simple enough, right? 
Viewmodel for the Job Posting View The Viewmodel is going to need to handle all events going back and forth, maintaining interactions with the data we are using, and both publishing and subscribing to events. Rather than breaking this into tons of little pieces, I'll give you a nice view of the entire viewmodel and then hit up the important points line-by-line: 001.public class JobPostingViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005.public DelegateCommand<object> AddRecord { get; set; } 006.public DelegateCommand<object> EditRecord { get; set; } 007.public DelegateCommand<object> SelectedItemChanged { get; set; } 008.public RecruitingContext context; 009.private QueryableCollectionView _myJobs; 010.public QueryableCollectionView MyJobs 011.{ 012.get { return _myJobs; } 013.} 014.private QueryableCollectionView _selectionJobActionHistory; 015.public QueryableCollectionView SelectedJobActionHistory 016.{ 017.get { return _selectionJobActionHistory; } 018.} 019.private JobPosting _selectedJob; 020.public JobPosting SelectedJob 021.{ 022.get { return _selectedJob; } 023.set 024.{ 025.if (value != _selectedJob) 026.{ 027._selectedJob = value; 028.NotifyChanged("SelectedJob"); 029.} 030.} 031.} 032.public SubscriptionToken editToken = new SubscriptionToken(); 033.public SubscriptionToken addToken = new SubscriptionToken(); 034.public JobPostingViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 035.{ 036.// set Unity items 037.this.eventAggregator = eventAgg; 038.this.regionManager = regionmanager; 039.// load our context 040.context = new RecruitingContext(); 041.this._myJobs = new QueryableCollectionView(context.JobPostings); 042.context.Load(context.GetJobPostingsQuery()); 043.// set command events 044.this.AddRecord = new DelegateCommand<object>(this.AddNewRecord); 045.this.EditRecord = new DelegateCommand<object>(this.EditExistingRecord); 046.this.SelectedItemChanged = new DelegateCommand<object>(this.SelectedRecordChanged); 047.SetSubscriptions(); 048.} 049.#region DelegateCommands from View 050.public void AddNewRecord(object obj) 051.{ 052.this.eventAggregator.GetEvent<AddJobEvent>().Publish(true); 053.} 054.public void EditExistingRecord(object obj) 055.{ 056.if (_selectedJob == null) 057.{ 058.this.eventAggregator.GetEvent<NotifyUserEvent>().Publish("No job selected."); 059.} 060.else 061.{ 062.this._myJobs.EditItem(this._selectedJob); 063.this.eventAggregator.GetEvent<EditJobEvent>().Publish(this._selectedJob); 064.} 065.} 066.public void SelectedRecordChanged(object obj) 067.{ 068.if (obj.GetType() == typeof(ActionHistory)) 069.{ 070.// event bubbles up so we don't catch items from the ActionHistory grid 071.} 072.else 073.{ 074.JobPosting job = obj as JobPosting; 075.GrabHistory(job.PostingID); 076.} 077.} 078.#endregion 079.#region Subscription Declaration and Events 080.public void SetSubscriptions() 081.{ 082.EditJobCompleteEvent editComplete = eventAggregator.GetEvent<EditJobCompleteEvent>(); 083.if (editToken != null) 084.editComplete.Unsubscribe(editToken); 085.editToken = editComplete.Subscribe(this.EditCompleteEventHandler); 086.AddJobCompleteEvent addComplete = eventAggregator.GetEvent<AddJobCompleteEvent>(); 087.if (addToken != null) 088.addComplete.Unsubscribe(addToken); 089.addToken = addComplete.Subscribe(this.AddCompleteEventHandler); 090.} 091.public void EditCompleteEventHandler(bool complete) 092.{ 093.if (complete) 094.{ 095.JobPosting thisJob = 
_myJobs.CurrentEditItem as JobPosting; 096.this._myJobs.CommitEdit(); 097.this.context.SubmitChanges((s) => 098.{ 099.ActionHistory myAction = new ActionHistory(); 100.myAction.PostingID = thisJob.PostingID; 101.myAction.Description = String.Format("Job '{0}' has been edited by {1}", thisJob.JobTitle, "default user"); 102.myAction.TimeStamp = DateTime.Now; 103.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 104.} 105., null); 106.} 107.else 108.{ 109.this._myJobs.CancelEdit(); 110.} 111.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 112.} 113.public void AddCompleteEventHandler(JobPosting job) 114.{ 115.if (job == null) 116.{ 117.// do nothing, new job add cancelled 118.} 119.else 120.{ 121.this.context.JobPostings.Add(job); 122.this.context.SubmitChanges((s) => 123.{ 124.ActionHistory myAction = new ActionHistory(); 125.myAction.PostingID = job.PostingID; 126.myAction.Description = String.Format("Job '{0}' has been added by {1}", job.JobTitle, "default user"); 127.myAction.TimeStamp = DateTime.Now; 128.eventAggregator.GetEvent<AddActionEvent>().Publish(myAction); 129.} 130., null); 131.} 132.this.MakeMeActive(this.regionManager, "MainRegion", "JobPostingsView"); 133.} 134.#endregion 135.public void GrabHistory(int postID) 136.{ 137.context.ActionHistories.Clear(); 138._selectionJobActionHistory = new QueryableCollectionView(context.ActionHistories); 139.context.Load(context.GetHistoryForJobQuery(postID)); 140.} Taking it from the top, we're injecting an Event Aggregator and Region Manager for use down the road and also have the public DelegateCommands (just like in the Menu module). We also grab a reference to our context, which we'll obviously need for data, then set up a few fields with public properties tied to them. We're also setting subscription tokens, which we have not yet seen but I will get into below. The AddNewRecord (50) and EditExistingRecord (54) methods should speak for themselves for functionality, the one thing of note is we're sending events off to the Event Aggregator which some module, somewhere will take care of. Since these aren't entirely relying on one another, the Jobs View doesn't care if anyone is listening, but it will publish AddJobEvent (52), NotifyUserEvent (58) and EditJobEvent (63)regardless. Don't mind the GrabHistory() method so much, that is just grabbing history items (visibly being created in the SubmitChanges callbacks), and adding them to the database. Every action will trigger a history event, so we'll know who modified what and when, just in case. ;) So where are we at? Well, if we click to Add a job, we publish an event, if we edit a job, we publish an event with the selected record (attained through the magic of binding). Where is this all going though? To the Viewmodel, of course! 
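    One piece that is easy to miss before moving on: the AddJobEvent, EditJobEvent and related events being published and subscribed to above are never shown in this post. Assuming Prism 2 / the Composite Application Library (which the cal: prefix and the IEventAggregator usage suggest), they would just be empty CompositePresentationEvent<TPayload> subclasses whose payload types match what the viewmodels pass around - a sketch of what those definitions would look like, inferred from the Publish()/Subscribe() calls:
    using System.Collections.Generic;
    using Microsoft.Practices.Composite.Presentation.Events;

    // Payload types inferred from the calls in the viewmodels.
    public class AddJobEvent : CompositePresentationEvent<bool> { }
    public class EditJobEvent : CompositePresentationEvent<JobPosting> { }
    public class AddJobCompleteEvent : CompositePresentationEvent<JobPosting> { }
    public class EditJobCompleteEvent : CompositePresentationEvent<bool> { }
    public class NotifyUserEvent : CompositePresentationEvent<string> { }
    public class AddActionEvent : CompositePresentationEvent<ActionHistory> { }
    public class DisplayValidationErrorsEvent : CompositePresentationEvent<List<string>> { }
    Keeping these in a shared infrastructure project lets every module publish and subscribe without referencing the others directly, which is the whole point of the event aggregator.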
XAML for the AddEditJobView This is pretty straightforward except for one thing, noted below: 001.<Grid x:Name="LayoutRoot" 002.Background="White"> 003.<Grid x:Name="xEditGrid" 004.Margin="10" 005.validationHelper:ValidationScope.Errors="{Binding Errors}"> 006.<Grid.Background> 007.<LinearGradientBrush EndPoint="0.5,1" 008.StartPoint="0.5,0"> 009.<GradientStop Color="#FFC7C7C7" 010.Offset="0" /> 011.<GradientStop Color="#FFF6F3F3" 012.Offset="1" /> 013.</LinearGradientBrush> 014.</Grid.Background> 015.<Grid.RowDefinitions> 016.<RowDefinition Height="40" /> 017.<RowDefinition Height="40" /> 018.<RowDefinition Height="40" /> 019.<RowDefinition Height="100" /> 020.<RowDefinition Height="100" /> 021.<RowDefinition Height="100" /> 022.<RowDefinition Height="40" /> 023.<RowDefinition Height="40" /> 024.<RowDefinition Height="40" /> 025.</Grid.RowDefinitions> 026.<Grid.ColumnDefinitions> 027.<ColumnDefinition Width="150" /> 028.<ColumnDefinition Width="150" /> 029.<ColumnDefinition Width="300" /> 030.<ColumnDefinition Width="100" /> 031.</Grid.ColumnDefinitions> 032.<!-- Title --> 033.<TextBlock Margin="8" 034.Text="{Binding AddEditString}" 035.TextWrapping="Wrap" 036.Grid.Column="1" 037.Grid.ColumnSpan="2" 038.FontSize="16" /> 039.<!-- Data entry area--> 040. 041.<TextBlock Margin="8,0,0,0" 042.Style="{StaticResource LabelTxb}" 043.Grid.Row="1" 044.Text="Job Title" 045.VerticalAlignment="Center" /> 046.<TextBox x:Name="xJobTitleTB" 047.Margin="0,8" 048.Grid.Column="1" 049.Grid.Row="1" 050.Text="{Binding activeJob.JobTitle, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 051.Grid.ColumnSpan="2" /> 052.<TextBlock Margin="8,0,0,0" 053.Grid.Row="2" 054.Text="Location" 055.d:LayoutOverrides="Height" 056.VerticalAlignment="Center" /> 057.<TextBox x:Name="xLocationTB" 058.Margin="0,8" 059.Grid.Column="1" 060.Grid.Row="2" 061.Text="{Binding activeJob.Location, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 062.Grid.ColumnSpan="2" /> 063. 064.<TextBlock Margin="8,11,8,0" 065.Grid.Row="3" 066.Text="Description" 067.TextWrapping="Wrap" 068.VerticalAlignment="Top" /> 069. 070.<TextBox x:Name="xDescriptionTB" 071.Height="84" 072.TextWrapping="Wrap" 073.ScrollViewer.VerticalScrollBarVisibility="Auto" 074.Grid.Column="1" 075.Grid.Row="3" 076.Text="{Binding activeJob.Description, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 077.Grid.ColumnSpan="2" /> 078.<TextBlock Margin="8,11,8,0" 079.Grid.Row="4" 080.Text="Requirements" 081.TextWrapping="Wrap" 082.VerticalAlignment="Top" /> 083. 084.<TextBox x:Name="xRequirementsTB" 085.Height="84" 086.TextWrapping="Wrap" 087.ScrollViewer.VerticalScrollBarVisibility="Auto" 088.Grid.Column="1" 089.Grid.Row="4" 090.Text="{Binding activeJob.Requirements, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 091.Grid.ColumnSpan="2" /> 092.<TextBlock Margin="8,11,8,0" 093.Grid.Row="5" 094.Text="Qualifications" 095.TextWrapping="Wrap" 096.VerticalAlignment="Top" /> 097. 098.<TextBox x:Name="xQualificationsTB" 099.Height="84" 100.TextWrapping="Wrap" 101.ScrollViewer.VerticalScrollBarVisibility="Auto" 102.Grid.Column="1" 103.Grid.Row="5" 104.Text="{Binding activeJob.Qualifications, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}" 105.Grid.ColumnSpan="2" /> 106.<!-- Requirements Checkboxes--> 107. 
108.<CheckBox x:Name="xResumeRequiredCB" Margin="8,8,8,15" 109.Content="Resume Required" 110.Grid.Row="6" 111.Grid.ColumnSpan="2" 112.IsChecked="{Binding activeJob.NeedsResume, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 113. 114.<CheckBox x:Name="xCoverletterRequiredCB" Margin="8,8,8,15" 115.Content="Cover Letter Required" 116.Grid.Column="2" 117.Grid.Row="6" 118.IsChecked="{Binding activeJob.NeedsCV, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 119. 120.<CheckBox x:Name="xOverviewRequiredCB" Margin="8,8,8,15" 121.Content="Overview Required" 122.Grid.Row="7" 123.Grid.ColumnSpan="2" 124.IsChecked="{Binding activeJob.NeedsOverview, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 125. 126.<CheckBox x:Name="xJobActiveCB" Margin="8,8,8,15" 127.Content="Job is Active" 128.Grid.Column="2" 129.Grid.Row="7" 130.IsChecked="{Binding activeJob.IsActive, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> 131. 132.<!-- Buttons --> 133. 134.<Button x:Name="xAddEditButton" Margin="8,8,0,10" 135.Content="{Binding AddEditButtonString}" 136.cal:Click.Command="{Binding AddEditCommand}" 137.Grid.Column="2" 138.Grid.Row="8" 139.HorizontalAlignment="Left" 140.Width="125" 141.telerik:StyleManager.Theme="Windows7" /> 142. 143.<Button x:Name="xCancelButton" HorizontalAlignment="Right" 144.Content="Cancel" 145.cal:Click.Command="{Binding CancelCommand}" 146.Margin="0,8,8,10" 147.Width="125" 148.Grid.Column="2" 149.Grid.Row="8" 150.telerik:StyleManager.Theme="Windows7" /> 151.</Grid> 152.</Grid> The 'validationHelper:ValidationScope' line may seem odd. This is a handy little trick for catching current and would-be validation errors when working in this whole setup. This all comes from an approach found on theJoy Of Code blog, although it looks like the story for this will be changing slightly with new advances in SL4/WCF RIA Services, so this section can definitely get an overhaul a little down the road. The code is the fun part of all this, so let us see what's happening under the hood. Viewmodel for the AddEditJobView We are going to see some of the same things happening here, so I'll skip over the repeat info and get right to the good stuff: 001.public class AddEditJobViewModel : ViewModelBase 002.{ 003.private readonly IEventAggregator eventAggregator; 004.private readonly IRegionManager regionManager; 005. 006.public RecruitingContext context; 007. 008.private JobPosting _activeJob; 009.public JobPosting activeJob 010.{ 011.get { return _activeJob; } 012.set 013.{ 014.if (_activeJob != value) 015.{ 016._activeJob = value; 017.NotifyChanged("activeJob"); 018.} 019.} 020.} 021. 022.public bool isNewJob; 023. 024.private string _addEditString; 025.public string AddEditString 026.{ 027.get { return _addEditString; } 028.set 029.{ 030.if (_addEditString != value) 031.{ 032._addEditString = value; 033.NotifyChanged("AddEditString"); 034.} 035.} 036.} 037. 038.private string _addEditButtonString; 039.public string AddEditButtonString 040.{ 041.get { return _addEditButtonString; } 042.set 043.{ 044.if (_addEditButtonString != value) 045.{ 046._addEditButtonString = value; 047.NotifyChanged("AddEditButtonString"); 048.} 049.} 050.} 051. 052.public SubscriptionToken addJobToken = new SubscriptionToken(); 053.public SubscriptionToken editJobToken = new SubscriptionToken(); 054. 055.public DelegateCommand<object> AddEditCommand { get; set; } 056.public DelegateCommand<object> CancelCommand { get; set; } 057. 
058.private ObservableCollection<ValidationError> _errors = new ObservableCollection<ValidationError>(); 059.public ObservableCollection<ValidationError> Errors 060.{ 061.get { return _errors; } 062.} 063. 064.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 065.public ObservableCollection<ValidationResult> ValResults 066.{ 067.get { return this._valResults; } 068.} 069. 070.public AddEditJobViewModel(IEventAggregator eventAgg, IRegionManager regionmanager) 071.{ 072.// set Unity items 073.this.eventAggregator = eventAgg; 074.this.regionManager = regionmanager; 075. 076.context = new RecruitingContext(); 077. 078.AddEditCommand = new DelegateCommand<object>(this.AddEditJobCommand); 079.CancelCommand = new DelegateCommand<object>(this.CancelAddEditCommand); 080. 081.SetSubscriptions(); 082.} 083. 084.#region Subscription Declaration and Events 085. 086.public void SetSubscriptions() 087.{ 088.AddJobEvent addJob = this.eventAggregator.GetEvent<AddJobEvent>(); 089. 090.if (addJobToken != null) 091.addJob.Unsubscribe(addJobToken); 092. 093.addJobToken = addJob.Subscribe(this.AddJobEventHandler); 094. 095.EditJobEvent editJob = this.eventAggregator.GetEvent<EditJobEvent>(); 096. 097.if (editJobToken != null) 098.editJob.Unsubscribe(editJobToken); 099. 100.editJobToken = editJob.Subscribe(this.EditJobEventHandler); 101.} 102. 103.public void AddJobEventHandler(bool isNew) 104.{ 105.this.activeJob = null; 106.this.activeJob = new JobPosting(); 107.this.activeJob.IsActive = true; // We assume that we want a new job to go up immediately 108.this.isNewJob = true; 109.this.AddEditString = "Add New Job Posting"; 110.this.AddEditButtonString = "Add Job"; 111. 112.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 113.} 114. 115.public void EditJobEventHandler(JobPosting editJob) 116.{ 117.this.activeJob = null; 118.this.activeJob = editJob; 119.this.isNewJob = false; 120.this.AddEditString = "Edit Job Posting"; 121.this.AddEditButtonString = "Edit Job"; 122. 123.MakeMeActive(this.regionManager, "MainRegion", "AddEditJobView"); 124.} 125. 126.#endregion 127. 128.#region DelegateCommands from View 129. 130.public void AddEditJobCommand(object obj) 131.{ 132.if (this.Errors.Count > 0) 133.{ 134.List<string> errorMessages = new List<string>(); 135. 136.foreach (var valR in this.Errors) 137.{ 138.errorMessages.Add(valR.Exception.Message); 139.} 140. 141.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 142. 143.} 144.else if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 145.{ 146.List<string> errorMessages = new List<string>(); 147. 148.foreach (var valR in this._valResults) 149.{ 150.errorMessages.Add(valR.ErrorMessage); 151.} 152. 153.this._valResults.Clear(); 154. 155.this.eventAggregator.GetEvent<DisplayValidationErrorsEvent>().Publish(errorMessages); 156.} 157.else 158.{ 159.if (this.isNewJob) 160.{ 161.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(this.activeJob); 162.} 163.else 164.{ 165.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(true); 166.} 167.} 168.} 169. 170.public void CancelAddEditCommand(object obj) 171.{ 172.if (this.isNewJob) 173.{ 174.this.eventAggregator.GetEvent<AddJobCompleteEvent>().Publish(null); 175.} 176.else 177.{ 178.this.eventAggregator.GetEvent<EditJobCompleteEvent>().Publish(false); 179.} 180.} 181. 
182.#endregion 183.} 184.} We start seeing something new on line 103 - the AddJobEventHandler will create a new job and set that to the activeJob item on the ViewModel. When this is all set, the view calls that familiar MakeMeActive method to activate itself. I made a bit of a management call on making views self-activate like this, but I figured it works for one reason. As I create this application, views may not exist that I have in mind, so after a view receives its 'ping' from being subscribed to an event, it prepares whatever it needs to do and then goes active. This way if I don't have 'edit' hooked up, I can click as the day is long on the main view and won't get lost in an empty region. Total personal preference here. :) Everything else should again be pretty straightforward, although I do a bit of validation checking in the AddEditJobCommand, which can either fire off an event back to the main view/viewmodel if everything is a success or send a list of errors to our notification module, which pops open a RadWindow with the alerts if any exist. As a bonus side note, here's what my WCF RIA Services metadata looks like for handling all of the validation:

private JobPostingMetadata() { }

[StringLength(2500, ErrorMessage = "Description should be more than one and less than 2500 characters.", MinimumLength = 1)]
[Required(ErrorMessage = "Description is required.")]
public string Description;

[Required(ErrorMessage = "Active Status is Required")]
public bool IsActive;

[StringLength(100, ErrorMessage = "Posting title must be more than 3 but less than 100 characters.", MinimumLength = 3)]
[Required(ErrorMessage = "Job Title is required.")]
public string JobTitle;

[Required]
public string Location;

public bool NeedsCV;
public bool NeedsOverview;
public bool NeedsResume;
public int PostingID;

[Required(ErrorMessage = "Qualifications are required.")]
[StringLength(2500, ErrorMessage = "Qualifications should be more than one and less than 2500 characters.", MinimumLength = 1)]
public string Qualifications;

[StringLength(2500, ErrorMessage = "Requirements should be more than one and less than 2500 characters.", MinimumLength = 1)]
[Required(ErrorMessage = "Requirements are required.")]
public string Requirements;

The RecruitCB Alternative See all that Xaml I pasted above? Those are now two pieces sitting in the JobsView.xaml file; the only real difference is that the xEditGrid now sits in the same place as xJobsGrid, with visibility swapping between the two for a quick switch. I also took out all the cal: and command: references, replaced the Button commands with Click events, and replaced the Grid selection command with a SelectedItemChanged event. Also, at the bottom of the xEditGrid after the last button, I added a ValidationSummary (with Visibility=Collapsed) to catch any errors that are popping up.
Simple as can be, and leads to this being the single code-behind file: 001.public partial class JobsView : UserControl 002.{ 003.public RecruitingContext context; 004.public JobPosting activeJob; 005.public bool isNew; 006.private ObservableCollection<ValidationResult> _valResults = new ObservableCollection<ValidationResult>(); 007.public ObservableCollection<ValidationResult> ValResults 008.{ 009.get { return this._valResults; } 010.} 011.public JobsView() 012.{ 013.InitializeComponent(); 014.this.Loaded += new RoutedEventHandler(JobsView_Loaded); 015.} 016.void JobsView_Loaded(object sender, RoutedEventArgs e) 017.{ 018.context = new RecruitingContext(); 019.xJobsGrid.ItemsSource = context.JobPostings; 020.context.Load(context.GetJobPostingsQuery()); 021.} 022.private void xAddRecordButton_Click(object sender, RoutedEventArgs e) 023.{ 024.activeJob = new JobPosting(); 025.isNew = true; 026.xAddEditTitle.Text = "Add a Job Posting"; 027.xAddEditButton.Content = "Add"; 028.xEditGrid.DataContext = activeJob; 029.HideJobsGrid(); 030.} 031.private void xEditRecordButton_Click(object sender, RoutedEventArgs e) 032.{ 033.activeJob = xJobsGrid.SelectedItem as JobPosting; 034.isNew = false; 035.xAddEditTitle.Text = "Edit a Job Posting"; 036.xAddEditButton.Content = "Edit"; 037.xEditGrid.DataContext = activeJob; 038.HideJobsGrid(); 039.} 040.private void xAddEditButton_Click(object sender, RoutedEventArgs e) 041.{ 042.if (!Validator.TryValidateObject(this.activeJob, new ValidationContext(this.activeJob, null, null), _valResults, true)) 043.{ 044.List<string> errorMessages = new List<string>(); 045.foreach (var valR in this._valResults) 046.{ 047.errorMessages.Add(valR.ErrorMessage); 048.} 049.this._valResults.Clear(); 050.ShowErrors(errorMessages); 051.} 052.else if (xSummary.Errors.Count > 0) 053.{ 054.List<string> errorMessages = new List<string>(); 055.foreach (var err in xSummary.Errors) 056.{ 057.errorMessages.Add(err.Message); 058.} 059.ShowErrors(errorMessages); 060.} 061.else 062.{ 063.if (this.isNew) 064.{ 065.context.JobPostings.Add(activeJob); 066.context.SubmitChanges((s) => 067.{ 068.ActionHistory thisAction = new ActionHistory(); 069.thisAction.PostingID = activeJob.PostingID; 070.thisAction.Description = String.Format("Job '{0}' has been added by {1}", activeJob.JobTitle, "default user"); 071.thisAction.TimeStamp = DateTime.Now; 072.context.ActionHistories.Add(thisAction); 073.context.SubmitChanges(); 074.}, null); 075.} 076.else 077.{ 078.context.SubmitChanges((s) => 079.{ 080.ActionHistory thisAction = new ActionHistory(); 081.thisAction.PostingID = activeJob.PostingID; 082.thisAction.Description = String.Format("Job '{0}' has been edited by {1}", activeJob.JobTitle, "default user"); 083.thisAction.TimeStamp = DateTime.Now; 084.context.ActionHistories.Add(thisAction); 085.context.SubmitChanges(); 086.}, null); 087.} 088.ShowJobsGrid(); 089.} 090.} 091.private void xCancelButton_Click(object sender, RoutedEventArgs e) 092.{ 093.ShowJobsGrid(); 094.} 095.private void ShowJobsGrid() 096.{ 097.xAddEditRecordButtonPanel.Visibility = Visibility.Visible; 098.xEditGrid.Visibility = Visibility.Collapsed; 099.xJobsGrid.Visibility = Visibility.Visible; 100.} 101.private void HideJobsGrid() 102.{ 103.xAddEditRecordButtonPanel.Visibility = Visibility.Collapsed; 104.xJobsGrid.Visibility = Visibility.Collapsed; 105.xEditGrid.Visibility = Visibility.Visible; 106.} 107.private void ShowErrors(List<string> errorList) 108.{ 109.string nm = "Errors received: \n"; 110.foreach (string anerror in
errorList) 111.nm += anerror + "\n"; 112.RadWindow.Alert(nm); 113.} 114.} The first 39 lines should be pretty familiar, not doing anything too unorthodox to get this up and running. Once we hit the xAddEditButton_Click on line 40, we're still doing pretty much the same things, except that instead of checking the ValidationHelper errors, we validate the current activeJob object directly and also check the ValidationSummary errors list. Once that is set, we again use the callback of context.SubmitChanges (lines 68 and 78) to create an ActionHistory which we will use to track these items down the line. That's all? Essentially... yes. If you look back through this post, most of the code and adventures we have taken were just to get things working in the MVVM/Prism setup. Since I have the whole 'module' self-contained in a single JobView+code-behind setup, I don't have to worry about things like sending events off into space for someone to pick up, communicating through an Infrastructure project, or even re-inventing events to be used with attached behaviors. Everything just kinda works, and again with much less code. Here's a picture of the MVVM and Code-behind versions on the Jobs and AddEdit views; since the functionality is the same in both apps, you still cannot tell them apart (that makes two): Looking ahead, the Applicants module is effectively the same thing as the Jobs module, so most of the code is being cut-and-pasted back and forth with minor tweaks here and there. So that one is being taken care of by me behind the scenes. Next time, we get into a new world of fun: the interview scheduling module, which will pull from available jobs and applicants for each interview being scheduled, tying everything together with RadScheduler to the rescue.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

Table 1 — Area of improvement and performance gains of 10 codes
(columns: cache performance, redundant operations, loop structures, performance improvement)
 1   x x   15.5
 2   x     2.8
 3   x x   2.5
 4   x     2.1
 5   x x   2.0
 6   x     5.0
 7   x     5.8
 8   x     6.3
 9         2.2
10   x x   3.3

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
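To make "cache order" concrete before getting into the individual codes, here is a tiny C sketch of the same traversal done both ways (this fragment is my own illustration, not code from any of the ten programs; the array name and size are invented):

#include <stddef.h>

#define N 1024
static double a[N][N];

/* Wrong order for C: the inner loop walks down a column, so consecutive
   accesses are N*8 bytes apart and almost every one misses the cache. */
double sum_column_order(void)
{
    double sum = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

/* Cache order for C: the inner loop walks along a row, so accesses are
   consecutive in memory and each 64-byte line is fully used once loaded. */
double sum_row_order(void)
{
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

In Fortran the rule is mirrored: arrays are stored in column order, so the leftmost subscript should vary fastest in the innermost loop.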
6 out of the 10 codes studied here benefited from such high level optimizations.

Array Accesses

The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

do I = 0, 1010, delta_x
  IM = I - delta_x
  IP = I + delta_x
  do J = 5, 995, delta_x
    JM = J - delta_x
    JP = J + delta_x
    T1 = CA1(IP, J) + CA1(I, JP)
    T2 = CA1(IM, J) + CA1(I, JM)
    S1 = T1 + T2 - 4 * CA1(I, J)
    CA(I, J) = CA1(I, J) + D * S1
  end do
end do

In code 2, the culprit is conditionals:

do I = 1, N
  do J = 1, N
    If (IFLAG(I,J) .EQ. 0) then
      T1 = Value(I, J-1)
      T2 = Value(I-1, J)
      T3 = Value(I, J)
      T4 = Value(I+1, J)
      T5 = Value(I, J+1)
      Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
      Delta = ABS(T3 - Value(I,J))
      If (Delta .GT. MaxDelta) MaxDelta = Delta
    endif
  enddo
enddo

I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is:

L1: for i      L2: for i      L3: for i
      for l        for l        for j
      for k        for j        for k
      for j        for k        for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

Array Strides

When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
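Before looking at the two codes, here is a small sketch of why a non-unit stride is expensive (my own illustration, not code from the study):

#include <stddef.h>

/* With 8-byte doubles and 64-byte cache lines, stride 1 uses all 8 words of
   every line that is loaded; stride 8 loads an entire line for each single
   word that is actually used, so roughly 8x the memory traffic for the
   same amount of arithmetic. */
double strided_sum(const double *x, size_t n, size_t stride)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i += stride)
        sum += x[i];
    return sum;
}

The two fixes described next—compressing coarse grids into contiguous storage and gathering selected items into temporary arrays—are both ways of turning a strided or irregular access pattern back into a stride-1 one.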
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes:

do j = 1, GZ
  do i = 1, GZ
    T1 = CA(i+0, j-1) + CA(i-1, j+0)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
  enddo
enddo

where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

Data reuse

In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

do J = 1, GZ-2, 2
  do I = 1, GZ-2, 2
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
    T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
    T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
    T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    S2 = T2 + T5 - 4 * CA1(i+1, j+0)
    S3 = T3 + T6 - 4 * CA1(i+0, j+1)
    S4 = T4 + T7 - 4 * CA1(i+1, j+1)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
    CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
    CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
    CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
  enddo
enddo

The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

for (k = 0; k < NK[u]; k++) {
  sum = 0.0;
  for (y = 0; y < NY; y++) {
    sum += W[y][u][k] * delta[y];
  }
  backprop[i++]=sum;
}

and the after, for one of the loops unrolled 8 times:

for (k = 0; k < KK - 8; k+=8) {
  sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
  sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
  for (y = 0; y < NY; y++) {
    sum0 += W[y][0][k+0] * delta[y];
    sum1 += W[y][0][k+1] * delta[y];
    sum2 += W[y][0][k+2] * delta[y];
    sum3 += W[y][0][k+3] * delta[y];
    sum4 += W[y][0][k+4] * delta[y];
    sum5 += W[y][0][k+5] * delta[y];
    sum6 += W[y][0][k+6] * delta[y];
    sum7 += W[y][0][k+7] * delta[y];
  }
  backprop[k+0] = sum0;
  backprop[k+1] = sum1;
  backprop[k+2] = sum2;
  backprop[k+3] = sum3;
  backprop[k+4] = sum4;
  backprop[k+5] = sum5;
  backprop[k+6] = sum6;
  backprop[k+7] = sum7;
}

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

for (y = 0; y < NY; y++) {
  i = 0;
  for (u = 0; u < NU; u++) {
    for (k = 0; k < NK[u]; k++) {
      dW[y][u][k] += delta[y] * I1[i++];
    }
  }
}

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
In reality, dW and delta do not overlap in memory, so I rewrote the loop as:

for (y = 0; y < NY; y++) {
  i = 0;
  Dy = delta[y];
  for (u = 0; u < NU; u++) {
    for (k = 0; k < NK[u]; k++) {
      dW[y][u][k] += Dy * I1[i++];
    }
  }
}

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i]. A similar problem appears in code 6, a Fortran program. The key loop in this program is:

do n1 = 1, nh
  nx1 = (n1 - 1) / nz + 1
  nz1 = n1 - nz * (nx1 - 1)
  do n2 = 1, nh
    nx2 = (n2 - 1) / nz + 1
    nz2 = n2 - nz * (nx2 - 1)
    ndx = nx2 - nx1
    ndy = nz2 - nz1
    gxx = grn(1,ndx,ndy)
    gyy = grn(2,ndx,ndy)
    gxy = grn(3,ndx,ndy)
    balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
    balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
  end do
end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling:

for (i = 0; i < N; i+=8) {
  for (j = 0; j < M; j++) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (k = 0; k < K; k++) {
      sum0 += A[i+0][k] * B[j][k];
      sum1 += A[i+1][k] * B[j][k];
      sum2 += A[i+2][k] * B[j][k];
      sum3 += A[i+3][k] * B[j][k];
      sum4 += A[i+4][k] * B[j][k];
      sum5 += A[i+5][k] * B[j][k];
      sum6 += A[i+6][k] * B[j][k];
      sum7 += A[i+7][k] * B[j][k];
    }
    C[i+0][j] = sum0;
    C[i+1][j] = sum1;
    C[i+2][j] = sum2;
    C[i+3][j] = sum3;
    C[i+4][j] = sum4;
    C[i+5][j] = sum5;
    C[i+6][j] = sum6;
    C[i+7][j] = sum7;
  }
}

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

for (j = 0; j < N; j++) {
  for (i = 0; i < M; i++) {
    r = i * hrmax;
    R = A[j];
    temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
    high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
    low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
    kap = (R > PRM[6]) ?
          high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) :
          low * pow(R/PRM[6], PRM[5]);
    < rest of loop omitted >
  }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

for (i = 0; i < M; i++) {
  r = i * hrmax;
  TEMP[i] = pow(r, PRM[3]);
}

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is:

for (j = 0; j < N; j++) {
  R = rig[j] / 1000.;
  tmp1 = kcoeff * par[2] * beta[j] * par[4];
  tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
  tmp3 = 1.0 + (par[4] * par[4] * R * R);
  tmp4 = par[6] * par[6] / tmp2;
  tmp5 = R * R / tmp3;
  tmp6 = pow(R / par[6], par[5]);
  if ((par[3] == 0.0) && (R > par[6])) {
    for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5;
  }
  else if ((par[3] == 0.0) && (R <= par[6])) {
    for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6;
  }
  else if ((par[3] != 0.0) && (R > par[6])) {
    for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5;
  }
  else if ((par[3] != 0.0) && (R <= par[6])) {
    for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
  }
  for (i = 0; i < M; i++) {
    kap = KAP[i];
    r = i * hrmax;
    < rest of loop omitted >
  }
}

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as:

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
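As a side note to the Copy operations section above, the pointer-swap idea is trivial to express in C; here is a minimal sketch (my own illustration, not code from the study; relax() is a made-up stand-in for the real computation):

#include <stddef.h>

void relax(const double *old, double *new_, size_t n);  /* hypothetical compute step */

void iterate(double *a, double *b, size_t n, int steps)
{
    double *old = a, *new_ = b;
    for (int s = 0; s < steps; s++) {
        relax(old, new_, n);   /* writes the new grid from the old one */
        double *tmp = old;     /* an O(1) pointer swap replaces an O(n) array copy */
        old = new_;
        new_ = tmp;
    }
}

The (N, N, 2) trick described above buys Fortran programmers the same effect without explicit pointers.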
Returning to loop splitting, I found such an occasion in code 10, where I split the loop:

do i = 1, n
  do j = 1, m
    A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops:

do i = 1, n
  do j = 1, m
    A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three- or four-day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • SPARC T5-4 LDoms for RAC and WebLogic Clusters

    - by Jeff Taylor-Oracle
    I wanted to use two Oracle SPARC T5-4 servers to simultaneously host both Oracle RAC and a WebLogic Server Cluster. I chose to use Oracle VM Server for SPARC to create a cluster like this: There are plenty of trade offs and decisions that need to be made, for example: Rather than configuring the system by hand, you might want to use an Oracle SuperCluster T5-8 My configuration is similar to jsavit's: Availability Best Practices - Example configuring a T5-8 but I chose to ignore some of the advice. Maybe I should have included an  alternate service domain, but I decided that I already had enough redundancy Both Oracle SPARC T5-4 servers were to be configured like this: Cntl 0.25  4  64GB                     App LDom                    2.75 CPU's                                        44 cores                                          704 GB              DB LDom      One CPU         16 cores         256 GB   The systems started with everything in the primary domain: # ldm list NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  NORM  UPTIME primary          active     -n-c--  UART    512   1023G    0.0%  0.0%  11m # ldm list-spconfig factory-default [current] primary # ldm list -o core,memory,physio NAME              primary           CORE     CID    CPUSET     0      (0, 1, 2, 3, 4, 5, 6, 7)     1      (8, 9, 10, 11, 12, 13, 14, 15)     2      (16, 17, 18, 19, 20, 21, 22, 23) -- SNIP     62     (496, 497, 498, 499, 500, 501, 502, 503)     63     (504, 505, 506, 507, 508, 509, 510, 511) MEMORY     RA               PA               SIZE                 0x30000000       0x30000000       255G     0x80000000000    0x80000000000    256G     0x100000000000   0x100000000000   256G     0x180000000000   0x180000000000   256G # Give this memory block to the DB LDom IO     DEVICE                           PSEUDONYM        OPTIONS     pci@300                          pci_0                pci@340                          pci_1                pci@380                          pci_2                pci@3c0                          pci_3                pci@400                          pci_4                pci@440                          pci_5                pci@480                          pci_6                pci@4c0                          pci_7                pci@300/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE1     pci@300/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE2     pci@300/pci@1/pci@0/pci@4/pci@0/pci@c /SYS/MB/SASHBA0     pci@300/pci@1/pci@0/pci@4/pci@0/pci@8 /SYS/RIO/NET0        pci@340/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE3     pci@340/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE4     pci@380/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE9     pci@380/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE10     pci@3c0/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE11     pci@3c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE12     pci@400/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE5     pci@400/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE6     pci@440/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE7     pci@440/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE8     pci@480/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE13     pci@480/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE14     pci@4c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE15     pci@4c0/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE16     pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c /SYS/MB/SASHBA1     pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4 /SYS/RIO/NET2    Added an additional service processor configuration: # ldm add-spconfig split # ldm list-spconfig factory-default primary split [current] And removed many of the 
resources from the primary domain:

# ldm start-reconf primary
# ldm set-core 4 primary
# ldm set-memory 32G primary
# ldm rm-io pci@340 primary
# ldm rm-io pci@380 primary
# ldm rm-io pci@3c0 primary
# ldm rm-io pci@400 primary
# ldm rm-io pci@440 primary
# ldm rm-io pci@480 primary
# ldm rm-io pci@4c0 primary
# init 6

Needed to add resources to the guest domains:

# ldm add-domain db
# ldm set-core cid=`seq -s"," 48 63` db
# ldm add-memory mblock=0x180000000000:256G db
# ldm add-io pci@480 db
# ldm add-io pci@4c0 db
# ldm add-domain app
# ldm set-core 44 app
# ldm set-memory 704G app
# ldm add-io pci@340 app
# ldm add-io pci@380 app
# ldm add-io pci@3c0 app
# ldm add-io pci@400 app
# ldm add-io pci@440 app

Needed to set up services:

# ldm add-vds primary-vds0 primary
# ldm add-vcc port-range=5000-5100 primary-vcc0 primary

Needed to add a virtual network port for the WebLogic application domain:

# ipadm
NAME              CLASS/TYPE STATE        UNDER      ADDR
lo0               loopback   ok           --         --
   lo0/v4         static     ok           --         ...
   lo0/v6         static     ok           --         ...
net0              ip         ok           --         ...
   net0/v4        static     ok           --         xxx.xxx.xxx.xxx/24
   net0/v6        addrconf   ok           --         ....
   net0/v6        addrconf   ok           --         ...
net8              ip         ok           --         --
   net8/v4        static     ok           --         ...

# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net1              Ethernet             unknown    0      unknown   ixgbe1
net0              Ethernet             up         1000   full      ixgbe0
net8              Ethernet             up         10     full      usbecm2

# ldm add-vsw net-dev=net0 primary-vsw0 primary
# ldm add-vnet vnet1 primary-vsw0 app

Needed to add a virtual disk to the WebLogic application domain:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t5000CCA02505F874d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>
          /scsi_vhci/disk@g5000cca02505f874
          /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD0/disk
       1. c0t5000CCA02506C468d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>
          /scsi_vhci/disk@g5000cca02506c468
          /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD1/disk
       2. c0t5000CCA025067E5Cd0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>
          /scsi_vhci/disk@g5000cca025067e5c
          /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD2/disk
       3. c0t5000CCA02506C258d0 <HITACHI-H106060SDSUN600G-A2B0-558.91GB>
          /scsi_vhci/disk@g5000cca02506c258
          /dev/chassis/SPARC_T5-4.AK00084038/SYS/SASBP0/HDD3/disk
Specify disk (enter its number): ^C

# ldm add-vdsdev /dev/dsk/c0t5000CCA02506C468d0s2 HDD1@primary-vds0
# ldm add-vdisk HDD1 HDD1@primary-vds0 app

Add some additional spice to the pot:

# ldm set-variable auto-boot\?=false db
# ldm set-variable auto-boot\?=false app
# ldm set-var boot-device=HDD1 app

Bind the logical domains:

# ldm bind db
# ldm bind app

At the end of the process, the system is set up like this:

# ldm list -o core,memory,physio
NAME
primary

CORE
    CID    CPUSET
    0      (0, 1, 2, 3, 4, 5, 6, 7)
    1      (8, 9, 10, 11, 12, 13, 14, 15)
    2      (16, 17, 18, 19, 20, 21, 22, 23)
    3      (24, 25, 26, 27, 28, 29, 30, 31)

MEMORY
    RA               PA               SIZE
    0x30000000       0x30000000       32G

IO
    DEVICE                           PSEUDONYM        OPTIONS
    pci@300                          pci_0
    pci@300/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE1
    pci@300/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE2
    pci@300/pci@1/pci@0/pci@4/pci@0/pci@c /SYS/MB/SASHBA0
    pci@300/pci@1/pci@0/pci@4/pci@0/pci@8 /SYS/RIO/NET0

------------------------------------------------------------------------------
NAME
app

CORE
    CID    CPUSET
    4      (32, 33, 34, 35, 36, 37, 38, 39)
    5      (40, 41, 42, 43, 44, 45, 46, 47)
    6      (48, 49, 50, 51, 52, 53, 54, 55)
    7      (56, 57, 58, 59, 60, 61, 62, 63)
    8      (64, 65, 66, 67, 68, 69, 70, 71)
    9      (72, 73, 74, 75, 76, 77, 78, 79)
    10     (80, 81, 82, 83, 84, 85, 86, 87)
    11     (88, 89, 90, 91, 92, 93, 94, 95)
    12     (96, 97, 98, 99, 100, 101, 102, 103)
    13     (104, 105, 106, 107, 108, 109, 110, 111)
    14     (112, 113, 114, 115, 116, 117, 118, 119)
    15     (120, 121, 122, 123, 124, 125, 126, 127)
    16     (128, 129, 130, 131, 132, 133, 134, 135)
    17     (136, 137, 138, 139, 140, 141, 142, 143)
    18     (144, 145, 146, 147, 148, 149, 150, 151)
    19     (152, 153, 154, 155, 156, 157, 158, 159)
    20     (160, 161, 162, 163, 164, 165, 166, 167)
    21     (168, 169, 170, 171, 172, 173, 174, 175)
    22     (176, 177, 178, 179, 180, 181, 182, 183)
    23     (184, 185, 186, 187, 188, 189, 190, 191)
    24     (192, 193, 194, 195, 196, 197, 198, 199)
    25     (200, 201, 202, 203, 204, 205, 206, 207)
    26     (208, 209, 210, 211, 212, 213, 214, 215)
    27     (216, 217, 218, 219, 220, 221, 222, 223)
    28     (224, 225, 226, 227, 228, 229, 230, 231)
    29     (232, 233, 234, 235, 236, 237, 238, 239)
    30     (240, 241, 242, 243, 244, 245, 246, 247)
    31     (248, 249, 250, 251, 252, 253, 254, 255)
    32     (256, 257, 258, 259, 260, 261, 262, 263)
    33     (264, 265, 266, 267, 268, 269, 270, 271)
    34     (272, 273, 274, 275, 276, 277, 278, 279)
    35     (280, 281, 282, 283, 284, 285, 286, 287)
    36     (288, 289, 290, 291, 292, 293, 294, 295)
    37     (296, 297, 298, 299, 300, 301, 302, 303)
    38     (304, 305, 306, 307, 308, 309, 310, 311)
    39     (312, 313, 314, 315, 316, 317, 318, 319)
    40     (320, 321, 322, 323, 324, 325, 326, 327)
    41     (328, 329, 330, 331, 332, 333, 334, 335)
    42     (336, 337, 338, 339, 340, 341, 342, 343)
    43     (344, 345, 346, 347, 348, 349, 350, 351)
    44     (352, 353, 354, 355, 356, 357, 358, 359)
    45     (360, 361, 362, 363, 364, 365, 366, 367)
    46     (368, 369, 370, 371, 372, 373, 374, 375)
    47     (376, 377, 378, 379, 380, 381, 382, 383)

MEMORY
    RA               PA               SIZE
    0x30000000       0x830000000      192G
    0x4000000000     0x80000000000    256G
    0x8080000000     0x100000000000   256G

IO
    DEVICE                           PSEUDONYM        OPTIONS
    pci@340                          pci_1
    pci@380                          pci_2
    pci@3c0                          pci_3
    pci@400                          pci_4
    pci@440                          pci_5
    pci@340/pci@1/pci@0/pci@6        /SYS/RCSA/PCIE3
    pci@340/pci@1/pci@0/pci@c        /SYS/RCSA/PCIE4
    pci@380/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE9
    pci@380/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE10
    pci@3c0/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE11
    pci@3c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE12
    pci@400/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE5
    pci@400/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE6
    pci@440/pci@1/pci@0/pci@e        /SYS/RCSA/PCIE7
    pci@440/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE8

------------------------------------------------------------------------------
NAME
db

CORE
    CID    CPUSET
    48     (384, 385, 386, 387, 388, 389, 390, 391)
    49     (392, 393, 394, 395, 396, 397, 398, 399)
    50     (400, 401, 402, 403, 404, 405, 406, 407)
    51     (408, 409, 410, 411, 412, 413, 414, 415)
    52     (416, 417, 418, 419, 420, 421, 422, 423)
    53     (424, 425, 426, 427, 428, 429, 430, 431)
    54     (432, 433, 434, 435, 436, 437, 438, 439)
    55     (440, 441, 442, 443, 444, 445, 446, 447)
    56     (448, 449, 450, 451, 452, 453, 454, 455)
    57     (456, 457, 458, 459, 460, 461, 462, 463)
    58     (464, 465, 466, 467, 468, 469, 470, 471)
    59     (472, 473, 474, 475, 476, 477, 478, 479)
    60     (480, 481, 482, 483, 484, 485, 486, 487)
    61     (488, 489, 490, 491, 492, 493, 494, 495)
    62     (496, 497, 498, 499, 500, 501, 502, 503)
    63     (504, 505, 506, 507, 508, 509, 510, 511)

MEMORY
    RA               PA               SIZE
    0x80000000       0x180000000000   256G

IO
    DEVICE                           PSEUDONYM        OPTIONS
    pci@480                          pci_6
    pci@4c0                          pci_7
    pci@480/pci@1/pci@0/pci@a        /SYS/RCSA/PCIE13
    pci@480/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE14
    pci@4c0/pci@1/pci@0/pci@8        /SYS/RCSA/PCIE15
    pci@4c0/pci@1/pci@0/pci@4        /SYS/RCSA/PCIE16
    pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c /SYS/MB/SASHBA1
    pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4 /SYS/RIO/NET2

Start the domains:

# ldm start app
LDom app started
# ldm start db
LDom db started

Make sure to start the vntsd service that was created, above.

# svcs -a | grep ldo
disabled        8:38:38 svc:/ldoms/vntsd:default
online          8:38:58 svc:/ldoms/agents:default
online          8:39:25 svc:/ldoms/ldmd:default
# svcadm enable vntsd

Now use the MAC address to configure the Solaris 11 Automated Installation.
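The article does not show the Automated Installer side of that step. As a rough, hedged sketch only (the AI install-service name "s11-sparc" is a placeholder, not something defined in this setup), the usual sequence on the install server is to read the guest domain's vnet MAC address and register it as an AI client:

# ldm list -o network app
  (note the MAC address reported for vnet1)
# installadm create-client -e xx:xx:xx:xx:xx:xx -n s11-sparc

installadm create-client simply maps that MAC address to an existing AI install service, so the network boot issued from the guest's OpenBoot prompt below is answered with the right install image.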
Database Logical Domain

# telnet localhost 5000
{0} ok devalias
screen                   /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@7/display@0
disk7                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p3
disk6                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p2
disk5                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p1
disk4                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0/disk@p0
scsi1                    /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@c/scsi@0
net3                     /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0,1
net2                     /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0
virtual-console          /virtual-devices/console@1
name                     aliases

{0} ok boot net2
Boot device: /pci@4c0/pci@1/pci@0/pci@c/pci@0/pci@4/network@0  File and args:
1000 Mbps full duplex Link up
Requesting Internet Address for xx:xx:xx:xx:xx:xx
Requesting Internet Address for xx:xx:xx:xx:xx:xx

WLS Logical Domain

# telnet localhost 5001
{0} ok devalias
hdd1                     /virtual-devices@100/channel-devices@200/disk@0
vnet1                    /virtual-devices@100/channel-devices@200/network@0
net                      /virtual-devices@100/channel-devices@200/network@0
disk                     /virtual-devices@100/channel-devices@200/disk@0
virtual-console          /virtual-devices/console@1
name                     aliases

{0} ok boot net
Boot device: /virtual-devices@100/channel-devices@200/network@0  File and args:
Requesting Internet Address for xx:xx:xx:xx:xx:xx
Requesting Internet Address for xx:xx:xx:xx:xx:xx

Repeat the process for the second SPARC T5-4, install Solaris, RAC and WebLogic Cluster, and you are ready to go. Maybe buying a SuperCluster would have been easier.


  • SQL Server giving a weird conversion error

    - by codingguy3000
    Hello Everyone I'm stuck and any help you can give me is greatly appreciated. create table stackoverflow_rules(myguid uniqueidentifier primary key, myvarchar50 varchar(50)) insert into stackoverflow_rules(myguid, myvarchar50) values('27C4CF31-2C4C-4C78-BBDC-2D0FDAA891CF','9985276') insert into stackoverflow_rules(myguid, myvarchar50) values('6F865BBD-1D79-4931-BCFE-71AD7A14B145','9985275') insert into stackoverflow_rules(myguid, myvarchar50) values('C91124D9-CE83-44C6-A979-427858BBCDCE','9985274') insert into stackoverflow_rules(myguid, myvarchar50) values('18D89F3C-D15D-4A27-9695-CE4417A9D752','9985273') insert into stackoverflow_rules(myguid, myvarchar50) values('40C9A127-D50D-440C-A6BF-A3C292B56121','9985272') insert into stackoverflow_rules(myguid, myvarchar50) values('3191CE74-6443-4DF0-ABFB-4083150E27A7','9985271') insert into stackoverflow_rules(myguid, myvarchar50) values('489606B3-8EE8-4308-BD3B-660FEC999B84','9985270') insert into stackoverflow_rules(myguid, myvarchar50) values('7FB986D6-7ACF-4453-B124-E688514D3A84','9985269') insert into stackoverflow_rules(myguid, myvarchar50) values('2E1662CB-FBC2-418A-9FFD-453895EE6FA4','9985268') insert into stackoverflow_rules(myguid, myvarchar50) values('6180E683-AA19-4B5D-9AA1-370B9AA8C156','9985267') insert into stackoverflow_rules(myguid, myvarchar50) values('39BDD429-4C49-4351-951F-016B89E700D0','9985267') insert into stackoverflow_rules(myguid, myvarchar50) values('9A09CF26-B168-48D2-9178-EBBD6C0BA5F4','9985267') insert into stackoverflow_rules(myguid, myvarchar50) values('56BA06A7-71F6-4AC2-817A-69A3E800BE54','9985266') insert into stackoverflow_rules(myguid, myvarchar50) values('35D8C2FE-4793-40BC-AECA-10AA722866AE','9985265') insert into stackoverflow_rules(myguid, myvarchar50) values('84162588-D2A2-4B67-869D-2D7A0CB3ABEC','9985264') insert into stackoverflow_rules(myguid, myvarchar50) values('05A8BE08-B0CF-4ADC-A901-2DB6B70713DA','9985263') insert into stackoverflow_rules(myguid, myvarchar50) values('11E1B3F5-5EC0-43BF-B868-B30BCC5F97B3','9985262') insert into stackoverflow_rules(myguid, myvarchar50) values('D48875E9-4A2B-4A5E-8C3A-6788ADD2E44E','9985261') insert into stackoverflow_rules(myguid, myvarchar50) values('5C29D799-5F86-4B5D-8B31-1AFB9E289417','9985260') insert into stackoverflow_rules(myguid, myvarchar50) values('3FAF4D60-F06A-4754-A26F-61DE6A121E9E','9985259') insert into stackoverflow_rules(myguid, myvarchar50) values('4F001BF6-BF60-4F40-AAE1-32CD707E87F8','9985258') insert into stackoverflow_rules(myguid, myvarchar50) values('56A91F39-F9D2-438C-A424-F26ED799F723','9985258') insert into stackoverflow_rules(myguid, myvarchar50) values('F55F72CA-0C2B-4DE7-B725-C9521CD57B23','9985257') insert into stackoverflow_rules(myguid, myvarchar50) values('364808A7-46E6-4639-A14D-6A350A56D2A0','9985256') insert into stackoverflow_rules(myguid, myvarchar50) values('68FA5B18-BBE3-4F1F-A9DE-D46853AD5D4A','9985255') insert into stackoverflow_rules(myguid, myvarchar50) values('B0118D37-807A-4D29-9B56-790F3D810C64','9985254') insert into stackoverflow_rules(myguid, myvarchar50) values('E998F33E-F05A-4C49-8CC2-B90BCFA9AE0E','9985253') insert into stackoverflow_rules(myguid, myvarchar50) values('A0531477-335C-4A7D-A1E7-1DAD54ECB7AD','9985252') insert into stackoverflow_rules(myguid, myvarchar50) values('96540D09-BA49-413B-9FD6-228DF524BE1A','9985251') insert into stackoverflow_rules(myguid, myvarchar50) values('23CD3C18-DAE2-463B-B27C-977488DF9C5F','9985251') insert into stackoverflow_rules(myguid, myvarchar50) 
values('8BF4AE7D-0AF0-47F9-9388-A2D4CA4C3160','9985250') insert into stackoverflow_rules(myguid, myvarchar50) values('E1892F4D-471C-4A49-8D68-F9F1E6E9C275','9985249') insert into stackoverflow_rules(myguid, myvarchar50) values('641A62CC-1DEE-4DFD-BC9A-DD47D7C45B18','9985248') insert into stackoverflow_rules(myguid, myvarchar50) values('3AF2F7CA-489D-4A79-A6F5-DB5578F381D0','9985247') insert into stackoverflow_rules(myguid, myvarchar50) values('939B3773-BE13-483C-A27F-5594A23AB6F2','9985247') insert into stackoverflow_rules(myguid, myvarchar50) values('81A5FD90-1E2D-4DB5-A10F-5624A576D566','9985247') insert into stackoverflow_rules(myguid, myvarchar50) values('E87109DD-7283-4B60-AB7F-F9A3DD384E52','9985247') insert into stackoverflow_rules(myguid, myvarchar50) values('689A789F-0FFC-45AE-87DF-66C5130338E2','9985246') insert into stackoverflow_rules(myguid, myvarchar50) values('4A9D3A2D-940B-4D45-8234-A1C98FF8A2FB','9985246') insert into stackoverflow_rules(myguid, myvarchar50) values('75073565-E623-40FC-AEF3-81620F2514A8','9985245') insert into stackoverflow_rules(myguid, myvarchar50) values('DB583FF8-1635-47C1-8241-D37C015C7642','9985244') insert into stackoverflow_rules(myguid, myvarchar50) values('39EA148B-55D1-4878-925A-39FA8592F451','9985243') insert into stackoverflow_rules(myguid, myvarchar50) values('BF1CE2D7-ABD3-460B-A7DC-BD0E2B2A5388','9985242') insert into stackoverflow_rules(myguid, myvarchar50) values('B6431717-26F0-436E-9DCC-C0C5240AC329','9985242') insert into stackoverflow_rules(myguid, myvarchar50) values('4F22E672-6F3D-454C-ABA7-D9B84D12DDE0','9985241') insert into stackoverflow_rules(myguid, myvarchar50) values('0436E893-DC43-4FF7-8BDC-BD0BF9E9A55D','9985240') insert into stackoverflow_rules(myguid, myvarchar50) values('60B2FE73-3575-4047-B324-63620FEACD6B','9985239') insert into stackoverflow_rules(myguid, myvarchar50) values('2041E1E5-F60F-4494-A000-F349F49662EC','9985238') insert into stackoverflow_rules(myguid, myvarchar50) values('B89636C8-4648-4058-8DC6-95DCE468CA63','9985237') insert into stackoverflow_rules(myguid, myvarchar50) values('4EC1B486-1E9C-4B41-94C1-5B24471BAD3D','9985236') insert into stackoverflow_rules(myguid, myvarchar50) values('4C86120E-1A27-4F59-948B-F11D8ACD498E','9985234') insert into stackoverflow_rules(myguid, myvarchar50) values('E8A1EA7A-5337-4769-9D23-25F7BFB589AF','9985217') insert into stackoverflow_rules(myguid, myvarchar50) values('6E7982F0-5899-4214-A05A-262E05A540CB','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('C55B838F-FD63-40E9-97AF-25E02A37ABB7','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('95296596-ED86-4A88-8C46-27CF79D4AFB9','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('149BC6CC-857C-4CD7-B374-29EE6382CFCF','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('5D3E88FC-1DB5-4BAF-A16B-29F2A2C7D997','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1FDB6AD4-3860-411E-A247-22B9D00C9053','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('83BD156B-C5ED-460E-95F0-21E8B4254BF8','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2FD09C37-E224-414D-8C41-220B6528EB9C','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('A46D0B0D-70E2-4AEF-BF30-2244FFA8EF9E','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('824B7F04-51B4-48F6-920A-1FDE8571E32F','9980234') insert into stackoverflow_rules(myguid, myvarchar50) 
values('79DD6034-A9DC-4AC1-9CD3-338F0521AC99','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('BFD35E07-C5DE-4C8B-ADC4-36069655F450','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('D655AD53-8107-481B-A1C9-340A7B31EFB6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('7E6FF0E9-E1F4-4522-AB91-1A64C2AC0E3A','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('3A977BFE-17F6-46FA-8568-1A8ED2F48483','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('D95A941D-DEB3-46B5-8B2B-1AC9741824ED','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('55528060-12AE-4C2E-A4A1-11E40881DEAE','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('E99B4179-DE6E-4FCB-B7B9-165C05A94424','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('20D2D92B-E45A-4883-A114-109C41E2F278','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('7161CC4A-0B3E-4B97-A973-0C5A7F26CC0D','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('5E267539-8412-4423-A82C-0C74C995D561','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('AE173244-38CD-4B8D-A1CB-0DC112AC6F54','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('3ED8BF74-D0D1-4D11-92B3-008F11E34308','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('6F77EAF9-0520-495A-ADB5-027F611E418D','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('34DAFFBC-0733-4EC0-8607-0287DA5929D6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('5266FB2F-2829-4C60-91E7-00D9A0832B8E','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('A1EC9933-92F0-4805-93C2-071F503BE816','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('EC19E064-940A-4EEA-9A12-07D2A0680C03','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('7CA5F400-0E57-4A86-B4E1-094720E98B56','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('3A7F95B9-79B6-4323-B390-5B30AE23F66C','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('CCA677CB-8889-40E6-8FDC-54C33DCBAD93','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('345FACAF-90B2-4B2D-B6CF-577F242F28C9','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('20531AFD-21EB-4B75-B50C-5FEABDAE29DB','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('A8FF5B5F-7976-43FE-B013-67CEE5F07710','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('AEF6E39A-6CC2-48E8-9999-65D7CD103A45','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('8AB565EE-4A53-40B9-9D95-66034FD72B6D','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('B0DAC1F6-B7E0-476F-8543-6282203A72C7','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('DACE56ED-5964-44FD-9E35-68E3B409B2D7','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('C64F5A8A-1930-4824-9F0E-68EF848F2F86','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('38817195-BDB0-44AC-988D-690BE9E50FD0','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('BF4202D8-A23A-48DB-8799-694578EED45A','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('D26A3E39-EEA2-4928-82F9-676B3F901021','9980234') insert into stackoverflow_rules(myguid, myvarchar50) 
values('0D3F16C2-237A-4461-9851-6B0555EDADCE','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('F8CCEE52-A31D-4B6D-9F9F-6D53BE7EB919','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('BCE3146A-AACE-4CF5-ADF1-3D5E57827D96','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('5D6E4347-ABC8-4892-89EC-3FE666A8523B','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('8BD465A9-DC91-4960-BCC7-42EAEE51024A','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('54FCE80F-F551-4548-BCE2-4499AB66D93F','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('843C8651-A95D-458F-A6E7-488F5978FB56','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('4BE7BC8D-BC97-4F8F-85BB-48CC970B9465','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('6C611A14-11CC-454D-A9C8-48CF0B2776A9','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('50819781-F028-4976-A406-45D88804C566','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1EE5DBE0-0EA3-4F9B-8C78-469D00888892','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('33B87A5D-CB69-4BD2-BEC8-4D90D6A21232','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('C31D7CD1-E9BA-4B03-BB11-4DE7022A45AD','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2E1FC057-4C57-4C27-86E4-4EC887B77ABE','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('7811CF7B-2935-47A6-92CB-520C4E0AEC4A','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2DAB5B2D-3D94-4F47-B7F5-536FAF08BCC6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2F405742-CF20-4995-84D3-976B108DBB99','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2852C9C8-325D-4C82-837E-9D6E751B794F','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('40E87A07-DA9B-4277-90BB-8FA994470CB1','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('732DF392-C8D6-4EEF-B046-8FC6C0DB4DEC','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('AA55681E-FE4A-46E9-8809-928941C165AD','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('C146FDD8-EF42-48B4-A357-90CEE93FE902','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('A0FEAAAD-8B44-4797-BD1F-A34AC872EC39','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('A45F22EE-8740-4A3B-ABB5-A8F7EE32B107','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1A073622-C5D6-41B0-BCC2-8220ED1978BA','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('C7CFCCDC-5ADF-4BCF-BBE4-7E6D611B96CC','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('A618A9A7-5DAC-4658-9B6F-7FC091C49122','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('0F698448-929F-4E3B-A6B1-810BF66DC9AD','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2FD04ED9-AC24-4E80-8902-7AF2351DAB7B','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2DA5D721-DFDD-4E96-9A5C-7DF7B0FA9ABB','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('76816CF3-FB2E-440D-91E7-7FF179CE2702','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('228A8BC4-D136-4FDA-B006-84FD69D583A0','9980234') insert into stackoverflow_rules(myguid, myvarchar50) 
values('838DCC6F-0C37-4144-9461-892F1DE2A0D4','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('E65DF83F-FDA5-4883-9E29-8CAB66297328','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('621547A3-613E-4CB7-9537-8D1FF987ADC7','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2ABB681F-5258-4DF3-A0B8-89962ADDBCB8','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('F54E5C88-17FA-407C-B457-8B69077748E9','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('63D66460-3834-4873-9BD4-74148EC300F4','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('14A19194-457F-40D3-B08E-715EF830FD75','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('75CF2565-D36A-46F6-935E-BFD82144B8A2','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('EDA93745-2009-41F6-B01F-C3F9930C0F67','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('20CFC7EE-7188-49F0-BDEB-C0CAF3610F2C','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('ED6EDD00-2151-4CA7-9F22-BF6DE74B0622','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('EC8DAC77-E516-4B8E-9FB8-C5A4C963563A','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('C6FDECC9-24BE-4AA0-B33C-C9195DC630B0','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('BD9890BA-8F8A-4596-B0F0-BA2F3467E5B4','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2F496F30-1E08-4174-ABE4-BBE3977268EF','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2CD7D3D2-77D4-43DE-A44F-B248AAF8891F','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('FFC7E6E7-00E9-41E8-BD11-B0EFD4BA3971','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('B8348F9C-D57F-4561-9981-B14DAEE7257B','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('2CAE1761-8DB0-4D18-8FF6-AD79D44EF699','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('9A241CB7-1FAE-4767-8E13-AF3A66123DC0','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('B836DB33-FB5A-4FF7-A293-D7A29488A6F6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('23207756-F6E1-406C-AEAC-DFC1710E3E41','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('64ED1587-8791-414F-B2EA-E265584BECE9','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('07442948-0FE7-4EDD-8779-E4808B20852C','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('ACAE3351-3EDF-43E3-8021-E4CBAF20BA55','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1E96680B-1E92-40F2-AAAE-E4D524206982','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('57A0F1D0-8029-4110-9C2A-D3A2F13E6776','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('D0A76745-1930-4755-90EA-D3CA0240BA6D','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('5379B540-4DCC-4A71-BE19-D1DA4B808A4D','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('B41B60EB-5C83-4CA6-8768-D2226A164FB6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('31CA2727-7227-4377-B127-D261AA0CD304','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('BD7102BB-FA67-4A33-82C4-D3616ED7CB3F','9980234') insert into stackoverflow_rules(myguid, myvarchar50) 
values('7090FCA6-144A-430B-A609-CDDFB39C4D25','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('382BE0D2-A92A-4D73-B2CE-D640A2BBA523','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1A4011C0-40C8-4ABD-8ACF-D6D3A220B940','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('53A62E1F-5926-4DEA-A7FB-C99B14A2120D','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('24C84EE0-70DF-4602-B133-F1CB765F2B29','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('B40C80C7-26C0-43F9-9B8A-F2C46A6FD79F','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('8F9FE478-6961-4042-A62D-F464F21BFC46','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('6E9B27D8-C963-4413-ABB5-F31F307F2AE1','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1CCAB652-042A-4C6F-B89B-ECBFFCA468C6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('FDA7C815-F4ED-4E6D-AE95-ED18005651EB','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('6D48A842-B5F9-45AA-BC3C-EF74C911E2FC','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('571A48F3-10E2-419C-8E72-EB4B833FA2A2','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('27C54188-4CD7-447D-9C47-E7C7F4A87A47','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('0F8E94BC-1612-4086-A6C1-E883C83758E4','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('24315A1A-CFD9-4984-AF64-F9A79E960D45','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('38602998-8149-4B6A-91EA-F9D4B93810A6','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('1FDB6A11-E422-4EA1-B4AC-FDD1197BB7F3','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('FDDCB1EA-37C9-4833-BDD8-FEFDEEF0A749','9980234') insert into stackoverflow_rules(myguid, myvarchar50) values('5241815C-CE10-4C08-BE01-CB2D1012CCF0','9980066') insert into stackoverflow_rules(myguid, myvarchar50) values('D0E5E79E-4502-42F8-B8C6-EDE3D20526B4','9970234') insert into stackoverflow_rules(myguid, myvarchar50) values('B173FC5D-BAB3-4942-A904-D9D3BA66A1ED','9960234') insert into stackoverflow_rules(myguid, myvarchar50) values('D5C2A2D9-2BA6-4059-896C-B464C8C8CB5F','9960234') insert into stackoverflow_rules(myguid, myvarchar50) values('32B865C7-1D67-457A-9550-DFDBCBFB12C6','9951166') insert into stackoverflow_rules(myguid, myvarchar50) values('82F0A99B-0C88-44EB-BE50-265C6C4C1B86','9400000') insert into stackoverflow_rules(myguid, myvarchar50) values('BDE9DC0D-B9A7-4AC9-83D5-8F9ED5F25FDA','9299199') insert into stackoverflow_rules(myguid, myvarchar50) values('2FE2415A-9D51-4AD4-9679-74BDA93DC6A6','9299166') insert into stackoverflow_rules(myguid, myvarchar50) values('4BC3D4FF-5FBB-484E-8BC6-CFE90706E3D2','9299111') insert into stackoverflow_rules(myguid, myvarchar50) values('0FC22F14-A499-4C8C-9E6B-0CF613ACF505','9281266') insert into stackoverflow_rules(myguid, myvarchar50) values('AC6B2795-A9A0-40DF-9BAF-04D4A74F4B9B','9281166') insert into stackoverflow_rules(myguid, myvarchar50) values('DAA73B60-65B9-46B2-B1AC-76A74B621700','9281166') insert into stackoverflow_rules(myguid, myvarchar50) values('D419DCBB-A76E-47DF-A59D-803AFAB770C5','9281166') insert into stackoverflow_rules(myguid, myvarchar50) values('405847E0-4764-4409-81E8-8ECCCAAE94BB','9280666') insert into stackoverflow_rules(myguid, myvarchar50) 
values('76D59559-F986-45EF-9F74-7870D97A377D','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('3F6F78A9-7930-4F76-839F-77304396CBC3','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('2C1A62F2-B783-432B-B83A-6BD8B29EE2DE','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('D371D319-6E88-4286-A46E-6C1905ADD6AC','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('10C42C2E-DC1A-43C4-959C-98D3A798D631','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('215F0003-188D-45C9-85BE-9B3811760CCB','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('4DD2BA43-BA1C-44BE-8C10-996454D63205','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('26D863E7-6F96-42BC-A2BA-99B30D94F6D9','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('625A2793-A60F-4FE7-9BD4-A953877B258D','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('5B5A2538-74E0-4A6F-9929-AA29BA3BDCCE','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('B8597353-0254-42AB-BAF7-AA4DAF195CC8','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('22F392BC-B42C-434F-9E32-AB8DFFC6EA76','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('E703EEE4-82B1-43C0-914F-ABCF3EF53E91','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('09BD6548-7395-4450-A7CA-D0AB0631F222','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('71D737EB-59CA-4685-827D-E17A0B4FA44D','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('F09ACB1E-64B0-4F29-86BF-E323C5347883','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('8A39E85B-8E49-44C1-8B4A-B9D79CC3F97F','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('E3BE436C-0BEC-45CE-9680-AFCE70D59B84','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('915D4F2A-8430-479F-84ED-064A3D6889DA','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('FF6DEFF5-072D-4E14-A6C2-0EF4862CCF28','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('2E7944D1-5A85-4D85-9660-138F30BED95C','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('E449E8A8-1CE4-49DE-898F-1C357777B674','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('65E89A21-5908-4913-840A-28E625F4C003','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('E23175FD-B60E-4FD4-A99A-2DB232BCB6B1','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('A521CC05-21C1-4759-AA00-384014F9C4CB','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('218CE896-8D3F-447B-A504-33428F797CE2','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('D4A3A407-20BF-481D-95DE-2C2BED13FD60','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('B5FCEB1B-3F0D-4DFE-B47D-4D44E88879A1','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('519BB489-1606-4A64-BA49-456DE79FC471','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('837D5167-CE68-4840-9592-432D371EE3AF','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('F140182F-844E-4CA7-BAA1-6A96FA726A93','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('8FB3AE45-3BFF-4DBF-ABAE-61A97EE73F36','9280666') insert into stackoverflow_rules(myguid, myvarchar50) 
values('33D59F0B-DAD6-4608-BF70-F2C49805FF54','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('5BFB5CEC-1322-49B0-A626-EC94092998A3','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('8AB2E1F8-A4F6-48AC-B789-FB1F46A89617','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('CD559FD0-552F-4F54-A638-F86878413D7B','9280666') insert into stackoverflow_rules(myguid, myvarchar50) values('D23AC171-E7E8-4310-B3B5-1253CCA33E5C','9251166') insert into stackoverflow_rules(myguid, myvarchar50) values('0E777743-0C70-4D76-9293-076F9DBC02EB','9251166') insert into stackoverflow_rules(myguid, myvarchar50) values('B0CDE199-9BDF-4CDD-8E32-1384CB8512B4','9200166') insert into stackoverflow_rules(myguid, myvarchar50) values('1F48F171-5179-4EC9-9554-2DA6EF60B9E8','9002266') insert into stackoverflow_rules(myguid, myvarchar50) values('A9168993-F6AF-4F81-A166-441411E72691','9001166') insert into stackoverflow_rules(myguid, myvarchar50) values('25FB4906-2AC8-4A29-B077-C4BC681D3227','9000001') insert into stackoverflow_rules(myguid, myvarchar50) values('02E14983-49E2-4867-B0C2-0BCF9BC3BAB6','8860235') insert into stackoverflow_rules(myguid, myvarchar50) values('53F915DE-1A8A-4A75-A661-0CAB56F39B11','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('834F1EB8-AEA0-435F-81AF-0C212BD54A17','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('797AFF3A-8CB0-4AE8-8430-0ED04A72394B','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('56B83693-3F46-4D8F-93A8-098517C96E94','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('1559018C-71F3-45FC-9642-09DFCC06EA78','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('04A86146-97FC-46C4-B1FE-07E916509908','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('1A3367B3-CB36-40CA-8D7D-02206840089A','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('33626BD1-AED2-4AEF-9289-199F641FDFE0','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('8468E795-71A8-4417-8179-1778FD7E915E','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('9EE6FF40-AAFB-46A8-8655-186515189AB8','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('D314A6A4-BBB5-4499-9EF4-1B37EA9131B6','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('788898AF-48E6-4DA0-BDBB-12871FE81D35','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('34D55FA5-FF82-49B5-A4EF-144999BB1B4F','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('C8FF93B1-EB7C-4711-85BA-14C78B7A27C1','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('54199346-624C-4B1E-8293-14EE9C6EF23B','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('5105C133-9120-4075-9EB6-151569E0719D','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('D03366DB-BC4A-44CC-ABC8-151F627E2A95','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('40EF76A3-2250-4840-90C1-1577AE855EEE','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('8E229744-7528-4727-880A-168331E72ED0','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('10F66C0C-C97B-4A8B-9FAC-160F3AA09A62','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('8173CB1C-A65D-4B89-9BD3-2DC4BA2F4C72','8860234') insert into stackoverflow_rules(myguid, myvarchar50) 
values('1CEAE246-6323-402D-95DB-2AC25DF1FD83','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('BB859D4A-3B1C-40FC-8C74-2BD44902894C','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('A31C45AF-D149-4789-A22D-2FB3E6A17627','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('52F98EEC-D3AC-429C-948F-306FA865EDE7','8860234') insert into stackoverflow_rules(myguid, myvarchar50) values('06E84032-C102-49F4-B544-3169FC1C62F4','8860234') insert into stackoverflo
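The statement that actually raises the error has been cut off above, so the following is only a hedged guess at the usual culprit with this schema: uniqueidentifier outranks varchar in SQL Server's data type precedence, so comparing myguid with one of the numeric-looking strings forces that string to be converted to uniqueidentifier, which fails. A minimal illustration (the failing query is assumed, not taken from the original post):

-- Fails with "Conversion failed when converting from a character
-- string to uniqueidentifier": '9985276' is not a valid GUID.
select * from stackoverflow_rules where myguid = '9985276'

-- Works: compare against the varchar column instead, or supply a
-- proper GUID literal when filtering on myguid.
select * from stackoverflow_rules where myvarchar50 = '9985276'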

