Search Results

Search found 85288 results on 3412 pages for 'new bie'.


  • JSON URL from StackExchange API returning gibberish?

    - by shsteimer
    I have a feeling I'm doing something wrong here, but I'm not quite sure if I'm missing a step or just having an encoding problem. Here's my code:

        URL url = new URL("http://api.stackoverflow.com/0.8/questions/2886661");
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        // Question q = new Gson().fromJson(in, Question.class);
        String line;
        StringBuffer content = new StringBuffer();
        while ((line = in.readLine()) != null) {
            content.append(line);
        }

    When I print content, I get a whole bunch of wingdings and special characters, basically gibberish. I would copy and paste it here, but that isn't working. What am I doing wrong?

    Read the article

  • binary file to string

    - by andrew
    I'm trying to read a binary file (for example an executable) into a string, then write it back:

        FileStream fs = new FileStream("C:\\tvin.exe", FileMode.Open);
        BinaryReader br = new BinaryReader(fs);
        byte[] bin = br.ReadBytes(Convert.ToInt32(fs.Length));
        System.Text.Encoding enc = System.Text.Encoding.ASCII;
        string myString = enc.GetString(bin);
        fs.Close();
        br.Close();

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] rebin = encoding.GetBytes(myString);
        FileStream fs2 = new FileStream("C:\\tvout.exe", FileMode.Create);
        BinaryWriter bw = new BinaryWriter(fs2);
        bw.Write(rebin);
        fs2.Close();
        bw.Close();

    This does not work (the result has exactly the same size in bytes but can't run). If I do bw.Write(bin) the result is OK, but I must save it to a string.
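
    A minimal sketch of a lossless round trip, on the assumption that the string only needs to carry the bytes rather than be human-readable: Base64 preserves every byte, whereas ASCII encoding replaces anything above 0x7F with '?', which is why the rewritten executable is corrupted even though its byte count happens to match.

        using System;
        using System.IO;

        class Base64RoundTrip
        {
            static void Main()
            {
                // Read the original binary file (paths taken from the question).
                byte[] bin = File.ReadAllBytes(@"C:\tvin.exe");

                // Base64 maps arbitrary bytes to printable characters, so nothing is lost.
                string myString = Convert.ToBase64String(bin);

                // Decode back to the exact original bytes and write them out.
                byte[] rebin = Convert.FromBase64String(myString);
                File.WriteAllBytes(@"C:\tvout.exe", rebin);
            }
        }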

    Read the article

  • Help Understanding Function

    - by Fred F.
    What does the following function do?

        public static double CleanAngle(double angle)
        {
            while (angle < 0) angle += 2 * System.Math.PI;
            while (angle > 2 * System.Math.PI) angle -= 2 * System.Math.PI;
            return angle;
        }

    This is how it is used with Atan2. I believe the actual values passed to Atan2 are always positive.

        static void Main(string[] args)
        {
            int q = 1;
            //'x- and y-coordinates will always be positive values
            //'therefore, do i need to "clean"?
            foreach (Point oPoint in new Point[] { new Point(8,20), new Point(-8,20), new Point(8,-20), new Point(-8,-20) })
            {
                Debug.WriteLine(Math.Atan2(oPoint.Y, oPoint.X), "unclean " + q.ToString());
                Debug.WriteLine(CleanAngle(Math.Atan2(oPoint.Y, oPoint.X)), "cleaned " + q.ToString());
                q++;
            }
            //'output
            //'unclean 1: 1.19028994968253
            //'cleaned 1: 1.19028994968253
            //'unclean 2: 1.95130270390726
            //'cleaned 2: 1.95130270390726
            //'unclean 3: -1.19028994968253
            //'cleaned 3: 5.09289535749705
            //'unclean 4: -1.95130270390726
            //'cleaned 4: 4.33188260327232
        }
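
    For what it's worth, CleanAngle normalizes an angle into the range [0, 2π], which is why the negative Atan2 results for the third and fourth points come back shifted up by 2π while the positive ones pass through unchanged. A sketch of an essentially equivalent version using the remainder operator (the method name here is illustrative):

        using System;

        static class AngleMath
        {
            // Essentially equivalent to CleanAngle, written with % instead of loops.
            public static double NormalizeAngle(double angle)
            {
                double twoPi = 2 * Math.PI;
                angle %= twoPi;          // remainder keeps the sign of its input
                if (angle < 0)
                    angle += twoPi;      // shift negative results up into [0, 2*PI)
                return angle;
            }
        }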

    Read the article

  • Simple Project Templates

    - by Geertjan
    The NetBeans sources include a module named "simple.project.templates": In the module sources, Tim Boudreau turns out to be the author of the code, so I asked him what it was all about, and if he could provide some usage code. His response, from approximately this time last year because it's been sitting in my inbox for a while, is below. Sure - though I think the javadoc in it is fairly complete.  I wrote it because I needed to create a bunch of project templates for Javacard, and all of the ways that is usually done were grotesque and complicated.  I figured we already have the ability to create files from templates, and we already have the ability to do substitutions in templates, so why not have a single file that defines the project as a list of file templates to create (with substitutions in the names) and some definitions of what should be in project properties. You can also add files to the project programmatically if you want.Basically, a template for an entire project is a .properties file.  Any line which doesn't have the prefix 'pp.' or 'pvp.' is treated as the definition of one file which should be created in the new project.  Any such line where the key ends in * means that file should be opened once the new project is created.  So, for example, in the nodejs module, the definition looks like: {{projectName}}.js*=Templates/javascript/HelloWorld.js .npmignore=node_hidden_templates/npmignore So, the first line means:  - Create a file with the same name as the project, using the HelloWorld template    - I.e. the left side of the line is the relative path of the file to create, and the right side is the path in the system filesystem for the template to use       - If the template is not one you normally want users to see, just register it in the system filesystem somewhere other than Templates/ (but remember to set the attribute that marks it as a template)  - Include that file in the set of files which should be opened in the editor once the new project is created. To actually create a project, first you just create a new ProjectCreator: ProjectCreator gen = new ProjectCreator( parentFolderOfNewProject ); Now, if you want to programmatically generate any files, in addition to those defined in the template, you can: gen.add (new FileCreator("nbproject", "project.xml", false) {     public DataObject create (FileObject project, Map<String,String> substitutions) throws IOException {          ...     } }); Then pass the FileObject for the project template (the properties file) to the ProjectCreator's createProject method (hmm, maybe it should be the string path to the project template instead, to save the caller trouble looking up the FileObject for the template).  That method looks like this: public final GeneratedProject createProject(final ProgressHandle handle, final String name, final FileObject template, final Map<String, String> substitutions) throws IOException { The name parameter should be the directory name for the new project;  the map is the strings you gathered in the wizard which should be used for substitutions.  createProject should be called on a background thread (i.e. use a ProgressInstantiatingIterator for the wizard iterator and just pass in the ProgressHandle you are given). The return value is a GeneratedProject object, which is just a holder for the created project directory and the set of DataObjects which should be opened when the wizard finishes. 
    I'd love to see simple.project.templates moved out of the javacard cluster, as it is really useful and much simpler than any of the stuff currently done for generating projects.  It would also be possible to do much richer tools for creating projects in apisupport - i.e. choose (or create in the wizard) the templates you want to use, generate a skeleton wizard with a UI for all the properties you'd like to substitute, etc. Here is a partial project template from Javacard - for example usage, see org.netbeans.modules.javacard.wizard.ProjectWizardIterator in javacard.project (or the much simpler one in contrib/nodejs).

        # This properties file describes what to create when a project template is
        # instantiated. The keys are paths on disk relative to the project root.
        # The values are paths to the templates to use for those files in the system
        # filesystem. Any string inside {{ and }}'s will be substituted using properties
        # gathered in the template wizard.
        # Special key prefixes are
        #   pp.  - indicates an entry for nbproject/project.properties
        #   pvp. - indicates an entry for nbproject/private/private.properties

        # File templates, in format [path-in-project=path-to-template]
        META-INF/javacard.xml=org-netbeans-modules-javacard/templates/javacard.xml
        META-INF/MANIFEST.MF=org-netbeans-modules-javacard/templates/EAP_MANIFEST.MF
        APPLET-INF/applet.xml=org-netbeans-modules-javacard/templates/applet.xml
        scripts/{{classnamelowercase}}.scr=org-netbeans-modules-javacard/templates/test.scr
        src/{{packagepath}}/{{classname}}.java*=Templates/javacard/ExtendedApplet.java
        nbproject/deployment.xml=org-netbeans-modules-javacard/templates/deployment.xml

        # project.properties contents
        pp.display.name={{projectname}}
        pp.platform.active={{activeplatform}}
        pp.active.device={{activedevice}}
        pp.includes=**
        pp.excludes=

    I will be using the above info in an upcoming blog entry and provide step by step instructions showing how to use them. However, anyone else out there should have enough info from the above to get started yourself!

    Read the article

  • Why We Should Learn to Stop Worrying and Love Millennials

    - by HCM-Oracle
    By Christine Mellon Much is said and written about the new generations of employees entering our workforce, as though they are a strange specimen, a mysterious life form to be “figured out,” accommodated and engaged – at a safe distance, of course.  At its worst, this talk takes a critical and disapproving tone, with baby boomer employees adamantly refusing to validate this new breed of worker, let alone determine how to help them succeed and achieve their potential.   The irony of our baby-boomer resentments and suspicions is that they belie the fact that we created the very vision that younger employees are striving to achieve.  From our frustrations with empty careers that did not fulfill us, from our opposition to “the man,” from our sharp memories of our parents’ toiling for 30 years just for the right to retire, from the simple desire not to live our lives in a state of invisibility, came the seeds of hope for something better. One characteristic of Millennial workers that grew from these seeds is the desire to experience as much as possible.  They are the “Experiential Employee”, with a passion for growing in diverse ways and expanding personal and professional horizons.  Rather than rooting themselves in a single company for a career, or even in a single career path, these employees are committed to building a broad portfolio of experiences and capabilities that will enable them to make a difference and to leave a mark of significance in the world.  How much richer is the organization that nurtures and leverages this inclination?  Our curmudgeonly ways must be surrendered and our focus redirected toward building the next generation of talent ecosystems, if we are to optimize what future generations have to offer.   Accelerating Professional Development In spite of our Boomer grumblings about Millennials’ “unrealistic” expectations, the truth is that we have a well-matched set of circumstances.  We have executives-in-waiting who want to learn quickly and a concurrent, urgent need to ramp up their development time, based on anticipated high levels of retirement in the next 10+ years.  Since we need to rapidly skill up these heirs to the corporate kingdom, isn’t it a fortunate coincidence that they are hungry to learn, develop and move fluidly throughout our organizations??  So our challenge now is to efficiently operationalize the wisdom we have acquired about effective learning and development.   We have already evolved from classroom-based models to diverse instructional methods.  The next step is to find the best approaches to help younger employees learn quickly and apply new learnings in an impactful way.   Creating temporary or even permanent functional partnerships among Millennial employees is one way to maximize outcomes.  This might take the form of 2 or more employees owning aspects of what once fell under a single role.  While one might argue this would mean duplication of resources, it could be a short term cost while employees come up to speed.  And the potential benefits would be numerous:  leveraging and validating the inherent sense of community of new generations, creating cross-functional skills with broad applicability, yielding additional perspectives and approaches to traditional work outcomes, and accelerating the performance curve for incumbents through Cooperative Learning (Johnson, D. and Johnson R., 1989, 1999).  
This well-researched teaching strategy, where students support each other in the absorption and application of new information, has been shown to deliver faster, more efficient learning, and greater retention. Alternately, perhaps short term contracts with exiting retirees, or former retirees, to help facilitate the development of following generations may have merit.  Again, a short term cost, certainly.  However, the gains realized in shortening the learning curve, and strengthening engagement are substantial and lasting. Ultimately, there needs to be creative thinking applied for each organization on how to accelerate the capabilities of our future leaders in unique ways that mesh with current culture. The manner in which performance is evaluated must finally shift as well.  Employees will need to be assessed on how well they have developed key skills and capabilities vs. end-to-end mastery of functional positions they have no interest in keeping for an entire career. As we become more comfortable in placing greater and greater weight on competencies vs. tasks, we will realize increased organizational agility via this new generation of workers, which will be further enhanced by their natural flexibility and appetite for change. Revisiting Succession  For many years, organizations have failed to deliver desired succession planning outcomes.  According to CEB’s 2013 research, only 28% of current leaders were pre-identified in a succession plan. These disappointing results, along with the entrance of the experiential, Millennial employee into the workforce, may just provide the needed impetus for HR to reinvent succession processes.   We have recognized that the best professional development efforts are not always linear, and the time has come to fully adopt this philosophy in regard to succession as well.  Paths to specific organizational roles will not look the same for newer generations who seek out unique learning opportunities, without consideration of a singular career destination.  Rather than charting particular jobs as precursors for key positions, the experiences and skills behind what makes an incumbent successful must become essential in succession mapping.  And the multitude of ways in which those experiences and skills may be acquired must be factored into the process, along with the individual employee’s level of learning agility. While this may seem daunting, it is necessary and long overdue.  We have talked about the criticality of competency-based succession, however, we have not lived up to our own rhetoric.  Many Boomers have experienced the same frustration in our careers; knowing we are capable of shining in a particular role, but being denied the opportunity due to how our career history lined up, on paper, with documented job requirements.  These requirements usually emphasized past jobs/titles and specific tasks, versus capabilities, drive and willingness (let alone determination) to learn new things.  How satisfying would it be for us to leave a legacy where such narrow thinking no longer applies and potential is amplified? Realizing Diversity Another bloom from the seeds we Boomers have tried to plant over the past decades is a completely evolved view of diversity.  Millennial employees assume a diverse workforce, and are startled by anything less.  Their social tolerance, nurtured by wide and diverse networks, is unprecedented.  College graduates expect a similar landscape in the “real world” to what they experienced throughout their lives.  
They appreciate and seek out divergent points of view and experiences without needing any persuasion.  The face of our U.S. workforce will likely see dramatic change as Millennials apply their fresh take on hiring and building strong teams, with an inherent sense of inclusion.  This wonderful aspect of the Millennial wave should be celebrated and strongly encouraged, as it is the fulfillment of our own aspirations. Future Perfect The Experiential Employee is operating more as a free agent than a long term player, and their commitment will essentially last as long as meaningful organizational culture and personal/professional opportunities keep their interest.  As Boomers, we have laid the foundation for this new, spirited employment attitude, and we should take pride in knowing that.  Generations to come will challenge organizations to excel in how they identify, manage and nurture talent. Let’s support and revel in the future that we’ve helped invent, rather than lament what we think has been lost.  After all, the future is always connected to the past.  And as so eloquently phrased by Antoine Lavoisier, French nobleman, chemist and politico:  “Nothing is Lost, Nothing is Created, and Everything is Transformed.” Christine has over 25 years of diverse HR experience.  She has held HR consulting and corporate roles, including CHRO positions for Echostar in Denver, a 6,000+ employee global engineering firm, and Aepona, a startup software firm, successfully acquired by Intel. Christine is a resource to Oracle clients, to assist in Human Capital Management strategy development and implementation, compensation practices, talent development initiatives, employee engagement, global HR management, and integrated HR systems and processes that support the full employee lifecycle. 

    Read the article

  • capturing video from ip camera

    - by Ruby
    I am trying to capture video from ip camera into my application , its giving exception com.sun.image.codec.jpeg.ImageFormatException: Not a JPEG file: starts with 0x0d 0x0a at sun.awt.image.codec.JPEGImageDecoderImpl.readJPEGStream(Native Method) at sun.awt.image.codec.JPEGImageDecoderImpl.decodeAsBufferedImage(Unknown Source) at test.AxisCamera1.readJPG(AxisCamera1.java:130) at test.AxisCamera1.readMJPGStream(AxisCamera1.java:121) at test.AxisCamera1.readStream(AxisCamera1.java:100) at test.AxisCamera1.run(AxisCamera1.java:171) at java.lang.Thread.run(Unknown Source) its giving exception at image = decoder.decodeAsBufferedImage(); Here is the code i am trying private static final long serialVersionUID = 1L; public boolean useMJPGStream = true; public String jpgURL = "http://ip here/video.cgi/jpg/image.cgi?resolution=640×480"; public String mjpgURL = "http://ip here /video.cgi/mjpg/video.cgi?resolution=640×480"; DataInputStream dis; private BufferedImage image = null; public Dimension imageSize = null; public boolean connected = false; private boolean initCompleted = false; HttpURLConnection huc = null; Component parent; /** Creates a new instance of AxisCamera */ public AxisCamera1(Component parent_) { parent = parent_; } public void connect() { try { URL u = new URL(useMJPGStream ? mjpgURL : jpgURL); huc = (HttpURLConnection) u.openConnection(); // System.out.println(huc.getContentType()); InputStream is = huc.getInputStream(); connected = true; BufferedInputStream bis = new BufferedInputStream(is); dis = new DataInputStream(bis); if (!initCompleted) initDisplay(); } catch (IOException e) { // incase no connection exists wait and try // again, instead of printing the error try { huc.disconnect(); Thread.sleep(60); } catch (InterruptedException ie) { huc.disconnect(); connect(); } connect(); } catch (Exception e) { ; } } public void initDisplay() { // setup the display if (useMJPGStream) readMJPGStream(); else { readJPG(); disconnect(); } imageSize = new Dimension(image.getWidth(this), image.getHeight(this)); setPreferredSize(imageSize); parent.setSize(imageSize); parent.validate(); initCompleted = true; } public void disconnect() { try { if (connected) { dis.close(); connected = false; } } catch (Exception e) { ; } } public void paint(Graphics g) { // used to set the image on the panel if (image != null) g.drawImage(image, 0, 0, this); } public void readStream() { // the basic method to continuously read the // stream try { if (useMJPGStream) { while (true) { readMJPGStream(); parent.repaint(); } } else { while (true) { connect(); readJPG(); parent.repaint(); disconnect(); } } } catch (Exception e) { ; } } public void readMJPGStream() { // preprocess the mjpg stream to remove the // mjpg encapsulation readLine(3, dis); // discard the first 3 lines readJPG(); readLine(2, dis); // discard the last two lines } public void readJPG() { // read the embedded jpeg image try { JPEGImageDecoder decoder = JPEGCodec.createJPEGDecoder(dis); image = decoder.decodeAsBufferedImage(); } catch (Exception e) { e.printStackTrace(); disconnect(); } } public void readLine(int n, DataInputStream dis) { // used to strip out the // header lines for (int i = 0; i < n; i++) { readLine(dis); } } public void readLine(DataInputStream dis) { try { boolean end = false; String lineEnd = "\n"; // assumes that the end of the line is marked // with this byte[] lineEndBytes = lineEnd.getBytes(); byte[] byteBuf = new byte[lineEndBytes.length]; while (!end) { dis.read(byteBuf, 0, lineEndBytes.length); String t = new 
String(byteBuf); System.out.print(t); // uncomment if you want to see what the // lines actually look like if (t.equals(lineEnd)) end = true; } } catch (Exception e) { e.printStackTrace(); } } public void run() { System.out.println("in Run..................."); connect(); readStream(); } @SuppressWarnings("deprecation") public static void main(String[] args) { JFrame jframe = new JFrame(); jframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); AxisCamera1 axPanel = new AxisCamera1(jframe); new Thread(axPanel).start(); jframe.getContentPane().add(axPanel); jframe.pack(); jframe.show(); } } Any suggestions what I am doing wrong here??

    Read the article

  • Oddities when mixing Zend Framework 1.11 & Doctrine 2 Autoloaders

    - by jiewmeng
    I have set up autoloading in my ZF/Doctrine2 app as follows:

        $zendAutoloader = Zend_Loader_Autoloader::getInstance();

        $autoloader = array(new ClassLoader('Symfony'), 'loadClass');
        $zendAutoloader->pushAutoloader($autoloader, 'Symfony');

        $autoloader = array(new ClassLoader('Doctrine'), 'loadClass');
        $zendAutoloader->pushAutoloader($autoloader, 'Doctrine');

        $autoloader = array(new ClassLoader('Application', realpath(__DIR__ . '/..')), 'loadClass');
        $zendAutoloader->pushAutoloader($autoloader, 'Application');

        $autoloader = array(new ClassLoader('DoctrineExtensions'), 'loadClass');
        $zendAutoloader->pushAutoloader($autoloader, 'DoctrineExtensions');

    I find that the DoctrineExtensions autoloading is not working while the other classes are. To verify that the path etc. are right, I tried:

        $autoloader = new ClassLoader('DoctrineExtensions');
        $autoloader->register();

    And it works. So it seems it has something to do with Zend Framework?

    Read the article

  • C# streamreader, delimiter problem.

    - by Mike
    What I have is a huge txt file, 60 MB. I need to read each line and produce a file, split based on a delimiter. I'm having no issue reading the file or producing the output file; my complication comes from the delimiter - the code can't see it. If anybody could offer a suggestion on how to read that delimiter I would be so grateful. The delimiter is Ç.

        public void file1()
        {
            string betaFilePath = @"C:\dtable.txt";
            StringBuilder sb = new StringBuilder();
            using (FileStream fs = new FileStream(betaFilePath, FileMode.Open))
            using (StreamReader rdr = new StreamReader(fs))
            {
                while (!rdr.EndOfStream)
                {
                    string[] betaFileLine = rdr.ReadLine().Split('Ç');
                    {
                        sb.AppendLine(betaFileLine[0] + "ç" + betaFileLine[1] + betaFileLine[2] + "ç" + betaFileLine[3] + "ç" +
                            betaFileLine[4] + "ç" + betaFileLine[5] + "ç" + betaFileLine[6] + "ç" + betaFileLine[7] + "ç" +
                            betaFileLine[8] + "ç" + betaFileLine[9] + "ç" + betaFileLine[10] + "ç");
                    }
                }
            }
            using (FileStream fs = new FileStream(@"C:\testarea\load1.txt", FileMode.Create))
            using (StreamWriter writer = new StreamWriter(fs))
            {
                writer.Write(sb.ToString());
            }
        }
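
    A sketch of one likely fix, assuming the file was saved in a single-byte ANSI code page such as Windows-1252: StreamReader defaults to UTF-8, so the Ç byte never decodes to the character the Split is looking for. Passing the encoding explicitly lets the delimiter come through (the code page is an assumption, not something stated in the question):

        using System.IO;
        using System.Text;

        class DelimiterReader
        {
            static void Main()
            {
                // Assumption: the source file is ANSI / Windows-1252, not UTF-8.
                Encoding ansi = Encoding.GetEncoding(1252);

                using (var rdr = new StreamReader(@"C:\dtable.txt", ansi))
                {
                    while (!rdr.EndOfStream)
                    {
                        // With the right encoding, 'Ç' decodes correctly and Split can find it.
                        string[] fields = rdr.ReadLine().Split('Ç');
                        // ... rebuild the output line from fields as in the original code ...
                    }
                }
            }
        }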

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2: private static SecurityToken GetToken() {     // Windows authentication over transport security     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         stsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcEndpoint),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy: private static void CallService(SecurityToken token) {     // create binding and turn off sessions     var binding = new WS2007FederationHttpBinding(         WSFederationHttpSecurityMode.TransportWithMessageCredential);     binding.Security.Message.EstablishSecurityContext = false;       // create factory and enable WIF plumbing     var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));     factory.ConfigureChannelFactory<IService>();       // turn off CardSpace - we already have the token     factory.Credentials.SupportInteractive = false;       var channel = factory.CreateChannelWithIssuedToken<IService>(token);       channel.GetClaims().ForEach(c =>         Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",             c.ClaimType,             c.Value,             c.Issuer,             c.OriginalIssuer)); } Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself since you have full control over the RST message that gets send to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. Well – this in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.
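
    To make the appliesTo point concrete: since the value is never de-referenced, the RST built in the GetToken method above could just as well carry a logical realm. A sketch of that fragment, where "urn:myapp:services" is purely an illustrative name rather than anything from the post:

        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            // A symbolic realm shared by a group of services, instead of a physical endpoint URL.
            AppliesTo = new EndpointAddress("urn:myapp:services"),
            KeyType = KeyTypes.Symmetric
        };

    The same logical name would then be configured as the relying party identifier in ADFS 2 and as the audience URI on the service side, so several services can share a single realm.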

    Read the article

  • Code for TouchPad works, but not for DPAD - please help me fix this

    - by Chandan
    package org.coe.twoD; import android.app.Activity; import android.content.Context; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; //import android.graphics.Path; import android.graphics.Rect; //import android.graphics.RectF; import android.os.Bundle; //import android.util.Log; import android.util.Log; import android.view.KeyEvent; import android.view.MotionEvent; import android.view.View; import android.view.View.OnClickListener; public class TwoD extends Activity implements OnClickListener { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); View draw2d = findViewById(R.id.draw_button); draw2d.setOnClickListener(this); } public void onClick(View v) { if (R.id.draw_button == v.getId()) { setContentView(new draw2D(this)); } } public class draw2D extends View { private static final String TAG = "Sudoku"; private float width; // width of one tile private float height; // height of one tile private int selX; // X index of selection private int selY; // Y index of selection private final Rect selRect = new Rect(); public draw2D(Context context) { super(context); } @Override protected void onSizeChanged(int w, int h, int oldw, int oldh) { width = w / 9f; height = h / 9f; getRect(selX, selY, selRect); Log.d(TAG, "onSizeChanged: width " + width + ", height " + height); super.onSizeChanged(w, h, oldw, oldh); } @Override protected void onDraw(Canvas canvas) { // Draw the background... Paint background = new Paint(); background.setColor(getResources().getColor(R.color.background)); canvas.drawRect(0, 0, getWidth(), getHeight(), background); // Draw the board... // Define colors for the grid lines Paint dark = new Paint(); dark.setColor(getResources().getColor(R.color.dark)); Paint hilite = new Paint(); hilite.setColor(getResources().getColor(R.color.hilite)); Paint light = new Paint(); light.setColor(getResources().getColor(R.color.light)); // Draw the minor grid lines for (int i = 0; i < 9; i++) { canvas.drawLine(0, i * height, getWidth(), i * height, light); canvas.drawLine(0, i * height + 1, getWidth(), i * height + 1, hilite); canvas.drawLine(i * width, 0, i * width, getHeight(), light); canvas.drawLine(i * width + 1, 0, i * width + 1, getHeight(), hilite); } // Draw the major grid lines for (int i = 0; i < 9; i++) { if (i % 3 != 0) continue; canvas.drawLine(0, i * height, getWidth(), i * height, dark); canvas.drawLine(0, i * height + 1, getWidth(), i * height + 1, hilite); canvas.drawLine(i * width, 0, i * width, getHeight(), dark); canvas.drawLine(i * width + 1, 0, i * width + 1, getHeight(), hilite); } /* * dark.setColor(Color.MAGENTA); Path circle= new Path(); * circle.addCircle(150, 150, 100, Path.Direction.CW); * canvas.drawPath(circle, dark); * * * Path rect=new Path(); * * RectF rectf= new RectF(150,200,250,300); rect.addRect(rectf, * Path.Direction.CW); canvas.drawPath(rect, dark); * * * canvas.drawRect(0, 0,250, 250, dark); * * * canvas.drawText("Hello", 200,200, dark); */ Paint selected = new Paint(); selected.setColor(Color.GREEN); canvas.drawRect(selRect, selected); } /* * public boolean onTouchEvent(MotionEvent event){ * if(event.getAction()!=MotionEvent.ACTION_DOWN) return * super.onTouchEvent(event); * select((int)(event.getX()/width),(int)(event.getY()/height)); * * * return true; } */ private void select(int x, int y) { invalidate(selRect); selX = Math.min(Math.max(x, 0), 8); selY = Math.min(Math.max(y, 
0), 8); getRect(selX, selY, selRect); invalidate(selRect); } @Override public boolean onKeyUp(int keyCode, KeyEvent event) { return super.onKeyUp(keyCode, event); } @Override public boolean onTouchEvent(MotionEvent event) { if (event.getAction() != MotionEvent.ACTION_DOWN) return super.onTouchEvent(event); select((int) (event.getX() / width), (int) (event.getY() / height)); // game.showKeypadOrError(selX, selY); Log.d(TAG, "onTouchEvent: x " + selX + ", y " + selY); return true; } @Override public boolean onKeyDown(int keyCode, KeyEvent event) { Log.d(TAG, "onKeyDown: keycode=" + keyCode + ", event=" + event); switch (keyCode) { case KeyEvent.KEYCODE_DPAD_UP: select(selX, selY - 1); break; case KeyEvent.KEYCODE_DPAD_DOWN: select(selX, selY + 1); break; case KeyEvent.KEYCODE_DPAD_LEFT: select(selX - 1, selY); break; case KeyEvent.KEYCODE_DPAD_RIGHT: select(selX + 1, selY); break; default: return super.onKeyDown(keyCode, event); } return true; } private void getRect(int x, int y, Rect rect) { rect.set((int) (x * width), (int) (y * height), (int) (x * width + width), (int) (y * height + height)); } } }

    Read the article

  • shared libraries creating a soft link

    - by robUK
    Hello,

    Red Hat 5.5, gcc version 4.1.2. I have a directory called lib, and in that directory I have all the shared libraries (about 30) that we get from our customer, as we use their API. We link with this API. Directory structure: /usr/CSAPI/lib

    However, our customer will update their API, so we get new libraries, normally about 3 or 4. What I have been doing when I get new libraries is to move the old ones into another directory (/usr/CSAPI/Old_libs) and replace them with the new libraries in the lib directory. The new and old files have the same name, i.e. libcs.so (old) and libcs.so (new).

    Is there a better way to manage this? I was thinking of creating a soft link, but as the names are the same, I am not sure that this will work. Many thanks,

    Read the article

  • Is it possible to replace groovy method for existing object?

    - by Jean Barmash
    The following code tries to replace an existing method in a Groovy class:

        class A {
            void abc() { println "original" }
        }

        x = new A()
        x.abc()
        A.metaClass.abc = {-> println "new" }
        x.abc()
        A.metaClass.methods.findAll{ it.name == "abc" }.each { println "Method $it" }
        new A().abc()

    And it results in the following output:

        original
        original
        Method org.codehaus.groovy.runtime.metaclass.ClosureMetaMethod@103074e[name: abc params: [] returns: class java.lang.Object owner: class A]
        Method public void A.abc()
        new

    Does this mean that when I modify the metaclass by setting it to a closure, it doesn't really replace the method but just adds another one it can call, resulting in the metaclass having two methods? Is it possible to truly replace the method so the second line of output prints "new"? When trying to figure it out, I found that DelegatingMetaClass might help - is that the most Groovy way to do this?

    Read the article

  • Unable to keep the connection using a wireless bridge

    - by dan
    I am running Ubuntu 12.04 on a dell inspiron desktop (core 2 duo) and am using wicd to manage my network/wifi. I've found that the WiFi card in the machine has trouble staying connected to my router (I believe this is a function of distance between the two), so I've taken an old Belkin F5d7231 wireless router and installed dd-wrt on it to use as a wireless bridge hoping that it will have better reception. I think everything up through the wireless bridge is working OK since I have no problems accessing the internet through it with my MacBook. The problem arises when I try to hook the ubuntu machine up to the wireless bridge. It will connect for a few minutes, but it will quickly disconnect without clear triggering event; it may be more likely to disconnect if there is a heavy traffic load going over it (could be something as simple as "cat big_text_file" in an ssh session). I've tried switching from dhclient to dhcpcd without much improvement. Here is the output from the syslog when it connects: Jun 30 17:10:08 Chicabuntu dhcpcd[28278]: wlan1: dhcpcd not running Jun 30 17:10:08 Chicabuntu dhcpcd[28278]: wlan1: exiting Jun 30 17:10:08 Chicabuntu dhcpcd[28312]: eth0: dhcpcd not running Jun 30 17:10:08 Chicabuntu dhcpcd[28312]: eth0: exiting Jun 30 17:10:08 Chicabuntu avahi-daemon[1041]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 30 17:10:08 Chicabuntu avahi-daemon[1041]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83. Jun 30 17:10:08 Chicabuntu avahi-daemon[1041]: Withdrawing address record for fe80::21c:c4ff:fe31:1a83 on eth0. Jun 30 17:10:08 Chicabuntu kernel: [15184.976127] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X Jun 30 17:10:08 Chicabuntu kernel: [15185.010805] ADDRCONF(NETDEV_UP): eth0: link is not ready Jun 30 17:10:08 Chicabuntu dhcpcd[28347]: eth0: dhcpcd not running Jun 30 17:10:08 Chicabuntu dhcpcd[28347]: eth0: exiting Jun 30 17:10:08 Chicabuntu kernel: [15185.180156] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X Jun 30 17:10:08 Chicabuntu kernel: [15185.212785] ADDRCONF(NETDEV_UP): eth0: link is not ready Jun 30 17:10:10 Chicabuntu kernel: [15187.027445] tg3 0000:3f:00.0: eth0: Link is up at 100 Mbps, full duplex Jun 30 17:10:10 Chicabuntu kernel: [15187.027452] tg3 0000:3f:00.0: eth0: Flow control is on for TX and on for RX Jun 30 17:10:10 Chicabuntu kernel: [15187.028300] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: dhcpcd 3.2.3 starting Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: hardware address = 00:1c:c4:31:1a:83 Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: DUID = 00:01:00:01:17:81:85:79:00:1c:c4:31:1a:83 Jun 30 17:10:10 Chicabuntu dhcpcd[28353]: eth0: broadcasting for a lease Jun 30 17:10:11 Chicabuntu avahi-daemon[1041]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83. Jun 30 17:10:11 Chicabuntu avahi-daemon[1041]: New relevant interface eth0.IPv6 for mDNS. Jun 30 17:10:11 Chicabuntu avahi-daemon[1041]: Registering new address record for fe80::21c:c4ff:fe31:1a83 on eth0.*. 
Jun 30 17:10:20 Chicabuntu kernel: [15197.568016] eth0: no IPv6 routers present Jun 30 17:10:29 Chicabuntu dhcpcd[28353]: eth0: offered 192.168.1.111 from 192.168.1.254 Jun 30 17:10:29 Chicabuntu dhcpcd[28353]: eth0: checking 192.168.1.111 is available on attached networks Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: leased 192.168.1.111 for 86400 seconds Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: adding IP address 192.168.1.111/24 Jun 30 17:10:30 Chicabuntu avahi-daemon[1041]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.1.111. Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: adding default route via 192.168.1.254 metric 0 Jun 30 17:10:30 Chicabuntu dhcpcd[28353]: eth0: exiting Jun 30 17:10:30 Chicabuntu avahi-daemon[1041]: New relevant interface eth0.IPv4 for mDNS. Jun 30 17:10:30 Chicabuntu avahi-daemon[1041]: Registering new address record for 192.168.1.111 on eth0.IPv4. Jun 30 17:10:30 Chicabuntu dhcpcd.sh: interface eth0 has been configured with new IP=192.168.1.111 Jun 30 17:10:39 Chicabuntu ntpdate[28439]: adjust time server 91.189.94.4 offset 0.001915 sec And here is the syslog from when it shuts down the connection without reason: Jun 30 17:12:15 Chicabuntu kernel: [15312.575455] tg3 0000:3f:00.0: eth0: Link is down Jun 30 17:12:16 Chicabuntu dhcpcd[28603]: eth0: sending signal 1 to pid 28361 Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: received SIGHUP, releasing lease Jun 30 17:12:16 Chicabuntu dhcpcd[28603]: eth0: exiting Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Withdrawing address record for 192.168.1.111 on eth0. Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.1.111. Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: removing default route via 192.168.1.254 metric 0 Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83. Jun 30 17:12:16 Chicabuntu avahi-daemon[1041]: Withdrawing address record for fe80::21c:c4ff:fe31:1a83 on eth0. 
Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: netlink: No such process Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: removing IP address 192.168.1.111/24 Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: netlink: Cannot assign requested address Jun 30 17:12:16 Chicabuntu dhcpcd[28361]: eth0: exiting Jun 30 17:12:16 Chicabuntu dhcpcd.sh: interface eth0 has been brought down Jun 30 17:12:17 Chicabuntu kernel: [15313.612141] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X Jun 30 17:12:17 Chicabuntu kernel: [15313.644703] ADDRCONF(NETDEV_UP): eth0: link is not ready Jun 30 17:12:17 Chicabuntu dhcpcd[28674]: wlan1: dhcpcd not running Jun 30 17:12:17 Chicabuntu dhcpcd[28674]: wlan1: exiting Jun 30 17:12:17 Chicabuntu dhcpcd[28708]: eth0: dhcpcd not running Jun 30 17:12:17 Chicabuntu dhcpcd[28708]: eth0: exiting Jun 30 17:12:17 Chicabuntu kernel: [15313.912147] tg3 0000:3f:00.0: irq 44 for MSI/MSI-X Jun 30 17:12:17 Chicabuntu kernel: [15313.944746] ADDRCONF(NETDEV_UP): eth0: link is not ready Jun 30 17:12:18 Chicabuntu kernel: [15315.592569] tg3 0000:3f:00.0: eth0: Link is up at 100 Mbps, full duplex Jun 30 17:12:18 Chicabuntu kernel: [15315.592576] tg3 0000:3f:00.0: eth0: Flow control is on for TX and on for RX Jun 30 17:12:18 Chicabuntu kernel: [15315.593399] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 30 17:12:20 Chicabuntu avahi-daemon[1041]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::21c:c4ff:fe31:1a83. Jun 30 17:12:20 Chicabuntu avahi-daemon[1041]: New relevant interface eth0.IPv6 for mDNS. Jun 30 17:12:20 Chicabuntu avahi-daemon[1041]: Registering new address record for fe80::21c:c4ff:fe31:1a83 on eth0.*. Jun 30 17:12:29 Chicabuntu kernel: [15325.680019] eth0: no IPv6 routers present If this isn't useful, I can also post the wicd log, but that is kind of long. If anyone could help me I would be eternally grateful.

    Read the article

  • Why does a function that takes IEnumerable<interface> not accept IEnumerable<class>?

    - by Matt Whitfield
    Say, for instance, I have a class:

        public class MyFoo : IMyBar { ... }

    Then I would want to use the following code:

        List<MyFoo> classList = new List<MyFoo>();
        classList.Add(new MyFoo(1));
        classList.Add(new MyFoo(2));
        classList.Add(new MyFoo(3));

        List<IMyBar> interfaceList = new List<IMyBar>(classList);

    But this produces the error:

        Argument '1': cannot convert from 'IEnumerable<MyFoo>' to 'IEnumerable<IMyBar>'

    Why is this? Since MyFoo implements IMyBar, one would expect that an IEnumerable of MyFoo could be treated as an IEnumerable of IMyBar. A mundane real-world analogy would be producing a list of cars and then being told that it isn't a list of vehicles. It's only a minor annoyance, but if anyone can shed some light on this, I would be much obliged.
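
    A sketch of two common workarounds, assuming the project targets C# 3 / .NET 3.5 or earlier; from C# 4 / .NET 4 onward IEnumerable<out T> is covariant for reference types and the original constructor call compiles as written:

        using System.Collections.Generic;
        using System.Linq;

        public interface IMyBar { }
        public class MyFoo : IMyBar { public MyFoo(int n) { } }

        class Program
        {
            static void Main()
            {
                var classList = new List<MyFoo> { new MyFoo(1), new MyFoo(2), new MyFoo(3) };

                // Option 1: Cast<T> yields the same elements typed as IMyBar.
                List<IMyBar> interfaceList = new List<IMyBar>(classList.Cast<IMyBar>());

                // Option 2: ConvertAll builds a new list, up-casting each element.
                List<IMyBar> interfaceList2 = classList.ConvertAll(x => (IMyBar)x);
            }
        }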

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server. Moving from old to new hardware or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is an Microsoft Exchange (email) server upgrade in progress. It it being handled by a different team so I’m not wholly sure on the details but we are in a situation where there are currently 2 Exchange email servers – the old one and the new one. Users mail boxes are being transferred in a planned process but as we approach the old server being turned off we have to also make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages etc. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places that the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help in my opinion. It’s a nightmare of back and forth settings changes and it stinks. I wasn’t looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL Instances for all of the accounts I have set up. So I did what any Englishmen with a shed would do, I decided to take it apart and see if I can fix it another way. I took a guess that we are going to be working in MSDB and Books OnLine was remarkably helpful and amongst a lot of information told me about a couple of procedures that can be used to interrogate DBMail settings. USE [msdb] -- It's where all the good stuff is kept GO EXEC dbo.sysmail_help_profile_sp; EXEC dbo.sysmail_help_account_sp; Both of these procedures take optional parameters with the same name – ID and Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account. Otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see (click for a bigger image), the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now it appears that the procedure we are looking at gets it’s data from the sysmail_account and sysmail_server tables, you can get the results the stored procedure provides if you run the code below. SELECT [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [last_mod_datetime] , [last_mod_user] FROM dbo.sysmail_account AS sa; SELECT [account_id] , [servertype] , [servername] , [port] , [username] , [credential_id] , [use_default_credentials] , [enable_ssl] , [flags] , [last_mod_datetime] , [last_mod_user] , [timeout] FROM dbo.sysmail_server AS sms Now, we have no real idea how these tables are linked and whether making an update direct to one or other of them is going to do what we want or whether it will entirely cripple our ability to send email from SQL Server so we wont touch those tables with any UPDATE TSQL. So, back to Books OnLine then and we find sysmail_update_account_sp. It’s exactly what we need. The examples in BOL take the form (as below) of having every parameter explicitly defined. 
Not wanting to totally obliterate the existing values by not passing values in all of the parameters I set to writing some code to gather the existing data from the tables and re-write the SMTP server name and then execute the resulting TSQL. IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL DROP TABLE #sysmailprofiles GO CREATE TABLE #sysmailprofiles ( account_id INT , [name] VARCHAR(50) , [description] VARCHAR(500) , email_address VARCHAR(500) , display_name VARCHAR(500) , replyto_address VARCHAR(500) , servertype VARCHAR(10) , servername VARCHAR(100) , port INT , username VARCHAR(100) , use_default_credentials VARCHAR(1) , ENABLE_ssl VARCHAR(1) ) INSERT [#sysmailprofiles] ( [account_id] , [name] , [description] , [email_address] , [display_name] , [replyto_address] , [servertype] , [servername] , [port] , [username] , [use_default_credentials] , [ENABLE_ssl] ) EXEC [dbo].[sysmail_help_account_sp] DECLARE @TSQL NVARCHAR(1000) SELECT TOP 1 @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = ' + CAST([s].[account_id] AS VARCHAR(20)) + ', @account_name = ''' + [s].[name] + '''' + ', @email_address = N''' + [s].[email_address] + '''' + ', @display_name = N''' + [s].[display_name] + '''' + ', @replyto_address = N''' + s.replyto_address + '''' + ', @description = N''' + [s].[description] + '''' + ', @mailserver_name = ''NEWSMTP.contoso.com''' + +', @mailserver_type = ' + [s].[servertype] + ', @port = ' + CAST([s].[port] AS VARCHAR(20)) + ', @username = ' + COALESCE([s].[username], '''''') + ', @use_default_credentials =' + CAST(s.[use_default_credentials] AS VARCHAR(1)) + ', @enable_ssl =' + [s].[ENABLE_ssl] FROM [#sysmailprofiles] AS s WHERE [s].[servername] = 'SMTP.Contoso.com' SELECT @tsql EXEC [sys].[sp_executesql] @tsql This worked well for me and testing the email function EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later in writing this blog that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books OnLine might intimate, you can do this and only the values for parameters specified get changed. If a parameter is not specified in the execution of the procedure then the values remain unchanged. This renders most of the above script unnecessary as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update. EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com' This wasn’t going to be the main reason for this post, it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL but instead we are here and (re)learning the fact that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on examples to fully understand the code you are working with. I think the author(s) of this part of Books OnLine missed an opportunity to include a third example that had fewer than all parameters specified to give a lead to this method existing.

    Read the article

  • Silverlight 4 Twitter Client - Part 2

    - by Max
    We will create a few classes now to help us with storing and retrieving user credentials, so that we don't ask for it every time we want to speak with Twitter for getting some information. Now the class to sorting out the credentials. We will have this class as a static so as to ensure one instance of the same. This class is mainly going to include a getter setter for username and password, a method to check if the user if logged in and another one to log out the user. You can get the code here. Now let us create another class to facilitate easy retrieval from twitter xml format results for any queries we make. This basically involves just creating a getter setter for all the values that you would like to retrieve from the xml document returned. You can get the format of the xml document from here. Here is what I've in my Status.cs data structure class. using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes;  namespace MaxTwitter.Classes { public class Status { public Status() {} public string ID { get; set; } public string Text { get; set; } public string Source { get; set; } public string UserID { get; set; } public string UserName { get; set; } } }  Now let us looking into implementing the Login.xaml.cs, first thing here is if the user is already logged in, we need to redirect the user to the homepage, this we can accomplish using the event OnNavigatedTo, which is fired when the user navigates to this particular Login page. Here you utilize the navigate to method of NavigationService to goto a different page if the user is already logged in. if (GlobalVariable.isLoggedin())         this.NavigationService.Navigate(new Uri("/Home", UriKind.Relative));  On the submit button click event, add the new event handler, which would save the perform the WebClient request and download the results as xml string. WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);  The following line allows us to create a web client to create a web request to a url and get back the string response. Something that came as a great news with SL 4 for many SL developers.   WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential(TwitterUsername.Text, TwitterPassword.Password);  Here in the following line, we add an event that has to be fired once the xml string has been downloaded. Here you can do all your XLINQ stuff.   myService.DownloadStringCompleted += new DownloadStringCompletedEventHandler(TimelineRequestCompleted);   myService.DownloadStringAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));  Now let us look at implementing the TimelineRequestCompleted event. Here we are not actually using the string response we get from twitter, I just use it to ensure the user is authenticated successfully and then save the credentials and redirect to home page. public void TimelineRequestCompleted(object sender, System.Net.DownloadStringCompletedEventArgs e) { if (e.Error != null) { MessageBox.Show("This application must be installed first"); }  If there is no error, we can save the credentials to reuse it later.   

        else
        {
            GlobalVariable.saveCredentials(TwitterUsername.Text, TwitterPassword.Password);
            this.NavigationService.Navigate(new System.Uri("/Home", UriKind.Relative));
        }
    }

    OK, so now the login page is done. Now the main thing: running this application. The credentials code only works if the application is run out of the browser, so we need to fiddle with a few Silverlight project settings to enable this. Here is how: right-click on the Silverlight project > Properties, then check "Enable running application out of browser". Then click on Out-of-Browser settings and check the "Require elevated trust…" option. That's it. Now press F5 to run the application and fix any errors. Once the application opens up in the browser with the login page, right-click and choose Install. Once you install, it will automatically run and you can log in and see that you are redirected to the Home page. Here are the files that are related to this post. We will look at implementing the Home page, etc. in the next post. Please post your comments and feedback; it will greatly help me improve my posts! Thanks for your time, catch you soon.
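
    Not part of the post's downloadable code: a minimal illustrative sketch of what a credentials helper like the GlobalVariable class used above might look like, based on Silverlight isolated storage. The member names follow the calls made in the post (saveCredentials, isLoggedin); everything else, including storing the password unencrypted, is an assumption you would want to harden in a real client.

        using System.IO.IsolatedStorage;

        public static class GlobalVariable
        {
            private static readonly IsolatedStorageSettings Settings =
                IsolatedStorageSettings.ApplicationSettings;

            public static void saveCredentials(string username, string password)
            {
                // Persist the values so the user is not prompted on every start.
                Settings["TwitterUsername"] = username;
                Settings["TwitterPassword"] = password;
                Settings.Save();
            }

            public static bool isLoggedin()
            {
                return Settings.Contains("TwitterUsername")
                    && Settings.Contains("TwitterPassword");
            }

            public static void logout()
            {
                Settings.Remove("TwitterUsername");
                Settings.Remove("TwitterPassword");
                Settings.Save();
            }
        }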

    Read the article

  • Team Foundation Server (TFS) Team Build Custom Activity C# Code for Assembly Stamping

    - by Bob Hardister
    For the full context and guidance on how to develop and implement a custom activity in Team Build see the Microsoft Visual Studio Rangers Team Foundation Build Customization Guide V.1 at http://vsarbuildguide.codeplex.com/ There are many ways to stamp or set the version number of your assemblies. This approach is based on the build number.   namespace CustomActivities { using System; using System.Activities; using System.IO; using System.Text.RegularExpressions; using Microsoft.TeamFoundation.Build.Client; [BuildActivity(HostEnvironmentOption.Agent)] public sealed class VersionAssemblies : CodeActivity { /// <summary> /// AssemblyInfoFileMask /// </summary> [RequiredArgument] public InArgument<string> AssemblyInfoFileMask { get; set; } /// <summary> /// SourcesDirectory /// </summary> [RequiredArgument] public InArgument<string> SourcesDirectory { get; set; } /// <summary> /// BuildNumber /// </summary> [RequiredArgument] public InArgument<string> BuildNumber { get; set; } /// <summary> /// BuildDirectory /// </summary> [RequiredArgument] public InArgument<string> BuildDirectory { get; set; } /// <summary> /// Publishes field values to the build report /// </summary> public OutArgument<string> DiagnosticTextOut { get; set; } // If your activity returns a value, derive from CodeActivity<TResult> and return the value from the Execute method. protected override void Execute(CodeActivityContext context) { // Obtain the runtime value of the input arguments string sourcesDirectory = context.GetValue(this.SourcesDirectory); string assemblyInfoFileMask = context.GetValue(this.AssemblyInfoFileMask); string buildNumber = context.GetValue(this.BuildNumber); string buildDirectory = context.GetValue(this.BuildDirectory); // ** Determine the version number values ** // Note: the format used here is: major.secondary.maintenance.build // ----------------------------------------------------------------- // Obtain the build definition name int nameStart = buildDirectory.LastIndexOf(@"\") + 1; string buildDefinitionName = buildDirectory.Substring(nameStart); // Set the primary.secondary.maintenance values // NOTE: these are hard coded in this example, but could be sourced from a file or parsed from a build definition name that includes them string p = "1"; string s = "5"; string m = "2"; // Initialize the build number string b; string na = "0"; // used for Assembly and Product Version instead of build number (see versioning best practices: **TBD reference) // Set qualifying product version information string productInfo = "RC2"; // Obtain the build increment number from the build number // NOTE: this code assumes the default build definition name format int buildIncrementNumberDelimterIndex = buildNumber.LastIndexOf("."); b = buildNumber.Substring(buildIncrementNumberDelimterIndex + 1); // Convert version to integer values int pVer = Convert.ToInt16(p); int sVer = Convert.ToInt16(s); int mVer = Convert.ToInt16(m); int bNum = Convert.ToInt16(b); int naNum = Convert.ToInt16(na); // ** Get all AssemblyInfo files and stamp them ** // Note: the mapping of AssemblyInfo.cs attributes to assembly display properties are as follows: // - AssemblyVersion = Assembly Version - used for the assembly version (does not change unless p, s or m values are changed) // - AssemblyFileVersion = File Version - used for the file version (changes with every build) // - AssemblyInformationalVersion = Product Version - used for the product version (can include additional version information) // 
------------------------------------------------------------------------------------------------------------------------------------------------ Version assemblyVersion = new Version(pVer, sVer, mVer, naNum); Version newAssemblyFileVersion = new Version(pVer, sVer, mVer, bNum); Version productVersion = new Version(pVer, sVer, mVer); // Setup diagnostic fields int numberOfReplacements = 0; string addedAssemblyInformationalAttribute = "No"; // Enumerate over the assemblyInfo version attributes foreach (string attribute in new[] { "AssemblyVersion", "AssemblyFileVersion", "AssemblyInformationalVersion" }) { // Define the regular expression to find in each and every Assemblyinfo.cs files (which is for example 'AssemblyVersion("1.0.0.0")' ) Regex regex = new Regex(attribute + @"\(""\d+\.\d+\.\d+\.\d+""\)"); foreach (string file in Directory.EnumerateFiles(sourcesDirectory, assemblyInfoFileMask, SearchOption.AllDirectories)) { string text = File.ReadAllText(file); // Read the text from the AssemblyInfo file // If the AsemblyInformationalVersion attribute is not in the file, add it as the last line of the file // Note: by default the AssemblyInfo.cs files will not contain the AssemblyInformationalVersion attribute if (!text.Contains("[assembly: AssemblyInformationalVersion(\"")) { string lastLine = Environment.NewLine + "[assembly: AssemblyInformationalVersion(\"1.0.0.0\")]"; text = text + lastLine; addedAssemblyInformationalAttribute = "Yes"; } // Search for the expression Match match = regex.Match(text); if (match.Success) { // Get file attributes FileAttributes fileAttributes = File.GetAttributes(file); // Set file to read only File.SetAttributes(file, fileAttributes & ~FileAttributes.ReadOnly); // Insert AssemblyInformationalVersion attribute into the file text if does not already exist string newText = string.Empty; if (attribute == "AssemblyVersion") { newText = regex.Replace(text, attribute + "(\"" + assemblyVersion + "\")"); numberOfReplacements++; } if (attribute == "AssemblyFileVersion") { newText = regex.Replace(text, attribute + "(\"" + newAssemblyFileVersion + "\")"); numberOfReplacements++; } if (attribute == "AssemblyInformationalVersion") { newText = regex.Replace(text, attribute + "(\"" + productVersion + " " + productInfo + "\")"); numberOfReplacements++; } // Publish diagnostics to build report (diagnostic verbosity only) context.SetValue(this.DiagnosticTextOut, " Added AssemblyInformational Attribute: " + addedAssemblyInformationalAttribute + " Number of replacements: " + numberOfReplacements + " Build number: " + buildNumber + " Build directory: " + buildDirectory + " Build definition name: " + buildDefinitionName + " Assembly version: " + assemblyVersion + " New file version: " + newAssemblyFileVersion + " Product version: " + productVersion + " AssemblyInfo.cs Text Last Stamped: " + newText); // Write the new text in the AssemblyInfo file File.WriteAllText(file, newText); // restore the file's original attributes File.SetAttributes(file, fileAttributes); } } } } } }
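    The heart of the activity can get lost in a listing that long: it takes the last segment of the Team Build number as the build component of the file version, then rewrites the version attributes in every matching AssemblyInfo.cs with a regular expression. Below is a minimal standalone sketch of just that step; it is not the workflow activity itself, and the file path, build number value and hard-coded 1.5.2 values are assumptions carried over from the example.

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    class StampSketch
    {
        static void Main()
        {
            string buildNumber = "MyBuildDefinition_20120401.7";          // assumed default build number format
            string file = @"C:\src\MyProject\Properties\AssemblyInfo.cs"; // hypothetical path

            // The last segment of the build number becomes the build component of the file version.
            int b = int.Parse(buildNumber.Substring(buildNumber.LastIndexOf(".") + 1));
            Version fileVersion = new Version(1, 5, 2, b);                // 1.5.2 hard-coded, as in the activity

            // Same pattern the activity uses, e.g. AssemblyFileVersion("1.0.0.0")
            Regex regex = new Regex(@"AssemblyFileVersion\(""\d+\.\d+\.\d+\.\d+""\)");
            string text = File.ReadAllText(file);
            text = regex.Replace(text, "AssemblyFileVersion(\"" + fileVersion + "\")");
            File.WriteAllText(file, text);
        }
    }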

    Read the article

  • How does this C# asp.net random password code work?

    - by quakkels
    Hello all, I'm new to .NET and C# and I'm trying to figure out how this code works: public static string CreateRandomPassword(int PasswordLength) { String _allowedChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ23456789"; Byte[] randomBytes = new Byte[PasswordLength]; RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider(); rng.GetBytes(randomBytes); char[] chars = new char[PasswordLength]; int allowedCharCount = _allowedChars.Length; for(int i = 0;i<PasswordLength;i++) { /// /// I don't understand how this line works: /// chars[i] = _allowedChars[(int)randomBytes[i] % allowedCharCount]; } return new string(chars); } I think I've got a pretty good handle on most of this. I haven't been able to understand the following line: chars[i] = _allowedChars[(int)randomBytes[i] % allowedCharCount]; I understand that the code generates random binary numbers and uses those random numbers in the for loop to select a character from the _allowedChars string. What I don't get is why this code uses the modulus operator (%) to get the _allowedChars index value. Thanks for any help.
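    The modulus is a range-folding trick: each random byte is a value from 0 to 255, which is larger than the number of allowed characters, so % allowedCharCount wraps it into a valid index into _allowedChars. A small sketch of that mapping, with one hard-coded byte standing in for the RNG output:

    using System;

    class ModuloSketch
    {
        static void Main()
        {
            string allowedChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNOPQRSTUVWXYZ23456789";
            int allowedCharCount = allowedChars.Length;   // 58 characters

            byte randomByte = 200;                        // pretend this came from the RNG
            int index = randomByte % allowedCharCount;    // 200 % 58 = 26, always within range
            Console.WriteLine(allowedChars[index]);       // prints 'B'
        }
    }

    One caveat worth knowing: because 256 is not an exact multiple of the alphabet length, the wrap-around makes the characters near the start of _allowedChars slightly more likely than the rest, which is usually tolerable for throwaway passwords but is a real bias.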

    Read the article

  • The Madness of March

    - by Kristin Rose
    From “Linsanity” to “LOB City”, there is no doubt that basketball dominates the month of March. As many are aware, March Madness is well underway and continues to be a time when college basketball teams get together to bring their A-game to the court. Here at Oracle we also like to bring our A-game, and that includes some new players and talent from our newly acquired companies. Each new acquisition expands Oracle’s solution portfolio, fills customer requirements, and ultimately brings greater opportunities for partners. OPN follows a consistent approach to delivering key information about these acquisitions to you in a timely manner.
We do this so partners can get educated, get trained and gain access to demand gen and sales tools. Through this slam dunk of a process we provide (using Pillar Data Systems as an example): A welcome page where partners can download information and learn how to sell and maximize sales returns. A Discovery section where partners can listen to key Oracle Executives speak about the many benefits this new solution brings, as well review a FAQ sheet. A Prepare section where partners can learn about the product strategies and the different OPN Knowledge Zones that have become available. A Sell and Deliver section that partners can leverage when discussing product positioning and functionality, as well as gain access to relevant deliverables. Just as any competitive team strives to be #1, Oracle also wants to stay best-in-class which is why we have recently joined forces with some ‘baller’ companies such as RightNow, Endeca and Pillar Axiom to secure our place in the industry bracket. By running our 3-2 Oracle play and bringing in our newly acquired products, we are able to deliver a solid, expanded solution to our partners. These and many other MVP companies have helped Oracle broaden its offerings and score big. Watch the half time show below to find out what Judson thinks about Oracle’s current offerings: Mergers and acquisitions are a strategic part of how we currently go to market. If you haven’t done so already, dribble down or post up and visit the Acquisition Catalog to learn more about Oracle’s acquired products and the unique benefits they can bring to your own court. Or click here to learn about the ways of monetizing opportunities through Oracle acquisitions. Until Next Time, It’s Game Time, The OPN Communications Team

    Read the article

  • How to force javax xslt transformer to encode entities in utf-8?

    - by calavera.info
    I'm working on a filter that should transform its output with a stylesheet. The important sections of the code look like this: PrintWriter out = response.getWriter(); ... StringReader sr = new StringReader(content); Source xmlSource = new StreamSource(sr, requestSystemId); transformer.setOutputProperty(OutputKeys.ENCODING, "UTF-8"); transformer.setParameter("encoding", "UTF-8"); //same result when using ByteArrayOutputStream xo = new java.io.ByteArrayOutputStream(); StringWriter xo = new StringWriter(); StreamResult result = new StreamResult(xo); transformer.transform(xmlSource, result); out.write(xo.toString()); The problem is that national characters are encoded as HTML entities instead of being written as UTF-8 characters. Is there any way to force the transformer to use UTF-8 instead of entities?

    Read the article

  • reading Excel file in vb.net

    - by Mark
    Can anyone help me figure out how to detect the EOF of an Excel sheet using VB.NET? I have this code, but it crashes after I delete the rows from row 6 downward. My problem is that my code still reads the null values of the rows I deleted in Excel. This is my code: Dim xlsConn As New OleDbConnection Dim xlsAdapter As New OleDbDataAdapter Dim xlsDataSet As New DataSet xlsConn = New OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & pathName & " ; Extended Properties=Excel 8.0") strSQL = "SELECT * FROM [Sheet1$]" xlsAdapter.SelectCommand = New OleDbCommand(strSQL, xlsConn) xlsDataSet.Clear() xlsAdapter.Fill(xlsDataSet) ListView1.Items.Clear() Dim listItem As ListViewItem For ctr As Integer = 0 To xlsDataSet.Tables(0).Rows.Count - 1 listItem = ListView1.Items.Add(xlsDataSet.Tables(0).Rows(ctr).Item("EmpNo").ToString) listItem.SubItems.Add(xlsDataSet.Tables(0).Rows(ctr).Item("EmpName").ToString) Next Can anyone help me fix these bugs?
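    Below is a rough sketch (in C#, the language most of this page uses) of one common workaround: after filling the table, treat the first row whose key column is null or empty as the end of the data, because the Jet/OLE DB provider often keeps reporting deleted Excel rows as rows of nulls. The column names mirror the question; the file path and connection string are assumptions.

    using System;
    using System.Data;
    using System.Data.OleDb;

    class ExcelRowsSketch
    {
        static void Main()
        {
            string path = @"C:\data\employees.xls";   // hypothetical workbook
            string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path +
                             ";Extended Properties=Excel 8.0";

            DataTable table = new DataTable();
            using (OleDbConnection conn = new OleDbConnection(connStr))
            using (OleDbDataAdapter adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
            {
                adapter.Fill(table);
            }

            foreach (DataRow row in table.Rows)
            {
                // Treat the first row with no EmpNo as the effective end of the sheet's data.
                if (row["EmpNo"] == DBNull.Value || row["EmpNo"].ToString().Trim().Length == 0)
                    break;

                Console.WriteLine("{0} - {1}", row["EmpNo"], row["EmpName"]);
            }
        }
    }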

    Read the article

  • Can't create 2nd textbox

    - by okinaw55
    I'm having problems with this code and can't figure out why. It works fine the first time through, but crashes with a 'Parameter is not valid' error the second time on this line: Dim tbx As TextBox = New Windows.Forms.TextBox Any help is appreciated. Dim tbx As TextBox = New Windows.Forms.TextBox tbx.Name = tbxName tbx.Size = New System.Drawing.Size(55, 12) tbx.BorderStyle = BorderStyle.None tbx.TextAlign = HorizontalAlignment.Center Using f As Font = tbx.Font tbx.Font = New Font(f.FontFamily, 8, FontStyle.Bold) End Using tbx.Location = New System.Drawing.Point(xCords, 44) Select Case tbx.Name Case "tbxBulk01" : tbx.Text = Bulk01Label Case "tbxBulk02" : tbx.Text = Bulk02Label End Select Me.Controls.Add(tbx) The stack trace is as follows: at System.Drawing.Font.GetHeight(Graphics graphics) at System.Drawing.Font.GetHeight() at System.Drawing.Font.get_Height() at System.Windows.Forms.Control.get_FontHeight() at System.Windows.Forms.TextBoxBase.get_PreferredHeight() at System.Windows.Forms.TextBoxBase.get_DefaultSize() at System.Windows.Forms.Control..ctor(Boolean autoInstallSyncContext) at System.Windows.Forms.TextBoxBase..ctor() at System.Windows.Forms.TextBox..ctor()
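    A likely culprit, judging from the stack trace: the Using block disposes tbx.Font, but at that point tbx.Font is still the inherited ambient font shared with the form, so the next New TextBox measures itself against a disposed Font and Font.GetHeight throws 'Parameter is not valid'. A hedged sketch of the safer pattern in C#, deriving the bold font without disposing the one the control inherited (the method and parameter names are made up):

    using System.Drawing;
    using System.Windows.Forms;

    static class TextBoxFactory
    {
        public static TextBox CreateBulkTextBox(string name, int x)
        {
            TextBox tbx = new TextBox();
            tbx.Name = name;
            tbx.Size = new Size(55, 12);
            tbx.BorderStyle = BorderStyle.None;
            tbx.TextAlign = HorizontalAlignment.Center;

            // Do NOT dispose tbx.Font here; it is the shared ambient font.
            // Just derive the bold font from its family and assign it.
            tbx.Font = new Font(tbx.Font.FontFamily, 8, FontStyle.Bold);

            tbx.Location = new Point(x, 44);
            return tbx;
        }
    }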

    Read the article

  • .NET Security Part 4

    - by Simon Cooper
    Finally, in this series, I am going to cover some of the security issues that can trip you up when using sandboxed appdomains. DISCLAIMER: I am not a security expert, and this is by no means an exhaustive list. If you actually are writing security-critical code, then get a proper security audit of your code by a professional. The examples below are just illustrations of the sort of things that can go wrong. 1. AppDomainSetup.ApplicationBase The most obvious one is the issue covered in the MSDN documentation on creating a sandbox, in step 3 – the sandboxed appdomain has the same ApplicationBase as the controlling appdomain. So let’s explore what happens when they are the same, and an exception is thrown. In the sandboxed assembly, Sandboxed.dll (IPlugin is an interface in a partially-trusted assembly, with a single MethodToDoThings on it): public class UntrustedPlugin : MarshalByRefObject, IPlugin { // implements IPlugin.MethodToDoThings() public void MethodToDoThings() { throw new EvilException(); } } [Serializable] internal class EvilException : Exception { public override string ToString() { // show we have read access to C:\Windows // read the first 5 directories Console.WriteLine("Pwned! Mwuahahah!"); foreach (var d in Directory.EnumerateDirectories(@"C:\Windows").Take(5)) { Console.WriteLine(d.FullName); } return base.ToString(); } } And in the controlling assembly: // what can possibly go wrong? AppDomainSetup appDomainSetup = new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase } // only grant permissions to execute // and to read the application base, nothing else PermissionSet restrictedPerms = new PermissionSet(PermissionState.None); restrictedPerms.AddPermission( new SecurityPermission(SecurityPermissionFlag.Execution)); restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.Read, appDomainSetup.ApplicationBase); restrictedPerms.AddPermission( new FileIOPermission(FileIOPermissionAccess.pathDiscovery, appDomainSetup.ApplicationBase); // create the sandbox AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, appDomainSetup, restrictedPerms); // execute UntrustedPlugin in the sandbox // don't crash the application if the sandbox throws an exception IPlugin o = (IPlugin)sandbox.CreateInstanceFromAndUnwrap("Sandboxed.dll", "UntrustedPlugin"); try { o.MethodToDoThings() } catch (Exception e) { Console.WriteLine(e.ToString()); } And the result? Oops. We’ve allowed a class that should be sandboxed to execute code with fully-trusted permissions! How did this happen? Well, the key is the exact meaning of the ApplicationBase property: The application base directory is where the assembly manager begins probing for assemblies. When EvilException is thrown, it propagates from the sandboxed appdomain into the controlling assembly’s appdomain (as it’s marked as Serializable). When the exception is deserialized, the CLR finds and loads the sandboxed dll into the fully-trusted appdomain. Since the controlling appdomain’s ApplicationBase directory contains the sandboxed assembly, the CLR finds and loads the assembly into a full-trust appdomain, and the evil code is executed. So the problem isn’t exactly that the sandboxed appdomain’s ApplicationBase is the same as the controlling appdomain’s, it’s that the sandboxed dll was in such a place that the controlling appdomain could find it as part of the standard assembly resolution mechanism. 
The sandbox then forced the assembly to load in the controlling appdomain by throwing a serializable exception that propagated outside the sandbox. The easiest fix for this is to keep the sandbox ApplicationBase well away from the ApplicationBase of the controlling appdomain, and don’t allow the sandbox permissions to access the controlling appdomain’s ApplicationBase directory. If you do this, then the sandboxed assembly can’t be accidentally loaded into the fully-trusted appdomain, and the code can’t be executed. If the plugin does try to induce the controlling appdomain to load an assembly it shouldn’t, a SerializationException will be thrown when it tries to load the assembly to deserialize the exception, and no damage will be done. 2. Loading the sandboxed dll into the application appdomain As an extension of the previous point, you shouldn’t directly reference types or methods in the sandboxed dll from your application code. That loads the assembly into the fully-trusted appdomain, and from there code in the assembly could be executed. Instead, pull out methods you want the sandboxed dll to have into an interface or class in a partially-trusted assembly you control, and execute methods via that instead (similar to the example above with the IPlugin interface). If you need to have a look at the assembly before executing it in the sandbox, either examine the assembly using reflection from within the sandbox, or load the assembly into the Reflection-only context in the application’s appdomain. The code in assemblies in the reflection-only context can’t be executed, it can only be reflected upon, thus protecting your appdomain from malicious code. 3. Incorrectly asserting permissions You should only assert permissions when you are absolutely sure they’re safe. For example, this method allows a caller read-access to any file they call this method with, including your documents, any network shares, the C:\Windows directory, etc: [SecuritySafeCritical] public static string GetFileText(string filePath) { new FileIOPermission(FileIOPermissionAccess.Read, filePath).Assert(); return File.ReadAllText(filePath); } Be careful when asserting permissions, and ensure you’re not providing a loophole sandboxed dlls can use to gain access to things they shouldn’t be able to. Conclusion Hopefully, that’s given you an idea of some of the ways it’s possible to get past the .NET security system. As I said before, this post is not exhaustive, and you certainly shouldn’t base any security-critical applications on the contents of this blog post. What this series should help with is understanding the possibilities of the security system, and what all the security attributes and classes mean and what they are used for, if you were to use the security system in the future.
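    Point 2 mentions the reflection-only context without showing it; as a rough illustration (the plugin path is made up, and real plugins usually also need a ReflectionOnlyAssemblyResolve handler so their referenced assemblies can be found):

    using System;
    using System.Reflection;

    class ReflectionOnlySketch
    {
        static void Main()
        {
            // Code in this assembly cannot be executed, only inspected.
            Assembly inspectOnly = Assembly.ReflectionOnlyLoadFrom(@"C:\plugins\Sandboxed.dll");

            foreach (Type t in inspectOnly.GetTypes())
            {
                // Note: GetTypes can throw if referenced assemblies are not resolved;
                // hook AppDomain.CurrentDomain.ReflectionOnlyAssemblyResolve for those.
                Console.WriteLine(t.FullName);
            }
        }
    }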

    Read the article

  • .NET template class instance - passing a variable data type

    - by FerretallicA
    As the title suggests, I'm trying to pass a variable data type to a template class. Something like this: frmExample = New LookupForm(Of Models.MyClass) 'Works fine Dim SelectedType As Type = InstanceOfMyClass.GetType() 'Works fine frmExample = New LookupForm(Of SelectedType) 'Ba-bow! frmExample = New LookupForm(Of InstanceOfMyClass.GetType()) 'Ba-bow! LookupForm<Models.MyClass> frmExample; Type SelectedType = InstanceOfMyClass.GetType(); frmExample = new LookupForm<SelectedType.GetType()>(); //Ba-bow frmExample = new LookupForm<(Type)SelectedType>(); //Ba-bow I'm assuming it's something to do with the template being processed at compile time, but even if I'm off the mark there, it wouldn't solve my problem anyway. I can't find any relevant information on using Reflection to instantiate template classes either. (How) can I create an instance of a dynamically typed repository at runtime?
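    The generic argument has to be known at compile time, which is why Of SelectedType fails; the usual workaround is to close the generic type via reflection at runtime. A C# sketch, with LookupForm reduced to a stand-in class and assuming it has a parameterless constructor:

    using System;

    class LookupForm<T>
    {
        // stand-in for the real form class
    }

    class Program
    {
        static void Main()
        {
            object instanceOfMyClass = "example";             // whatever instance you already have
            Type selectedType = instanceOfMyClass.GetType();  // only known at runtime

            Type closedType = typeof(LookupForm<>).MakeGenericType(selectedType);
            object frmExample = Activator.CreateInstance(closedType);

            Console.WriteLine(frmExample.GetType());          // LookupForm`1[System.String]
        }
    }

    The catch is that the result comes back typed as Object, so in practice LookupForm(Of T) needs a non-generic base class or interface through which you call its members.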

    Read the article

  • No persister for: <ClassName> issue with Fluent NHibernate

    - by Amit
    I have following code: //AutoMapConfig.cs using System; using FluentNHibernate.Automapping; namespace SimpleFNH.AutoMap { public class AutoMapConfig : DefaultAutomappingConfiguration { public override bool ShouldMap(Type type) { return type.Namespace == "Examples.FirstAutomappedProject.Entities"; } } } //CascadeConvention.cs using FluentNHibernate.Conventions; using FluentNHibernate.Conventions.Instances; namespace SimpleFNH.AutoMap { public class CascadeConvention : IReferenceConvention, IHasManyConvention, IHasManyToManyConvention { public void Apply(IManyToOneInstance instance) { instance.Cascade.All(); } public void Apply(IOneToManyCollectionInstance instance) { instance.Cascade.All(); } public void Apply(IManyToManyCollectionInstance instance) { instance.Cascade.All(); } } } //Item.cs namespace SimpleFNH.Entities { public class Item { public virtual long ID { get; set; } public virtual string ItemName { get; set; } public virtual string Description { get; set; } public virtual OrderItem OrderItem { get; set; } } } //OrderItem.cs namespace SimpleFNH.Entities { public class OrderItem { public virtual long ID { get; set; } public virtual int Quantity { get; set; } public virtual Item Item { get; set; } public virtual ProductOrder ProductOrder { get; set; } public virtual void AddItem(Item item) { item.OrderItem = this; } } } using System; using System.Collections.Generic; //ProductOrder.cs namespace SimpleFNH.Entities { public class ProductOrder { public virtual long ID { get; set; } public virtual DateTime OrderDate { get; set; } public virtual string CustomerName { get; set; } public virtual IList<OrderItem> OrderItems { get; set; } public ProductOrder() { OrderItems = new List<OrderItem>(); } public virtual void AddOrderItems(params OrderItem[] items) { foreach (var item in items) { OrderItems.Add(item); item.ProductOrder = this; } } } } //NHibernateRepo.cs using FluentNHibernate.Cfg; using FluentNHibernate.Cfg.Db; using NHibernate; using NHibernate.Criterion; using NHibernate.Tool.hbm2ddl; namespace SimpleFNH.Repository { public class NHibernateRepo { private static ISessionFactory _sessionFactory; private static ISessionFactory SessionFactory { get { if (_sessionFactory == null) InitializeSessionFactory(); return _sessionFactory; } } private static void InitializeSessionFactory() { _sessionFactory = Fluently.Configure().Database( MsSqlConfiguration.MsSql2008.ConnectionString( @"server=Amit-PC\SQLEXPRESS;database=SimpleFNH;Trusted_Connection=True;").ShowSql()). 
Mappings(m => m.FluentMappings.AddFromAssemblyOf<Order>()).ExposeConfiguration( cfg => new SchemaExport(cfg).Create(true, true)).BuildSessionFactory(); } public static ISession OpenSession() { return SessionFactory.OpenSession(); } } } //Program.cs using System; using System.Collections.Generic; using System.Linq; using SimpleFNH.Entities; using SimpleFNH.Repository; namespace SimpleFNH { class Program { static void Main(string[] args) { using (var session = NHibernateRepo.OpenSession()) { using (var transaction = session.BeginTransaction()) { var item1 = new Item { ItemName = "item 1", Description = "test 1" }; var item2 = new Item { ItemName = "item 2", Description = "test 2" }; var item3 = new Item { ItemName = "item 3", Description = "test 3" }; var orderItem1 = new OrderItem { Item = item1, Quantity = 2 }; var orderItem2 = new OrderItem { Item = item2, Quantity = 4 }; var orderItem3 = new OrderItem { Item = item3, Quantity = 5 }; var productOrder = new ProductOrder { CustomerName = "Amit", OrderDate = DateTime.Now, OrderItems = new List<OrderItem> { orderItem1, orderItem2, orderItem3 } }; productOrder.AddOrderItems(orderItem1, orderItem2, orderItem3); session.Save(productOrder); transaction.Commit(); } } using (var session = NHibernateRepo.OpenSession()) { // retrieve all orders and display them using (session.BeginTransaction()) { var orders = session.CreateCriteria(typeof(ProductOrder)) .List<ProductOrder>(); foreach (var item in orders) { Console.WriteLine(item.OrderItems.First().Quantity); } } } } } } I tried many variations to get it working, but I get an error saying No persister for: SimpleFNH.Entities.ProductOrder. Can someone help me get it working? I wanted to create a simple program which will set a pattern for my bigger project, but it is taking quite a lot more time than expected. It would be really helpful if you could explain in simple terms any template/pattern that I can use to get Fluent NHibernate working. The above code uses auto mapping, which I tried after first trying fluent mapping.
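    One thing that stands out in the listing: the entities are set up for automapping (AutoMapConfig, CascadeConvention), but the session factory only registers FluentMappings.AddFromAssemblyOf, and ShouldMap checks for a namespace (Examples.FirstAutomappedProject.Entities) that none of the classes actually use, so no mapping is ever produced for ProductOrder, which is exactly what "No persister" means. Below is a hedged sketch of a configuration that registers the automappings instead, assuming ShouldMap is changed to return type.Namespace == "SimpleFNH.Entities" and that the question's Fluent NHibernate version supports the AutoMap.AssemblyOf overload shown.

    using FluentNHibernate.Automapping;
    using FluentNHibernate.Cfg;
    using FluentNHibernate.Cfg.Db;
    using NHibernate;
    using NHibernate.Tool.hbm2ddl;
    using SimpleFNH.AutoMap;
    using SimpleFNH.Entities;

    public static class MappingSketch
    {
        public static ISessionFactory Build()
        {
            // Registers the automapping model (AutoMappings) instead of FluentMappings.
            return Fluently.Configure()
                .Database(MsSqlConfiguration.MsSql2008.ConnectionString(
                    @"server=Amit-PC\SQLEXPRESS;database=SimpleFNH;Trusted_Connection=True;").ShowSql())
                .Mappings(m => m.AutoMappings.Add(
                    AutoMap.AssemblyOf<ProductOrder>(new AutoMapConfig())
                           .Conventions.Add<CascadeConvention>()))
                .ExposeConfiguration(cfg => new SchemaExport(cfg).Create(true, true))
                .BuildSessionFactory();
        }
    }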

    Read the article
