Search Results

Search found 12585 results on 504 pages for 'vs 2013 preview'.


  • Best of both worlds: browser and desktop game?

    - by Ricket
    When considering a platform for a game, I've decided on multi-platform (Win/Lin/Mac) but can't make up my mind as far as browser vs. desktop. As I'm not all too far in development, and now having second thoughts, I'd like your opinion!

    Browser-based games using Java applets:
    - market penetration is reasonably high (for version 6, it's somewhere around 60% I believe?)
    - using JOGL, 3D performance/quality is decent; certainly good enough to render the crappy 3D graphics that I make
    - there's the (small?) possibility of porting something to Android
    - great for an audience of gamers who switch computers often; can sit down at any computer, load a webpage and play it
    - also great for casual gamers or less knowledgeable gamers who are quite happy with playing games in a browser but don't want to install more things to their computer
    - written in a high-level language which I am more familiar with than C++ - but at the same time, I would like to improve my skills with C++ as it is probably where I am headed in the game industry once I get out of school...
    - easier update process: reload the page.

    Desktop games using good ol' C++ and OpenGL:
    - 100% market penetration, assuming complete cross-platform; however, that number reduces when you consider how many people will go through downloading and installing an executable compared to just browsing to a webpage and hitting "yes" to a security warning
    - more trouble to maintain the cross-platform; but again, for learning purposes I would embrace the challenge and the knowledge I would gain
    - better performance all around
    - true full screen, whereas browser games often struggle with smooth full screen graphics (especially on Linux, in my experience)
    - can take advantage of distribution platforms such as Steam
    - more likely to be considered a "real" game, whereas browser and Java games are often dismissed as not being real games and therefore not played by "hardcore gamers"
    - installer can be large; don't have to worry so much about download times.

    Is there a way to have the best of both worlds? I love Java applets, but I also really like the reasons to write a desktop game. I don't want to constantly port everything between a Java applet project and a C++ project; that would be twice the work! Unity chose to write their own web player plugin. I don't like this, because I am one of the people that will not install their web player for anything, and I don't see myself being able to convince my audience to install a browser plugin. What are my options? Are there other examples out there besides Unity, of games that have browser and desktop versions? Did I leave out anything in the pro/con lists above?

    Read the article

  • Still failing a function, not sure why...ideas on test cases to run?

    - by igor
    I've been trying to get this Sudoku game working, and I am still failing some of the individual functions. Altogether the game works, but when I run it through an "autograder", some test cases fail. Currently I am stuck on the following function, placeValue, failing. I do have the output that I get vs. what the correct one should be, but I'm confused about what is going on. EDIT: I do not know what input/calls they make to the function. What happens is that "Invalid row" is output after every placeValue call, and I can't trace why. Here is the output (mine + correct one) if it's at all helpful: http://pastebin.com/Wd3P3nDA Here is placeValue, followed by getCoords, which placeValue calls:

        void placeValue(Square board[BOARD_SIZE][BOARD_SIZE]) {
            int x, y, value;
            if (getCoords(x, y)) {
                cin >> value;
                if (board[x][y].permanent) {
                    cout << endl << "That location cannot be changed";
                } else if (!(value >= 1 && value <= 9)) {
                    cout << "Invalid number" << endl;
                    clearInput();
                } else if (validMove(board, x, y, value)) {
                    board[x][y].number = value;
                }
            }
        }

        bool getCoords(int & x, int & y) {
            char row;
            y = 0;
            cin >> row >> y;
            x = static_cast<int>(toupper(row));
            if (isalpha(row) && (x >= 'A' && x <= 'I') && y >= 1 && y <= 9) {
                x = x - 'A'; // converts x from a letter to corresponding index in matrix
                y = y - 1;   // converts y to corresponding index in matrix
                return (true);
            } else if (!(x >= 'A' && x <= 'I')) {
                cout << "Invalid row" << endl;
                clearInput();
                return false;
            } else {
                cout << "Invalid column" << endl;
                clearInput();
                return false;
            }
        }

    Read the article

  • Performance issues with repeatable loops as control part

    - by djerry
    Hey guys, In my application, i need to show made calls to the user. The user can arrange some filters, according to what they want to see. The problem is that i find it quite hard to filter the calls without losing performance. This is what i am using now : private void ProcessFilterChoice() { _filteredCalls = ServiceConnector.ServiceConnector.SingletonServiceConnector.Proxy.GetAllCalls().ToList(); if (cboOutgoingIncoming.SelectedIndex > -1) GetFilterPartOutgoingIncoming(); if (cboInternExtern.SelectedIndex > -1) GetFilterPartInternExtern(); if (cboDateFilter.SelectedIndex > -1) GetFilteredCallsByDate(); wbPdf.Source = null; btnPrint.Content = "Pdf preview"; } private void GetFilterPartOutgoingIncoming() { if (cboOutgoingIncoming.SelectedItem.ToString().Equals("Outgoing")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Caller.E164.Length > 4 || _filteredCalls[i].Caller.E164.Equals("0")) _filteredCalls.RemoveAt(i); } else if (cboOutgoingIncoming.SelectedItem.ToString().Equals("Incoming")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Called.E164.Length > 4 || _filteredCalls[i].Called.E164.Equals("0")) _filteredCalls.RemoveAt(i); } } private void GetFilterPartInternExtern() { if (cboInternExtern.SelectedItem.ToString().Equals("Intern")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Called.E164.Length > 4 || _filteredCalls[i].Caller.E164.Length > 4 || _filteredCalls[i].Caller.E164.Equals("0")) _filteredCalls.RemoveAt(i); } else if (cboInternExtern.SelectedItem.ToString().Equals("Extern")) for (int i = _filteredCalls.Count - 1; i > -1; i--) { if ((_filteredCalls[i].Called.E164.Length < 5 && _filteredCalls[i].Caller.E164.Length < 5) || _filteredCalls[i].Called.E164.Equals("0")) _filteredCalls.RemoveAt(i); } } private void GetFilteredCallsByDate() { DateTime period = DateTime.Now; switch (cboDateFilter.SelectedItem.ToString()) { case "Today": period = DateTime.Today; break; case "Last week": period = DateTime.Today.Subtract(new TimeSpan(7, 0, 0, 0)); break; case "Last month": period = DateTime.Today.AddMonths(-1); break; case "Last year": period = DateTime.Today.AddYears(-1); break; default: return; } for (int i = _filteredCalls.Count - 1; i > -1; i--) { if (_filteredCalls[i].Start < period) _filteredCalls.RemoveAt(i); } } _filtered calls is a list of "calls". Calls is a class that looks like this : [DataContract] public class Call { private User caller, called; private DateTime start, end; private string conferenceId; private int id; private bool isNew = false; [DataMember] public bool IsNew { get { return isNew; } set { isNew = value; } } [DataMember] public int Id { get { return id; } set { id = value; } } [DataMember] public string ConferenceId { get { return conferenceId; } set { conferenceId = value; } } [DataMember] public DateTime End { get { return end; } set { end = value; } } [DataMember] public DateTime Start { get { return start; } set { start = value; } } [DataMember] public User Called { get { return called; } set { called = value; } } [DataMember] public User Caller { get { return caller; } set { caller = value; } } Can anyone direct me to a better solution or make some suggestions.
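
    One way to cut the repeated RemoveAt passes is to filter the freshly fetched list with List<T>.RemoveAll, which makes each filter a single pass instead of shifting the remaining elements on every removal. A minimal sketch of the outgoing/incoming filter rewritten that way (the combo-box, list and property names are taken from the code above; treat the rest as an assumption, not the author's implementation):

        // Same filter as GetFilterPartOutgoingIncoming(), but one pass per filter.
        private void GetFilterPartOutgoingIncoming()
        {
            string choice = cboOutgoingIncoming.SelectedItem.ToString();

            if (choice == "Outgoing")
                _filteredCalls.RemoveAll(c => c.Caller.E164.Length > 4 || c.Caller.E164 == "0");
            else if (choice == "Incoming")
                _filteredCalls.RemoveAll(c => c.Called.E164.Length > 4 || c.Called.E164 == "0");
        }

    If GetAllCalls() returns a large number of rows, the bigger win is usually to push the date filter into the service call or database query itself rather than fetching everything and trimming it on the client.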

    Read the article

  • How to obtain the first cluster of the directory's data in FAT using C# (or at least C++) and Win32A

    - by DarkWalker
    So I have a FAT drive, lets say H: and a directory 'work' (full path 'H:\work'). I need to get the NUMBER of the first cluster of that directory. The number of the first cluster is 2-bytes value, that is stored in the 26th and 27th bytes of the folder enty (wich is 32 bytes). Lets say I am doing it with file, NOT a directory. I can use code like this: static public string GetDirectoryPtr(string dir) { IntPtr ptr = CreateFile(@"H:\Work\dover.docx", GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE, IntPtr.Zero, OPEN_EXISTING, 0,//FILE_FLAG_BACKUP_SEMANTICS, IntPtr.Zero); try { const uint bytesToRead = 2; byte[] readbuffer = new byte[bytesToRead]; if (ptr.ToInt32() == -1) return String.Format("Error: cannot open direcotory {0}", dir); if (SetFilePointer(ptr, 26, 0, 0) == -1) return String.Format("Error: unable to set file pointer on file {0}", ptr); uint read = 0; // real count of read bytes if (!ReadFile(ptr, readbuffer, bytesToRead, out read, 0)) return String.Format("cant read from file {0}. Error #{1}", ptr, Marshal.GetLastWin32Error()); int result = readbuffer[0] + 16 * 16 * readbuffer[1]; return result.ToString();//ASCIIEncoding.ASCII.GetString(readbuffer); } finally { CloseHandle(ptr); } } And it will return some number, like 19 (quite real to me, this is the only file on the disk). But I DONT need a file, I need a folder. So I am puttin FILE_FLAG_BACKUP_SEMANTICS param for CreateFile call... and dont know what to do next =) msdn is very clear on this issue http://msdn.microsoft.com/en-us/library/aa365258(v=VS.85).aspx It sounds to me like: "There is no way you can get a number of the folder's first cluster". The most desperate thing is that my tutor said smth like "You are going to obtain this or you wont pass this course". The true reason why he is so sure this is possible is because for 10 years (or may be more) he recieved the folder's first cluster number as a HASH of the folder's addres (and I was stupid enough to point this to him, so now I cant do it the same way) PS: This is the most spupid task I have ever had!!! This value is not really used anythere in program, it is only fcking pointless integer.

    Read the article

  • zlib gzgets extremely slow?

    - by monkeyking
    I'm doing stuff related to parsing huge globs of textfiles, and was testing what input method to use. There is not much of a difference using c++ std::ifstreams vs c FILE, According to the documentation of zlib, it supports uncompressed files, and will read the file without decompression. I'm seeing a difference from 12 seconds using non zlib to more than 4 minutes using zlib.h This I've tested doing multiple runs, so its not a disk cache issue. Am I using zlib in some wrong way? thanks #include <zlib.h> #include <cstdio> #include <cstdlib> #include <fstream> #define LENS 1000000 size_t fg(const char *fname){ fprintf(stderr,"\t-> using fgets\n"); FILE *fp =fopen(fname,"r"); size_t nLines =0; char *buffer = new char[LENS]; while(NULL!=fgets(buffer,LENS,fp)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } size_t is(const char *fname){ fprintf(stderr,"\t-> using ifstream\n"); std::ifstream is(fname,std::ios::in); size_t nLines =0; char *buffer = new char[LENS]; while(is. getline(buffer,LENS)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } size_t iz(const char *fname){ fprintf(stderr,"\t-> using zlib\n"); gzFile fp =gzopen(fname,"r"); size_t nLines =0; char *buffer = new char[LENS]; while(0!=gzgets(fp,buffer,LENS)) nLines++; fprintf(stderr,"%lu\n",nLines); return nLines; } int main(int argc,char**argv){ if(atoi(argv[2])==0) fg(argv[1]); if(atoi(argv[2])==1) is(argv[1]); if(atoi(argv[2])==2) iz(argv[1]); }

    Read the article

  • All parts of my Printable Swing component don't print

    - by Jonas
    I'm trying to do a printable component (an invoice document). I use JComponent instead of JPanel because I don't want a background. The component has many subcomponents. The main component implements Printable and has a print-method that is calling printAll(g) so that all subcomponents should be printed. But my subcomponents doesn't print. What am I missing? Does all subcomponents also has to implement Printable? import java.awt.BorderLayout; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GridLayout; import java.awt.print.PageFormat; import java.awt.print.Printable; import java.awt.print.PrinterException; import java.awt.print.PrinterJob; import javax.swing.JComponent; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JPanel; import javax.swing.JTextField; public class PPanel extends JComponent implements Printable { static double w; static double h; public PPanel() { this.setLayout(new BorderLayout()); this.add(new JLabel("Document Body"), BorderLayout.CENTER); this.add(new Header(), BorderLayout.NORTH); this.add(new Footer(), BorderLayout.SOUTH); } class Header extends JComponent { public Header() { this.setLayout(new BorderLayout()); this.add(new TopHeader(), BorderLayout.NORTH); this.add(new LowHeader(), BorderLayout.SOUTH); } } class TopHeader extends JComponent { public TopHeader() { this.setLayout(new BorderLayout()); JLabel companyName = new JLabel("Company name"); JLabel docType = new JLabel("Document type"); this.add(companyName, BorderLayout.WEST); this.add(docType, BorderLayout.EAST); } } class LowHeader extends JComponent { public LowHeader() { this.setLayout(new GridLayout(0,2)); JLabel col1 = new JLabel("Column 1"); JLabel col2 = new JLabel("Column 2"); this.add(col1); this.add(col2); } } class Footer extends JComponent { public Footer() { this.setLayout(new GridLayout(0,2)); JLabel addr = new JLabel("Address"); JLabel sum = new JLabel("Sum"); this.add(addr); this.add(sum); } } public static void main(String[] args) { final PPanel p = new PPanel(); PrinterJob job = PrinterJob.getPrinterJob(); job.setPrintable(p); try { job.print(); } catch (PrinterException ex) { // print failed } // Preview new JFrame() {{ getContentPane().add(p); this.setSize((int)w, (int)h); setVisible(true); }}; } @Override public int print(Graphics g, PageFormat pf, int page) throws PrinterException { if (page > 0) { return NO_SUCH_PAGE; } Graphics2D g2d = (Graphics2D)g; g2d.translate(pf.getImageableX(), pf.getImageableY()); w = pf.getImageableWidth(); h = pf.getHeight(); this.setSize((int)w, (int)h); this.setPreferredSize(new Dimension((int)w, (int)h)); this.doLayout(); this.printAll(g); return PAGE_EXISTS; } }

    Read the article

  • OpenGL Shader Compile Error

    - by Tomas Cokis
    I'm having a bit of a problem with my code for compiling shaders, namely they both register as failed compiles and no log is received. This is the shader compiling code: /* Make the shader */ Uint size; GLchar* file; loadFileRaw(filePath, file, &size); const char * pFile = file; const GLint pSize = size; newCashe.shader = glCreateShader(shaderType); glShaderSource(newCashe.shader, 1, &pFile, &pSize); glCompileShader(newCashe.shader); GLint shaderCompiled; glGetShaderiv(newCashe.shader, GL_COMPILE_STATUS, &shaderCompiled); if(shaderCompiled == GL_FALSE) { ReportFiler->makeReport("ShaderCasher.cpp", "loadShader()", "Shader did not compile", "The shader " + filePath + " failed to compile, reporting the error - " + OpenGLServices::getShaderLog(newCashe.shader)); } And these are the support functions: bool loadFileRaw(string fileName, char* data, Uint* size) { if (fileName != "") { FILE *file = fopen(fileName.c_str(), "rt"); if (file != NULL) { fseek(file, 0, SEEK_END); *size = ftell(file); rewind(file); if (*size > 0) { data = (char*)malloc(sizeof(char) * (*size + 1)); *size = fread(data, sizeof(char), *size, file); data[*size] = '\0'; } fclose(file); } } return data; } string OpenGLServices::getShaderLog(GLuint obj) { int infologLength = 0; int charsWritten = 0; char *infoLog; glGetShaderiv(obj, GL_INFO_LOG_LENGTH,&infologLength); if (infologLength > 0) { infoLog = (char *)malloc(infologLength); glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog); string log = infoLog; free(infoLog); return log; } return "<Blank Log>"; } and the shaders I'm loading: void main(void) { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); } void main(void) { gl_Position = ftransform(); } In short I get From: ShaderCasher.cpp, In: loadShader(), Subject: Shader did not compile Message: The shader Data/Shaders/Standard/standard.vs failed to compile, reporting the error - <Blank Log> for every shader I compile I've tried replacing the file reading with just a hard coded string but I get the same error so there must be something wrong with how I'm compiling them. I have run and compiled example programs with shaders, so I doubt my drivers are the issue, but in any case I'm on a Nvidia 8600m GT. Can anyone help?

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough, and I can get pretty much anything I want that Microsoft sells. I also can get IBM Websphere and the like for free as well. Earlier this year, I set up an oldish computer (2.6 Pentium D, x64) to run ubuntu server headless. I'm predominately a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I’m not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, and profiling my applications, etc, etc. The machine would be running headless, and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear why I should go with one over the other) Windows Team Foundation Sharepoint (Never used it before, kind of meh about it) IBM Websphere or Glassfish (Some Java EE web server) SQL Server 2008 A DVCS In order to better control product conflicts / limit resource use, I’m wondering if I should install things into virtual machines (I can get VmWare or Microsoft Virtualization Products) I also plan on installing everything I had running under Linux (it’s almost entirely Java based development software, so it’ll run on both, only reason I went with ubuntu during the year was because the apache build seemed better). I’m primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (IE, I’ll still use project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with ruby, and python as well. Thanks!

    Read the article

  • How to manipulate file paths intelligently in .Net 3.0?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section: // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); in order to take care of a bug (code is sensitive to a constant @"pending_install\" vs @"pending_install" which I did not like and changed (long story, but there was a good opportunity for constant reuse). Now the whole function: //You want to uncompress only the files downloaded. Not every file in the dest directory. private void UncompressFiles() { string strSrcDir = _application.Client.TempDir; ArrayList arrFiles = new ArrayList(); GetAllCompressedFiles(ref arrFiles, strSrcDir); IEnumerator enumer = arrFiles.GetEnumerator(); while (enumer.MoveNext()) { string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty); // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile); strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty); ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile); FileUtility.DeleteFile(enumer.Current.ToString()); } } Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during a .Net 1.1 era. I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code. Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So ... given my constraints, how can I rework this so that it starts to sucks less? Thanks!
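
    For the Replace/Combine friction specifically, two framework calls may already cover it: TrimStart drops the leading backslashes left behind by the Replace, and Path.ChangeExtension with null strips the extension without the riskier Replace(Path.GetExtension(...), ...). A rough sketch of the middle of the loop under those assumptions:

        // Strip the temp-dir prefix, drop any leading separators, then combine.
        string relative = enumer.Current.ToString()
            .Replace(_application.Client.TempDir, String.Empty)
            .TrimStart(Path.DirectorySeparatorChar, Path.AltDirectorySeparatorChar);

        string strDestFile = Path.Combine(_application.Client.BaseDir, relative);

        // ChangeExtension(path, null) removes the extension cleanly, whereas Replace()
        // would also hit a matching substring anywhere else in the path.
        strDestFile = Path.ChangeExtension(strDestFile, null);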

    Read the article

  • Can an asynchronously fired event run synchronously on a form?

    - by cyclotis04
    [VS 2010 Beta with .Net Framework 3.5] I've written a C# component to asynchronously monitor a socket and raise events when data is received. I set the VB form to show message boxes when the event is raised. What I've noticed is that when the component raises the event synchronously, the message box blocks the component code and locks the form until the user closes the message. When it's raised asynchronously, it neither blocks the code, nor locks the form. What I want is a way to raise an event in such a way that it does not block the code, but is called on the same thread as the form (so that it locks the form until the user selects an option.) Can you help me out? Thanks. [Component] using System; using System.Threading; using System.ComponentModel; namespace mySpace { public delegate void SyncEventHandler(object sender, SyncEventArgs e); public delegate void AsyncEventHandler(object sender, AsyncEventArgs e); public class myClass { readonly object syncEventLock = new object(); readonly object asyncEventLock = new object(); SyncEventHandler syncEvent; AsyncEventHandler asyncEvent; private delegate void WorkerDelegate(string strParam, int intParam); public void DoWork(string strParam, int intParam) { OnSyncEvent(new SyncEventArgs()); AsyncOperation asyncOp = AsyncOperationManager.CreateOperation(null); WorkerDelegate delWorker = new WorkerDelegate(ClientWorker); IAsyncResult result = delWorker.BeginInvoke(strParam, intParam, null, null); } private void ClientWorker(string strParam, int intParam) { Thread.Sleep(2000); OnAsyncEvent(new AsyncEventArgs()); OnAsyncEvent(new AsyncEventArgs()); } public event SyncEventHandler SyncEvent { add { lock (syncEventLock) syncEvent += value; } remove { lock (syncEventLock) syncEvent -= value; } } public event AsyncEventHandler AsyncEvent { add { lock (asyncEventLock) asyncEvent += value; } remove { lock (asyncEventLock) asyncEvent -= value; } } protected void OnSyncEvent(SyncEventArgs e) { SyncEventHandler handler; lock (syncEventLock) handler = syncEvent; if (handler != null) handler(this, e, null, null); // Blocks and locks //if (handler != null) handler.BeginInvoke(this, e, null, null); // Neither blocks nor locks } protected void OnAsyncEvent(AsyncEventArgs e) { AsyncEventHandler handler; lock (asyncEventLock) handler = asyncEvent; //if (handler != null) handler(this, e, null, null); // Blocks and locks if (handler != null) handler.BeginInvoke(this, e, null, null); // Neither blocks nor locks } } } [Form] Imports mySpace Public Class Form1 Public WithEvents component As New mySpace.myClass() Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click component.DoWork("String", 1) End Sub Private Sub component_SyncEvent(ByVal sender As Object, ByVal e As pbxapi.SyncEventArgs) Handles component.SyncEvent MessageBox.Show("Synchronous event", "Raised:", MessageBoxButtons.OK) End Sub Private Sub component_AsyncEvent(ByVal sender As Object, ByVal e As pbxapi.AsyncEventArgs) Handles component.AsyncEvent MessageBox.Show("Asynchronous event", "Raised:", MessageBoxButtons.OK) End Sub End Class
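
    The component already creates an AsyncOperation in DoWork but never uses it. One approach, sketched below on the assumption that the AsyncOperation is kept in a field, is to raise the event through AsyncOperation.Post: the call is marshalled to the SynchronizationContext captured when DoWork was called (the form's UI thread for a WinForms caller), and the worker thread is not blocked.

        private AsyncOperation asyncOp;   // assumed field, assigned in DoWork()

        private void ClientWorker(string strParam, int intParam)
        {
            Thread.Sleep(2000);
            // Runs the handler on the captured (UI) thread; the worker keeps going,
            // but the MessageBox still ties up the form until the user dismisses it.
            asyncOp.Post(state => OnAsyncEvent((AsyncEventArgs)state), new AsyncEventArgs());
        }

    That is the "doesn't block the component, but still locks the form" combination asked for. If the worker ever does need to wait for the handler, asyncOp.SynchronizationContext.Send would run it synchronously on the UI thread instead.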

    Read the article

  • Thread sleep and thread join.

    - by Dhruv Gairola
    hi guys, if i put a thread to sleep in a loop, netbeans gives me a caution saying Invoking Thread.sleep in loop can cause performance problems. However, if i were to replace the sleep with join, no such caution is given. Both versions compile and work fine tho. My code is below (check the last few lines for "Thread.sleep() vs t.join()"). public class Test{ //Display a message, preceded by the name of the current thread static void threadMessage(String message) { String threadName = Thread.currentThread().getName(); System.out.format("%s: %s%n", threadName, message); } private static class MessageLoop implements Runnable { public void run() { String importantInfo[] = { "Mares eat oats", "Does eat oats", "Little lambs eat ivy", "A kid will eat ivy too" }; try { for (int i = 0; i < importantInfo.length; i++) { //Pause for 4 seconds Thread.sleep(4000); //Print a message threadMessage(importantInfo[i]); } } catch (InterruptedException e) { threadMessage("I wasn't done!"); } } } public static void main(String args[]) throws InterruptedException { //Delay, in milliseconds before we interrupt MessageLoop //thread (default one hour). long patience = 1000 * 60 * 60; //If command line argument present, gives patience in seconds. if (args.length > 0) { try { patience = Long.parseLong(args[0]) * 1000; } catch (NumberFormatException e) { System.err.println("Argument must be an integer."); System.exit(1); } } threadMessage("Starting MessageLoop thread"); long startTime = System.currentTimeMillis(); Thread t = new Thread(new MessageLoop()); t.start(); threadMessage("Waiting for MessageLoop thread to finish"); //loop until MessageLoop thread exits while (t.isAlive()) { threadMessage("Still waiting..."); //Wait maximum of 1 second for MessageLoop thread to //finish. /*******LOOK HERE**********************/ Thread.sleep(1000);//issues caution unlike t.join(1000) /**************************************/ if (((System.currentTimeMillis() - startTime) > patience) && t.isAlive()) { threadMessage("Tired of waiting!"); t.interrupt(); //Shouldn't be long now -- wait indefinitely t.join(); } } threadMessage("Finally!"); } } As i understand it, join waits for the other thread to complete, but in this case, arent both sleep and join doing the same thing? Then why does netbeans throw the caution?

    Read the article

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.) There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a comparison at a high level of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from an in-house Enterprise-level CRUD application. I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong... I am primarily focusing on the Microsoft approaches to keep this focused. ADO.NET Entity Framework Database agnostic Good because it allows swapping backends in and out Bad because it can hit performance and database vendors are not too happy about it Seems to be MS's preferred route for the future Complicated to learn (though, see 267357) It is accessed through LINQ to Entities so provides ORM, thus allowing abstraction in your code LINQ to SQL Uncertain future (see Is LINQ to SQL truly dead?) Easy to learn (?) Only works with MS SQL Server See also Pros and cons of LINQ "Standard" ADO.NET No ORM No abstraction so you are back to "roll your own" and play with dynamically generated SQL Direct access, allows potentially better performance This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is" and since that is an unanswerable question hopefully we don't have to go in to that too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code, you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back-end. Whereas, if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows then ORM makes complete sense. So, I guess my argument for good old-fashioned ADO.NET would be in the case where you manipulate and modify large datasets, in which case you will benefit from the direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures. ASP.NET Data Source Controls Are these something altogether different or just a layer over standard ADO.NET? - Would you really use these if you had a DAL or if you implemented LINQ or Entities? NHibernate Seems to be a very powerful and powerful ORM? Open source Some other relevant links; NHibernate or LINQ to SQL Entity Framework vs LINQ to SQL
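
    To make the "roll your own" end of the spectrum concrete, this is roughly what plain ADO.NET looks like (connection string, table and column names are invented for illustration): you write the SQL and map columns by hand, which is exactly the work Entity Framework, LINQ to SQL and NHibernate automate.

        // needs: using System.Data.SqlClient;
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
                   "SELECT CustomerId, Name FROM Customer WHERE Region = @region", conn))
        {
            cmd.Parameters.AddWithValue("@region", "EMEA");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // hand-written column-to-object mapping
                    int id = reader.GetInt32(0);
                    string name = reader.GetString(1);
                }
            }
        }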

    Read the article

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes p

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy: struct fd_set ref_set_rd; struct fd_set ref_set_wr; struct fd_set ref_set_er; ... ...code to set the reference fd_set_xx values... ... while (!done) { struct fd_set act_set_rd = ref_set_rd; struct fd_set act_set_wr = ref_set_wr; struct fd_set act_set_er = ref_set_er; int bits_set = select(max_fd, &act_set_rd, &act_set_wr, &act_set_er, &timeout); if (bits_set > 0) { ...process the output values of act_set_xx... } } My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.) I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers. (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.) There's a related SO question - asking about 'poll() vs select()' which addresses a different set of issues from this question.

    Read the article

  • Sync services not actually syncing

    - by Paul Mrozowski
    I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008 and picked the tables I wanted to sync. I selected SQL Server Tracking. It modified the database for change tracking and created a local copy (SDF) of the data. I need two way syncing so I created a partial class for the sync agent and added code into the OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've walked through with the debugger and this code runs. Then I created another partial class for cache server sync provider and added an event handler into the OnInitialized() to hook into the ApplyChangeFailed event. This code also works OK - my code runs when there is a conflict. Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync: var agent = new FSEMobileCacheSyncAgent(); var syncStats = agent.Synchronize(); syncStats seems to show the count of the # of changes I made on the server and shows that they were applied. However, when I open the local SDF file none of the changes are there. I basically followed the instructions I found here: http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx and here: http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something but I'm just not seeing it right now. I thought this might be because I appeared to be using version 1 of Sync Services so I removed the references to Microsoft.Synchronization.* assemblies, installed the Sync framework 2.0 and added the new version of the assemblies to the project. That hasn't made any difference. Ideas? Edit: I wanted to enable tracing to see if I could track this down but the only way to do that is through a WinForms app since it requires entries in the app.config file (my original project was a class library). I created a WinForms project and recreated everything and suddenly everything is working. So apparently this requires a WinForm project for some reason? This isn't really how I planned on using this - I had hoped to kick off syncing through another non-.NET application and provide the UI there so the experience was a bit more seemless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.

    Read the article

  • When to delete newly deprecated code?

    - by John
    I spent a month writing an elaborate payment system that handles both credit card payments and electronic fund transfers. My work was used on production server for about a month. I was told recently by the client that he no longer wants to use the electronic fund transfer feature. Because the way I had to interface and communicate with the credit card gateway is drastically different from the electronic fund transfer api (eg. the cc company gives transaction responses immediately after an http request, while the eft company gives transaction responses 5 business days after an http request), I spent a lot of time writing my own API to abstract common function calls like function payment(amount, pay_method,pay_freq) function updateRecurringSchedule(user_id,new_schedule) etc.. Now that the client wants to abandon the EFT feature, all my work for this abstracted payments API is obsolete. I'm deliberating over whether I should scrap my work. Here's my pro vs. con for scrapping it now: PRO 1: Eliminate code bloat PRO 2: New developers do not need to learn MY API. They only need to read the CC company's API PRO 3: Because the EFT company did not handle recurring payment schedules, refunds, and validation, I wrote my own application to do it. Although the CC company's API permitted this functionality, I opted to use mine instead so that I could streamline my code. now that EFT is out of the picture, I can delete all this confusing code and just rely on the CC company's sytsem to manage recurring billing, payment schedules, refunds, validations etc... CON 1: Although I can just delete the EFT code, it still takes time to remove the entire framework consolidates different payment systems. CON 2: with regards to PRO 3, it takes time to build functionality that integrates the payment system more closely with the CC company. CON 3: I feel insecure deleting all this work. I don't think I'll ever use it again. But, for some inexplicable reason, I just don't feel comfortable deleting this work "right now". So my question is, should I delete one month's worth recent development? If yes, should I do it immediately or wait X amount of time before doing so?

    Read the article

  • How to find if an Item in a ListBox has the focus?

    - by eitan barazani
    I have a List box defined like this: <ListBox x:Name="EmailList" ItemsSource="{Binding MailBoxManager.Inbox.EmailList}" SelectedItem="{Binding SelectedMessage, Mode=TwoWay}" Grid.Row="1"> <ListBox.ItemTemplate> <DataTemplate> <usrctrls:MessageSummary /> </DataTemplate> </ListBox.ItemTemplate> </ListBox> The UserControl is defined like this: <UserControl x:Class="UserControls.MessageSummary" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="600"> <UserControl.Resources> </UserControl.Resources> <Grid HorizontalAlignment="Left"> <Grid.ColumnDefinitions> <ColumnDefinition Width="50" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <CheckBox Grid.Column="0" VerticalAlignment="Center" /> <Grid Grid.Column="1" Margin="0,0,12,0"> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <Grid Grid.Row="0" Grid.Column="0" HorizontalAlignment="Stretch"> <Grid.ColumnDefinitions> <ColumnDefinition Width="30" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="80" /> <ColumnDefinition Width="80" /> </Grid.ColumnDefinitions> <Image x:Name="FlaggedImage" Grid.Column="0" Width="20" Height="10" Margin="0" VerticalAlignment="Center" HorizontalAlignment="Center" Source="/Assets/ico_flagged_white.png" /> <TextBlock x:Name="Sender" Grid.Column="1" Text="{Binding EmailProperties.DisplayFrom}" Style="{StaticResource TextBlock_SenderRowTitle}" HorizontalAlignment="Left" VerticalAlignment="Center" /> <Grid x:Name="ImagesContainer" Grid.Column="2" VerticalAlignment="Center"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Image x:Name="ImgImportant" Grid.Column="0" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_important_red.png" /> <Image x:Name="ImgFolders" Grid.Column="1" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_ico_addtofolder.png" /> <Image x:Name="ImgAttachment" Grid.Column="2" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_attachment_lightgray.png" /> <Image x:Name="ImgFlag" Grid.Column="3" Width="20" Height="20" VerticalAlignment="Center" HorizontalAlignment="Center" Source="ms-appx:///Assets/ico_flag.png" /> </Grid> <TextBlock x:Name="Time" Grid.Column="3" Text="{Binding EmailProperties.DateReceived, Converter={StaticResource EmailHeaderTimeConverter}}" TextAlignment="Center" FontSize="16" VerticalAlignment="Center" Margin="0" /> </Grid> <TextBlock Grid.Row="1" Text="{Binding EmailProperties.Subject}" TextTrimming="WordEllipsis" Margin="0,10" /> <TextBlock Grid.Row="2" Text="{Binding EmailProperties.Preview}" TextTrimming="WordEllipsis" /> </Grid> </Grid> The MessageSummary is a UserControl. I would like to bind the foreground color of the Items of the ListBox to whether the item is the one selected in the list box, i.e. I would like the Item's foreground color to be Black if not selected and White if the item is selected. How can it be done? Thanks,

    Read the article

  • How to manipulate file paths intelligently in .Net 3.5?

    - by Hamish Grubijan
    Scenario: I am maintaining a function which helps with an install - copies files from PathPart1/pending_install/PathPart2/fileName to PathPart1/PathPart2/fileName. It seems that String.Replace() and Path.Combine() do not play well together. The code is below. I added this section: // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); in order to take care of a bug (code is sensitive to a constant @"pending_install\" vs @"pending_install" which I did not like and changed (long story, but there was a good opportunity for constant reuse). Now the whole function: //You want to uncompress only the files downloaded. Not every file in the dest directory. private void UncompressFiles() { string strSrcDir = _application.Client.TempDir; ArrayList arrFiles = new ArrayList(); GetAllCompressedFiles(ref arrFiles, strSrcDir); IEnumerator enumer = arrFiles.GetEnumerator(); while (enumer.MoveNext()) { string strDestFile = enumer.Current.ToString().Replace(_application.Client.TempDir, String.Empty); // The behavior of Path.Combine is weird. See: // http://stackoverflow.com/questions/53102/why-does-path-combine-not-properly-concatenate-filenames-that-start-with-path-dir while (strDestFile.StartsWith(@"\"")) { strDestFile = strDestFile.Substring(1); // Remove any leading backslashes } Debug.Assert(!Path.IsPathRooted(strDestFile), "This will make the Path.Combine(,) fail)."); strDestFile = Path.Combine(_application.Client.BaseDir, strDestFile); strDestFile = strDestFile.Replace(Path.GetExtension(strDestFile), String.Empty); ZSharpLib.ZipExtractor.ExtractZip(enumer.Current.ToString(), strDestFile); FileUtility.DeleteFile(enumer.Current.ToString()); } } Please do not laugh at the use of ArrayList and the way it is being iterated - it was pioneered by a C++ coder during a .Net 1.1 era. I will change it. What I am interested in: what is a better way of replacing PathPart1/pending_install/PathPart2/fileName with PathPart1/PathPart2/fileName within the current code. Note that _application.Client.TempDir is just _application.Client.BaseDir + @"\pending_install". While there are many ways to improve the code, I am mainly concerned with the part which has to do with String.Replace(...) and Path.Combine(,). I do not want to make changes outside of this function. I wish Path.Combine(,) took an optional bool flag, but it does not. So ... given my constraints, how can I rework this so that it starts to suck less?

    Read the article

  • CultureManager issue

    - by Serge
    I have a bug I don't understand. While the following works fine: Resources.Classes.AFieldFormula.DirectFieldFormula this one throws an exception: new ResourceManager(typeof(Resources.Classes.AFieldFormula)).GetString("DirectFieldFormula"); Could not find any resources appropriate for the specified culture or the neutral culture. Make sure \"Resources.Classes.AFieldFormula.resources\" was correctly embedded or linked into assembly \"MygLogWeb\" at compile time, or that all the satellite assemblies required are loadable and fully signed. How comes? Resource designer.cs file: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.18408 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace Resources.Classes { using System; /// <summary> /// A strongly-typed resource class, for looking up localized strings, etc. /// </summary> // This class was auto-generated by the StronglyTypedResourceBuilder // class via a tool like ResGen or Visual Studio. // To add or remove a member, edit your .ResX file then rerun ResGen // with the /str option, or rebuild your VS project. [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] public class AFieldFormula { private static global::System.Resources.ResourceManager resourceMan; private static global::System.Globalization.CultureInfo resourceCulture; [global::System.Diagnostics.CodeAnalysis.SuppressMessageAttribute("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")] internal AFieldFormula() { } /// <summary> /// Returns the cached ResourceManager instance used by this class. /// </summary> [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)] public static global::System.Resources.ResourceManager ResourceManager { get { if (object.ReferenceEquals(resourceMan, null)) { global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MygLogWeb.Classes.AFieldFormula", typeof(AFieldFormula).Assembly); resourceMan = temp; } return resourceMan; } } /// <summary> /// Overrides the current thread's CurrentUICulture property for all /// resource lookups using this strongly typed resource class. /// </summary> [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)] public static global::System.Globalization.CultureInfo Culture { get { return resourceCulture; } set { resourceCulture = value; } } /// <summary> /// Looks up a localized string similar to Direct field. /// </summary> public static string DirectFieldFormula { get { return ResourceManager.GetString("DirectFieldFormula", resourceCulture); } } } }
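
    Judging from the designer file above, the likely culprit is the resource base name: the generated class builds its ResourceManager with "MygLogWeb.Classes.AFieldFormula" (the name the .resources file is embedded under), while new ResourceManager(typeof(Resources.Classes.AFieldFormula)) derives the base name from the CLR namespace, "Resources.Classes.AFieldFormula", which matches no embedded resource. Two sketches that line up with the embedded name:

        // needs: using System.Resources;

        // 1) Reuse the manager the designer-generated class already configured.
        string s1 = Resources.Classes.AFieldFormula.ResourceManager
                        .GetString("DirectFieldFormula");

        // 2) Or construct one yourself, with the embedded resource's real base name.
        var rm = new ResourceManager("MygLogWeb.Classes.AFieldFormula",
                                     typeof(Resources.Classes.AFieldFormula).Assembly);
        string s2 = rm.GetString("DirectFieldFormula");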

    Read the article

  • What does Apache need to support both mysqli and PDO?

    - by Nathan Long
    I'm considering changing some PHP code to use PDO for database access instead of mysqli (because the PDO syntax makes more sense to me and is database-agnostic). To do that, I'd need both methods to work while I'm making the changeover. My problem is this: so far, either one or the other method will crash Apache. Right now I'm using XAMPP in Windows XP, and PHP Version 5.2.8. Mysqli works fine, and so does this: $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password); echo 'Connected to database'; $sql = "SELECT * FROM `employee`"; But this line makes Apache crash: $dbc->query($sql); I don't want to redo my entire Apache or XAMPP installation, but I'd like for PDO to work. So I tried updating libmysql.dll from here, as oddvibes recommended here. That made my simple PDO query work, but then mysqli queries crashed Apache. (I also tried the suggestion after that one, to update php_pdo_mysql.dll and php_pdo.dll, to no effect.) Test Case I created this test script to compare PDO vs mysqli. With the old copy of libmysql.dll, it crashes if $use_pdo is true and doesn't if it's false. With the new copy of libmysql.dll, it's the opposite. if ($use_pdo){ $dbc = new PDO("mysql:host=$hostname;dbname=$dbname", $username, $password); echo 'Connected to database<br />'; $sql = "SELECT * FROM `employee`"; $dbc->query($sql); foreach ($dbc->query($sql) as $row){ echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n"; } } else { $dbc = @mysqli_connect($hostname, $username, $password, $dbname) OR die('Could not connect to MySQL: ' . mysqli_connect_error()); $sql = "SELECT * FROM `employee`"; $result = @mysqli_query($dbc, $sql) or die(mysqli_error($dbc)); while ($row = mysqli_fetch_array($result,MYSQLI_ASSOC)) { echo $row['firstname'] . ' ' . $row['lastname'] . "<br>\n"; } } What does Apache need in order to support both methods of database query?

    Read the article

  • Visual studio 2010 colourizers, intellisense and the rest. Where to start!!

    - by Owen
    Ok, before I begin I realize that there is a lot of documentation on this subject but I have thus far failed to get even basic colourization working for VS2010. My goal is to simply get to a point where I can open a document and everything is coloured red, from here I can implement the relevant parsing logic. Here's what I have tried/found: 1) Downloaded all the relevent SDK's and such- Found the ook sample (http://code.msdn.microsoft.com/ookLanguage) - didn't build, didn't work. 2) Knowing almost nothing about MEF read through "Implementing a Language Service By Using the Managed Package Framework" - http://msdn.microsoft.com/en-us/library/bb166533(v=VS.100).aspx This was pretty much a copy and paste of all the basic stuff here, and also updating some references which were out of date with the sample see: http://social.msdn.microsoft.com/Forums/en-US/vsx/thread/a310fe67-afd2-4592-b295-3fc86fec7996 Now, I have got to a point where when running the package MEF appears to have hooked up correctly (I know this because with the debugger open I can see that the packages initialize and FDoIdle methods are being hit). When I open a file of the extension I have registered with the ProvideLanguageExtensionAttribute everything dies as if in an endless loop, yet no debug symbols hit (though they are loaded). Looking at the ook sample and the MEF examples they seem to be totally different approaches to the same problem. In the ook sample there are notions of Clasifications and Completion controllers which aren't mentioned in the MEF example. Also, they don't seem to create a Package or Language service, so I have no idea how it should work? With the MEF example, my assumption is that I need to hook into the "IScanner.ScanTokenAndProvideInfoAboutIt" to provide syntax highlighting? Which would be fine if I could ever hit this method. So my first question I guess is which approach should I be taking here? Or do they both somehow tie together? My second questions is, where can I find a basic fully working project that implements bog standard basic syntax highlighting and intellisense or VS2010? Thirdly, in the MEF example when I created a Package there were a bunch of test projects created for me. I appears that the integration tests launch the VS2010 test rig somehow, but the test fails. It would be good to write my service with tests but I have no idea what/how I can test each interaction so any references to testing Language services would be helpful. Finally, please throw any resource/book links my way that I may find useful. Cheers, Chris. N.B. Sorry I realize this is part question part rant, but I have never been so confused.
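
    For the narrow goal of "open a document and everything is coloured", the VS2010 editor's MEF extensibility points are usually enough on their own; no package or MPF language service is required. A rough sketch under those assumptions (the content type name and the reuse of a built-in classification are placeholders; a real extension would also export a ContentTypeDefinition, a FileExtensionToContentTypeDefinition, and a custom classification format definition to get an actual red):

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using Microsoft.VisualStudio.Language.StandardClassification;
        using Microsoft.VisualStudio.Text;
        using Microsoft.VisualStudio.Text.Classification;
        using Microsoft.VisualStudio.Utilities;

        [Export(typeof(IClassifierProvider))]
        [ContentType("ook")]   // assumed content type, registered elsewhere for your file extension
        internal sealed class EverythingColouredClassifierProvider : IClassifierProvider
        {
            [Import]
            internal IClassificationTypeRegistryService Registry = null;

            public IClassifier GetClassifier(ITextBuffer buffer)
            {
                return buffer.Properties.GetOrCreateSingletonProperty(
                    () => new EverythingColouredClassifier(Registry));
            }
        }

        internal sealed class EverythingColouredClassifier : IClassifier
        {
            private readonly IClassificationType _type;

            public EverythingColouredClassifier(IClassificationTypeRegistryService registry)
            {
                // Reuse a built-in classification for the experiment; swap in a custom
                // ClassificationTypeDefinition plus format definition for a real "red".
                _type = registry.GetClassificationType(PredefinedClassificationTypeNames.Comment);
            }

            public IList<ClassificationSpan> GetClassificationSpans(SnapshotSpan span)
            {
                // Hand the whole requested span back as one classified span.
                return new List<ClassificationSpan> { new ClassificationSpan(span, _type) };
            }

            public event EventHandler<ClassificationChangedEventArgs> ClassificationChanged
            {
                add { } remove { }
            }
        }

    The ook sample and the MPF language-service walkthrough really are two different routes to a similar result; the classifier route above is the editor-MEF one the ook sample builds on, and real syntax highlighting then means returning multiple spans per call based on your own scanning instead of one span covering everything.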

    Read the article

  • How to Integrate C++ compiler in Visual Studio 2008

    - by Kasun
    Hi Can someone help me with this issue? I currently working on my project for final year of my honors degree. And we are developing a application to evaluate programming assignments of student ( for 1st year student level) I just want to know how to integrate C++ compiler using C# code to compile C++ code. In our case we are loading a student C++ code into text area, then with a click on button we want to compile the code. And if there any compilation errors it will be displayed on text area nearby. (Interface is attached herewith.) And finally it able to execute the code if there aren't any compilation errors. And results will be displayed in console. We were able to do this with a C#(C# code will be loaded to text area intead of C++ code) code using inbuilt compiler. But still not able to do for C# code. Can anyone suggest a method to do this? It is possible to integrate external compiler to VS C# code? If possible how to achieve it? Very grateful if anyone will contributing to solve this matter? This is code for Build button which we proceed with C# code compiling CodeDomProvider codeProvider = CodeDomProvider.CreateProvider("csharp"); string Output = "Out.exe"; Button ButtonObject = (Button)sender; rtbresult.Text = ""; System.CodeDom.Compiler.CompilerParameters parameters = new CompilerParameters(); //Make sure we generate an EXE, not a DLL parameters.GenerateExecutable = true; parameters.OutputAssembly = Output; CompilerResults results = codeProvider.CompileAssemblyFromSource(parameters, rtbcode.Text); if (results.Errors.Count > 0) { rtbresult.ForeColor = Color.Red; foreach (CompilerError CompErr in results.Errors) { rtbresult.Text = rtbresult.Text + "Line number " + CompErr.Line + ", Error Number: " + CompErr.ErrorNumber + ", '" + CompErr.ErrorText + ";" + Environment.NewLine + Environment.NewLine; } } else { //Successful Compile rtbresult.ForeColor = Color.Blue; rtbresult.Text = "Success!"; //If we clicked run then launch our EXE if (ButtonObject.Text == "Run") Process.Start(Output); // Run button }
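
    The CodeDomProvider trick used for C# does not carry over cleanly to native C++, so the usual approach is to shell out to whichever C++ compiler is installed and show its output. A hedged sketch (compiler path and flags are assumptions - substitute cl.exe with the VC environment set up, or g++/MinGW, whatever the lab machines have; rtbcode, rtbresult and ButtonObject are from the code above):

        // needs: using System.Diagnostics; using System.IO;
        string sourcePath = Path.Combine(Path.GetTempPath(), "student.cpp");
        string exePath    = Path.Combine(Path.GetTempPath(), "student.exe");
        File.WriteAllText(sourcePath, rtbcode.Text);

        var psi = new ProcessStartInfo
        {
            FileName  = "g++",                        // assumed compiler on PATH
            Arguments = "\"" + sourcePath + "\" -o \"" + exePath + "\"",
            UseShellExecute       = false,
            RedirectStandardError = true,             // compile errors arrive on stderr
            CreateNoWindow        = true
        };

        using (var compiler = Process.Start(psi))
        {
            string errors = compiler.StandardError.ReadToEnd();
            compiler.WaitForExit();

            if (compiler.ExitCode != 0)
                rtbresult.Text = errors;              // show compile errors, line numbers included
            else if (ButtonObject.Text == "Run")
                Process.Start(exePath);               // launch the student's program in a console
        }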

    Read the article

  • assignment not working in a dll exported C++ class

    - by Jim Jones
    Using VS 2008 Have a C++ class in which I'm calling functions from a 3rd party dll. The definition in the header file is as follows: namespace OITImageExport { class ImageExport { private: SCCERR seResult; /* Error code returned. */ VTHDOC hDoc; /* Input doc handle returned by DAOpenDocument(). */ VTHEXPORT hExport; /* Handle to the export returned by EXOpenExport(). */ VTDWORD dwFIFlags; /* Used in setting the SCCOPT_FIFLAGS option. */ VTCHAR szError[256]; /* Error string buffer. */ VTDWORD dwOutputId; /* Output Format. */ VTDWORD dwSpecType; public: ImageExport(const char* outputId, const char* specType); void ProcessDocument(const char* inputPath, const char* outputPath); ~ImageExport(); }; } In the constructor I initialize two of the class fields having values which come from enumerations in the 3rd party dll: ImageExport::ImageExport(const char* outputId, const char* specType) { if(outputId == "jpeg") { dwOutputId = FI_JPEGFIF; } if(specType == "ansi") { dwSpecType = IOTYPE_ANSIPATH; } seResult = DAInit(); if (seResult != SCCERR_OK) { DAGetErrorString(seResult, szError, sizeof(szError)); fprintf(stderr, "DAInit() failed: %s (0x%04X)\n", szError, seResult); exit(seResult); } } When I use this class inside of a console app, with a main method in another file (all in the same namespace), instantiating the class object and calling the methods, it works like a champ. So, now that I know the basic code works, I open a dll project using the class header and code file. Course I have to add the dll macro, namely: #ifdef IMAGEDLL_EXPORTS #define DLL __declspec(dllexport) #else #define DLL __declspec(dllimport) #endif and changed the class definition to "class DLL ImageExport". Compiled nicely to a dll and .lib file (No errors, No warnings). Now to test this dll I open another console project using the same main method as before and linking to the (dll) lib file. Had problems, which when tracked down were the result of the two fields not being set; both had values of 0. Went back to the first console app and printed out the values: dwOutputId was 1535 (#define FI_JPEGFIF 1535) and dwSpecType was 2 (#define IOTYPE_ANSIPATH 2). Now if I was assigning these values outside of the class, I can see how the visibility could be different, but why is the assignment in the dll not working? Is it something about having a class in the dll?

    Read the article

  • Architecture Suggestions/Recommendations for a Web Application with Sub-Apps

    - by user579218
    Hello. I'm starting to plan an architecture for a big web application, and I wanted to get suggestions and/or recommendations on where to begin and which technologies and/or frameworks to use.

    The application will be an intranet-based web site using Windows authentication, running on IIS and using SQL Server and ASP.NET. It will need to be structured as a main/shell application with sub-applications that are "pluggable" based on configuration settings. The main or shell application provides the overall user interface structure: header/footer, dynamically built tabs for each available sub-app, and a content area in which a sub-application is loaded when the user clicks its tab. On start-up of the main/shell application, configuration information will be queried from a database and, based on the user and which sub-apps are available, the shell app will dynamically build tabs (or buttons or something similar) as a way to access each individual application. On start-up the content area will be populated with the "home" sub-app, and clicking a sub-app tab will populate the content area with the sub-app corresponding to that tab. For example, we're going to have a reports application, a display application, and probably a couple of other distinct applications. After determining who the user is, the main app will query the database to determine which sub-apps the user can use and build out the UI; the user can then navigate between the available sub-apps and do their work in each. Finally, the entire app and all sub-apps need a layered design with presentation, service, business, and data access layers, as well as cross-cutting objects for things such as logging, exception handling, etc.

    Anyway, my questions revolve around where to begin planning something like this. Which technologies/frameworks would work best for this application? MVC? MVP? WCSF? EF? NHibernate? Enterprise Library? Repository Pattern? Others? I know these technologies/frameworks are not all used for the same purpose, but knowing which ones to focus on is a little overwhelming. Which ones would be the best choices for a solution, and which ones work well together for an end-to-end design? How would one structure the VS project for something like this? Thanks!

    Read the article

  • C++ include statement required if defining a map in a headerfile.

    - by Justin
    I was doing a project for a computer course on programming concepts. The project was to be completed in C++ using the object-oriented designs we learned throughout the course. Anyhow, I have two files, symboltable.h and symboltable.cpp. I want to use a map as the data structure, so I define it in the private section of the header file. I #include <map> in the cpp file before I #include "symboltable.h". When I go to debug/run the program I get several errors from the compiler (MS VS 2008 Pro), the first of which is:

    Error 1 error C2146: syntax error : missing ';' before identifier 'table' c:\users\jsmith\documents\visual studio 2008\projects\project2\project2\symboltable.h 22 Project2

    To fix this I had to #include <map> in the header file, which to me seems strange. Here are the relevant code files:

    // symboltable.h
    #include <map>

    class SymbolTable
    {
    public:
        SymbolTable() {}
        void insert(string variable, double value);
        double lookUp(string variable);
        void init(); // Added as part of the spec given in the conference area.

    private:
        map<string, double> table; // Our container for variables and their values.
    };

    and

    // symboltable.cpp
    #include <map>
    #include <string>
    #include <iostream>
    using namespace std;

    #include "symboltable.h"

    void SymbolTable::insert(string variable, double value)
    {
        // Creates a new map entry; if the variable name already exists it overwrites the last value.
        table[variable] = value;
    }

    double SymbolTable::lookUp(string variable)
    {
        // Search for the variable; find() returns a position, and if that's the end then we didn't find it.
        if(table.find(variable) == table.end())
            throw exception("Error: Uninitialized variable");
        else
            return table[variable];
    }

    void SymbolTable::init()
    {
        table.clear(); // Clears the map, removes all elements.
    }
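    The error is expected behaviour rather than a quirk: a header has to compile at the point where it is included, so when symboltable.h names a map the compiler must already have seen <map>. Including <map> in the cpp file first only works if every client of the header happens to do the same, which is why the usual guideline is that a header includes what it uses and fully qualifies standard library names instead of relying on a prior "using namespace std;". A self-contained sketch of the header along those lines (the include guard name is illustrative):

    // symboltable.h (self-contained sketch)
    #ifndef SYMBOLTABLE_H
    #define SYMBOLTABLE_H

    #include <map>     // the header names std::map, so it includes <map> itself
    #include <string>  // likewise for std::string

    class SymbolTable
    {
    public:
        SymbolTable() {}
        void insert(std::string variable, double value);
        double lookUp(std::string variable);
        void init();

    private:
        std::map<std::string, double> table; // fully qualified, so no "using namespace std;" is needed here
    };

    #endif

    With the header written this way, the order of includes in symboltable.cpp (or in any other file that uses the class) no longer matters.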

    Read the article

  • Jquery, XML and Google Map

    - by EXPennD
    Hi, I'm integrating a Google Map into my website so that users can add some thumbnails and details of their own house. Here's a code preview of what I want to happen (the script from my page):

    var locations = {};

    function load() {
        var map = new GMap2(document.getElementById("map"));
        map.setCenter(new GLatLng(47.614495, -122.341861), 13);

        GDownloadUrl("markerdata.xml", function(data) {
            var xml = GXml.parse(data);
            var markers = xml.documentElement.getElementsByTagName("marker");
            for (var i = 0; i < markers.length; i++) {
                var name = markers[i].getAttribute("name");
                var address = markers[i].getAttribute("address");
                var type = markers[i].getAttribute("type");
                var latlng = new GLatLng(parseFloat(markers[i].getAttribute("lat")),
                                         parseFloat(markers[i].getAttribute("lng")));
                var store = {latlng: latlng, name: name, address: address, type: type};
                var latlngHash = (latlng.lat().toFixed(6) + "" + latlng.lng().toFixed(6));
                latlngHash = latlngHash.replace(".","").replace(".", "").replace("-","");
                if (locations[latlngHash] == null) {
                    locations[latlngHash] = []
                }
                locations[latlngHash].push(store);
            }
            for (var latlngHash in locations) {
                var stores = locations[latlngHash];
                if (stores.length > 1) {
                    map.addOverlay(createClusteredMarker(stores));
                } else {
                    map.addOverlay(createMarker(stores));
                }
            }
        });
    }

    function createMarker(stores) {
        var store = stores[0];
        var newIcon = MapIconMaker.createMarkerIcon({width: 32, height: 32, primaryColor: "#00ff00"});
        var marker = new GMarker(store.latlng, {icon: newIcon});
        var html = "<b>" + store.name + "</b> <br/>" + store.address;
        GEvent.addListener(marker, 'click', function() {
            marker.openInfoWindowHtml(html);
        });
        return marker;
    }

    function createClusteredMarker(stores) {
        var newIcon = MapIconMaker.createMarkerIcon({width: 44, height: 44, primaryColor: "#00ff00"});
        var marker = new GMarker(stores[0].latlng, {icon: newIcon});
        var html = "";
        for (var i = 0; i < stores.length; i++) {
            html += "<b>" + stores[i].name + "</b> <br/>" + stores[i].address + "<br/>";
        }
        GEvent.addListener(marker, 'click', function() {
            marker.openInfoWindowHtml(html);
        });
        return marker;
    }

    I want this feature to be fully interactive. If possible, the user could drag and drop a marker onto a location on the Google map; a description field would then be enabled so the user could add details and submit them. Also, here's my current situation: the reason I want this done with XML is that the Content Management System I'm using for this project doesn't allow me to add a database or PHP scripts. The only access I have is that I can add new HTML to the BODY section and external JavaScript to the HEAD section. Sorry about the way I've written this, it may sound demanding; it's because I'm still learning jQuery. Thanks everyone!
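    For reference, the loader above reads <marker> elements and their name, address, type, lat and lng attributes, so the markerdata.xml it expects would look roughly like this (the root element name and all values below are made-up placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <markers>
      <marker name="Smith House" address="123 Pine St, Seattle, WA"
              type="house" lat="47.614495" lng="-122.341861"/>
      <marker name="Jones House" address="456 Oak Ave, Seattle, WA"
              type="house" lat="47.620100" lng="-122.348900"/>
    </markers>

    If the CMS lets you upload static assets, this file can sit next to the page; otherwise the same data could be embedded directly in the external JavaScript file.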

    Read the article

< Previous Page | 483 484 485 486 487 488 489 490 491 492 493 494  | Next Page >