Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is the cost of disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing and most of the time there seems to be little performance difference between, say, 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
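    For illustration, a minimal sketch of the threshold idea described above: buffer packets in a MemoryStream until a configurable limit is reached, then copy what has accumulated into a temporary FileStream and keep appending there. The class name, buffer sizes and the use of a temp file are assumptions, not something the question specifies.

        using System;
        using System.IO;

        // Buffers incoming packets in memory and spills everything to a temp file once
        // a size threshold is crossed; afterwards all writes go straight to the file.
        public sealed class SpillableBuffer : IDisposable
        {
            private readonly long _threshold;
            private MemoryStream _memory = new MemoryStream();
            private FileStream _file;   // created lazily when the threshold is hit

            public SpillableBuffer(long thresholdBytes)
            {
                _threshold = thresholdBytes;
            }

            public void Write(byte[] buffer, int offset, int count)
            {
                if (_file == null && _memory.Length + count > _threshold)
                {
                    // Switch to disk: copy what has been buffered so far into a temp file
                    // that is deleted automatically when the stream is closed.
                    _file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                           FileAccess.ReadWrite, FileShare.None,
                                           64 * 1024, FileOptions.DeleteOnClose);
                    _memory.WriteTo(_file);
                    _memory.Dispose();
                    _memory = null;
                }

                if (_file != null)
                    _file.Write(buffer, offset, count);
                else
                    _memory.Write(buffer, offset, count);
            }

            // Call once receiving is finished; the returned stream is rewound to the start.
            public Stream GetReadStream()
            {
                Stream s = _file != null ? (Stream)_file : _memory;
                s.Position = 0;
                return s;
            }

            public void Dispose()
            {
                if (_memory != null) _memory.Dispose();
                if (_file != null) _file.Dispose();
            }
        }

    Whether the spilled data actually reaches the disk is largely up to the OS file cache; as the question already suspects, recently written FileStream data normally stays in RAM until the cache decides to flush it.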

  • Are there any platforms where using structure copy on an fd_set (for select() or pselect()) causes problems?

    - by Jonathan Leffler
    The select() and pselect() system calls modify their arguments (the 'struct fd_set *' arguments), so the input value tells the system which file descriptors to check and the return values tell the programmer which file descriptors are currently usable. If you are going to call them repeatedly for the same set of file descriptors, you need to ensure that you have a fresh copy of the descriptors for each call. The obvious way to do that is to use a structure copy:

        struct fd_set ref_set_rd;
        struct fd_set ref_set_wr;
        struct fd_set ref_set_er;
        ...
        ...code to set the reference fd_set_xx values...
        ...
        while (!done)
        {
            struct fd_set act_set_rd = ref_set_rd;
            struct fd_set act_set_wr = ref_set_wr;
            struct fd_set act_set_er = ref_set_er;
            int bits_set = select(max_fd, &act_set_rd, &act_set_wr,
                                  &act_set_er, &timeout);
            if (bits_set > 0)
            {
                ...process the output values of act_set_xx...
            }
        }

    My question: Are there any platforms where it is not safe to do a structure copy of the struct fd_set values as shown? I'm concerned lest there be hidden memory allocation or anything unexpected like that. (There are macros/functions FD_SET(), FD_CLR(), FD_ZERO() and FD_ISSET() to mask the internals from the application.)

    I can see that MacOS X (Darwin) is safe; other BSD-based systems are likely to be safe, therefore. You can help by documenting other systems that you know are safe in your answers.

    (I do have minor concerns about how well the struct fd_set would work with more than 8192 open file descriptors - the default maximum number of open files is only 256, but the maximum number is 'unlimited'. Also, since the structures are 1 KB, the copying code is not dreadfully efficient, but then running through a list of file descriptors to recreate the input mask on each cycle is not necessarily efficient either. Maybe you can't do select() when you have that many file descriptors open, though that is when you are most likely to need the functionality.)

    There's a related SO question - asking about 'poll() vs select()' - which addresses a different set of issues from this question.

  • Are there compelling reasons not to use Groovy?

    - by Leonard H Martin
    I'm developing a LoB application in Java after a long absence from the platform (having spent the last 8 years or so entrenched in Fortran, C, a smidgin of C++ and latterly .Net). Java, the language, is not much changed from how I remember it. I like its strengths and I can work around its weaknesses - the platform has grown, and deciding upon the myriad of different frameworks which appear to do much the same thing as one another is a different story; but that can wait for another day - all in all I'm comfortable with Java.

    However, over the last couple of weeks I've become enamoured with Groovy, purely from a selfish point of view: not just because it makes development against the JVM a more succinct and entertaining (and, well, "groovy") proposition than Java (the language). What strikes me most about Groovy is its inherent maintainability.

    We all (I hope!) strive to write well documented, easy to understand code. However, sometimes the languages we use themselves defeat us. An example: in 2001 I wrote a library in C to translate EDIFACT EDI messages into ANSI X12 messages. This is not a particularly complicated process, if slightly involved, and I thought at the time I had documented the code properly - and I probably had - but some six years later when I revisited the project (and after becoming acclimatised to C#) I found myself lost in so much C boilerplate (mallocs, pointers, etc.) that it took three days of thoughtful analysis before I finally understood what I'd been doing six years previously.

    This evening I've written about 2000 lines of Java (it is the day of rest, after all!). I've documented it as best as I know how, but of those 2000 lines of Java a significant proportion is boilerplate. This is where I see Groovy and other dynamic languages winning through - maintainability and later comprehension. Groovy lets you concentrate on your intent without getting bogged down in platform-specific implementation; it's almost, but not quite, self documenting. I see this as being a huge boon to me when I revisit my current project (which I'll port to Groovy ASAP) in several years' time, and to my successors who will inherit it and carry on the good work.

    So, are there any reasons not to use Groovy?

  • C++ SQLDriverConnect API

    - by harshalkreddy
    Hi, I am using Visual Studio 2008 and SQL Server 2008 to develop an application (SQL Server is on the same machine). I need to fetch some fields from the database, and I am using the SQLDriverConnect API to connect to it. If I use SQL_DRIVER_PROMPT I get a pop-up window to select the data source. I don't want this window to appear. As I understand it, this window appears if we provide insufficient information in the connection string, but I think I have provided all the information. I am trying to connect with Windows authentication. I tried different options but still no luck. Please help me in solving this problem. Below is the code that I am using:

        //********************************************************************************
        // SQLDriverConnect_ref.cpp
        // compile with: odbc32.lib user32.lib
        #include <windows.h>
        #include <sqlext.h>

        int main()
        {
            SQLHENV henv;
            SQLHDBC hdbc;
            SQLHSTMT hstmt;
            SQLRETURN retcode;
            SQLWCHAR OutConnStr[255];
            SQLSMALLINT OutConnStrLen;
            SQLCHAR ConnStrIn[255] = "DRIVER={SQL Server};SERVER=(local);DSN=MyDSN;DATABASE=MyDatabase;Trusted_Connection=yes;";
            //SQLWCHAR *ConntStr =(SQLWCHAR *) "DRIVER={SQL Server};DSN=MyDSN;";
            HWND desktopHandle = GetDesktopWindow(); // desktop's window handle

            // Allocate environment handle
            retcode = SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);

            // Set the ODBC version environment attribute
            if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO)
            {
                retcode = SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER*)SQL_OV_ODBC3, 0);

                // Allocate connection handle
                if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO)
                {
                    retcode = SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);

                    // Set login timeout to 5 seconds
                    if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO)
                    {
                        SQLSetConnectAttr(hdbc, SQL_LOGIN_TIMEOUT, (SQLPOINTER)5, 0);

                        retcode = SQLDriverConnect( // SQL_NULL_HDBC
                            hdbc,
                            desktopHandle,
                            (SQLWCHAR *)ConnStrIn,
                            SQL_NTS,
                            OutConnStr,
                            255,
                            &OutConnStrLen,
                            SQL_DRIVER_NOPROMPT);

                        // Allocate statement handle
                        if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO)
                        {
                            retcode = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

                            // Process data
                            if (retcode == SQL_SUCCESS || retcode == SQL_SUCCESS_WITH_INFO)
                            {
                                SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
                            }

                            SQLDisconnect(hdbc);
                        }

                        SQLFreeHandle(SQL_HANDLE_DBC, hdbc);
                    }
                }

                SQLFreeHandle(SQL_HANDLE_ENV, henv);
            }
        }
        //********************************************************************************

    Thanks in advance, Harsha

  • Generate A Simple Read-Only DAL?

    - by David
    I've been looking around for a simple solution to this, trying my best to lean towards something like NHibernate, but so far everything I've found seems to be trying to solve a slightly different problem. Here's what I'm looking at in my current project:

    We have an IBM iSeries database as a primary repository for a third party software suite used for our core business (a financial institution). Part of what my team does is write applications that report on or key off of a lot of this data in some way. In the past, we've been manually creating ADO.NET connections (we're using .NET 3.5 and Visual Studio 2008, by the way) and manually writing queries, etc.

    Moving forward, I'd like to simplify the process of getting data from there for the development team. Rather than creating connections and queries and all that each time, I'd much rather a developer be able to simply do something like this:

        var something = (from t in TableName select t);

    And, ideally, they'd just get some IQueryable or IEnumerable of generated entities. This would be done inside a new domain core that I'm building, where these entities would live and the applications would interface with it through a request/response service layer. A few things to note are:

    - The entities that correspond to the database tables should be generated once, and we'd prefer to manually keep them updated over time. That is, if columns/tables are added to the database then we shouldn't have to do anything. (If some are deleted, of course, it will break, but that's fine.) But if we need to use a new column, we should be able to just add it to the necessary class(es) without having to re-gen the whole thing.
    - The whole thing should be SELECT-only. We're not doing a full DAL here because we don't want to be able to break anything in the database (even accidentally).
    - We don't need any kind of mapping between our domain objects and the generated entity types. The domain barely covers a fraction of the data that's in there, most of it we'll never need, and we would rather just create re-usable maps manually over time.

    I already have a logical separation for the DAL where my "repository" classes return domain objects; I'm just looking for a better alternative to manual ADO to be used inside the repository classes. Any suggestions? It seems like what I'm doing is just enough outside the normal demand for DAL/ORM tools/tutorials online that I haven't been able to find anything. Or maybe I'm just overlooking something obvious?
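    For comparison, a minimal SELECT-only helper along the lines described. This is a sketch, not a generated DAL: the class name, the connection factory and the Account entity in the usage comment are assumptions, and it does not provide the LINQ-style "from t in TableName" syntax (that would need a queryable provider for the iSeries); it only shows the read-only shape the repositories could sit on top of.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.Common;

        // Hypothetical read-only query helper: callers supply SQL and a row mapper,
        // and get back materialized entities. No INSERT/UPDATE/DELETE paths exist.
        public class ReadOnlyDal
        {
            private readonly Func<DbConnection> _connectionFactory;

            public ReadOnlyDal(Func<DbConnection> connectionFactory)
            {
                _connectionFactory = connectionFactory;   // e.g. whatever iSeries ADO.NET provider you already use
            }

            public IList<T> Query<T>(string selectSql, Func<IDataRecord, T> map)
            {
                List<T> results = new List<T>();
                using (DbConnection conn = _connectionFactory())
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = selectSql;
                    conn.Open();
                    using (IDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
                    {
                        while (reader.Read())
                            results.Add(map(reader));
                    }
                }
                return results;
            }
        }

        // Example use, with a hand-written entity and mapper (both assumptions):
        //   var accounts = dal.Query("SELECT ACCT_ID, NAME FROM ACCOUNTS",
        //       r => new Account { Id = r.GetInt32(0), Name = r.GetString(1) });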

  • What are the steps to convert this function to a model/controller in Zend Framework?

    - by Joel
    Hi guys, I'm learning Zend Framework MVC, and I have a website that is mainly static php pages. However one of the pages is using functions, etc, and I'm trying to figure out what the process is for converting this to an OOP setup. Within the <body> I have this function (and more, but this is the first function): function filterEventDetails($contentText) { $data = array(); foreach($contentText as $row) { if(strstr($row, 'When: ')) { ##cleaning "when" string to get date in the format "May 28, 2009"## $data['duration'] = str_replace('When: ','',$row); list($when, ) = explode(' to ',$data['duration']); $data['when'] = substr($when,4); if(strlen($data['when'])>13) $data['when'] = trim(str_replace(strrchr($data['when'], ' '),'',$data['when'])); $data['duration'] = substr($data['duration'], 0, strlen($data['duration'])-4); //trimming time zone identifier (UTC etc.) } if(strstr($row, 'Where: ')) { $data['where'] = str_replace('Where: ','',$row); //pr($row); //$where = strstr($row, 'Where: '); //pr($where); } if(strstr($row, 'Event Description: ')) { $event_desc = str_replace('Event Description: ','',$row); //$event_desc = strstr($row, 'Event Description: '); ## Filtering event description and extracting venue, ticket urls etc from it. //$event_desc = str_replace('Event Description: ','',$contentText[3]); $event_desc_array = explode('|',$event_desc); array_walk($event_desc_array,'get_desc_second_part'); //pr($event_desc_array); $data['venue_url'] = $event_desc_array[0]; $data['details'] = $event_desc_array[1]; $data['tickets_url'] = $event_desc_array[2]; $data['tickets_button'] = $event_desc_array[3]; $data['facebook_url'] = $event_desc_array[4]; $data['facebook_icon'] = $event_desc_array[5]; } } return $data; } ?> So right now I have this in my example.phtml view page. I understand this needs to be a model and acted on by the controller, but I'm really not sure where to start with this conversion? This is a function tht is taking info from a Google calendar and parsing it for the view. Thanks for any help!

  • Excel 2008 Can't Parse HTML

    - by VictorV
    I need to export a gridview to excel, I put the return html code from the gridview to a HtmlTextWriter and put this into the response. The result file work fine in excel, excel can parse the html and the result is readable, work perfect on excel 2003 and 2007, but in some machines with Excel 2008 (MACOS) excel shows only the raw html code and can't process this html code. Any idea to configure excel? This is the code to convert: public static void ToExcel(GridView gridView, string fileName) { HttpResponse response = HttpContext.Current.Response; response.Clear(); response.Buffer = true; fileName = fileName.Replace(".xls", string.Empty) + ".xls"; response.AddHeader("content-disposition", "attachment;filename=" + fileName); response.Charset = ""; response.ContentEncoding = Encoding.Unicode; response.BinaryWrite(Encoding.Unicode.GetPreamble()); response.ContentType = MimeTypes.GetContentType(fileName); StringWriter sw = new StringWriter(); HtmlTextWriter hw = new HtmlTextWriter(sw); gridView.AllowPaging = false; //gridView.DataBind(); //Change the Header Row back to white color gridView.HeaderRow.Style.Add("background-color", "#FFFFFF"); //Apply style to Individual Cells for (int i = 0; i < gridView.HeaderRow.Cells.Count; i++) { gridView.HeaderRow.Cells[i].Style.Add("background-color", "yellow"); } for (int i = 0; i < gridView.Rows.Count; i++) { GridViewRow row = gridView.Rows[i]; //Change Color back to white row.BackColor = System.Drawing.Color.White; //Apply text style to each Row row.Attributes.Add("class", "textmode"); //Apply style to Individual Cells of Alternating Row if (i % 2 != 0) { for (int j = 0; j < row.Cells.Count; j++) { row.Cells[j].Style.Add("background-color", "#C2D69B"); } } } gridView.RenderControl(hw); //style to format numbers to string string style = @"<style> .textmode { mso-number-format:\@; } </style>"; response.Write(style); response.Output.Write(sw.ToString()); response.Flush(); response.End(); }
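    If styled HTML output turns out to be unparseable for a particular Excel version, one fallback (a different technique, not a fix for the HTML approach) is to emit plain CSV, which every Excel version opens, at the cost of losing the cell colouring. A rough sketch under that assumption:

        using System;
        using System.Text;
        using System.Web;
        using System.Web.UI.WebControls;

        // Format-neutral fallback: write the GridView out as CSV instead of HTML-as-.xls.
        public static void ToCsv(GridView gridView, string fileName)
        {
            HttpResponse response = HttpContext.Current.Response;
            response.Clear();
            response.AddHeader("content-disposition",
                "attachment;filename=" + fileName.Replace(".xls", string.Empty) + ".csv");
            response.ContentType = "text/csv";

            gridView.AllowPaging = false;
            StringBuilder sb = new StringBuilder();

            // Header row (cell text is HTML-encoded, so decode it first).
            for (int i = 0; i < gridView.HeaderRow.Cells.Count; i++)
            {
                if (i > 0) sb.Append(',');
                string text = HttpUtility.HtmlDecode(gridView.HeaderRow.Cells[i].Text);
                sb.Append('"').Append(text.Replace("\"", "\"\"")).Append('"');
            }
            sb.AppendLine();

            // Data rows.
            foreach (GridViewRow row in gridView.Rows)
            {
                for (int i = 0; i < row.Cells.Count; i++)
                {
                    if (i > 0) sb.Append(',');
                    string text = HttpUtility.HtmlDecode(row.Cells[i].Text);
                    sb.Append('"').Append(text.Replace("\"", "\"\"")).Append('"');
                }
                sb.AppendLine();
            }

            response.Write(sb.ToString());
            response.End();
        }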

  • How to integrate a C++ compiler with a C# application (Visual Studio 2008)

    - by Kasun
    Hi, can someone help me with this issue? I am currently working on my final-year project for my honours degree, and we are developing an application to evaluate programming assignments of students (at first-year student level). I just want to know how to integrate a C++ compiler, from C# code, to compile C++ code.

    In our case we load a student's C++ code into a text area, then with a click on a button we want to compile the code. If there are any compilation errors they should be displayed in a text area nearby. (The interface is attached herewith.) Finally it should be able to execute the code if there aren't any compilation errors, and the results will be displayed in a console.

    We were able to do this for C# code (C# code loaded into the text area instead of C++ code) using the built-in compiler, but we are still not able to do it for C++ code. Can anyone suggest a method to do this? Is it possible to integrate an external compiler from VS C# code? If possible, how do we achieve it? Very grateful to anyone who contributes to solving this matter.

    This is the code for the Build button with which we compile C# code:

        CodeDomProvider codeProvider = CodeDomProvider.CreateProvider("csharp");
        string Output = "Out.exe";
        Button ButtonObject = (Button)sender;
        rtbresult.Text = "";
        System.CodeDom.Compiler.CompilerParameters parameters = new CompilerParameters();
        //Make sure we generate an EXE, not a DLL
        parameters.GenerateExecutable = true;
        parameters.OutputAssembly = Output;
        CompilerResults results = codeProvider.CompileAssemblyFromSource(parameters, rtbcode.Text);

        if (results.Errors.Count > 0)
        {
            rtbresult.ForeColor = Color.Red;
            foreach (CompilerError CompErr in results.Errors)
            {
                rtbresult.Text = rtbresult.Text + "Line number " + CompErr.Line +
                    ", Error Number: " + CompErr.ErrorNumber +
                    ", '" + CompErr.ErrorText + ";" +
                    Environment.NewLine + Environment.NewLine;
            }
        }
        else
        {
            //Successful Compile
            rtbresult.ForeColor = Color.Blue;
            rtbresult.Text = "Success!";
            //If we clicked run then launch our EXE
            if (ButtonObject.Text == "Run")
                Process.Start(Output); // Run button
        }
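    CodeDOM has no C++ provider, so the usual approach is to shell out to an installed compiler and capture its output. A rough sketch, assuming the MinGW g++ executable is on the PATH; the compiler choice, flags and file names are assumptions (cl.exe would also need its environment set up, e.g. via vcvarsall.bat):

        using System;
        using System.Diagnostics;
        using System.IO;

        // Writes the student's C++ source to a temp file, runs an external compiler,
        // and returns the compiler's error text (an empty string means it compiled).
        static string CompileCpp(string sourceCode, string exePath)
        {
            string sourceFile = Path.Combine(Path.GetTempPath(), "student.cpp");
            File.WriteAllText(sourceFile, sourceCode);

            ProcessStartInfo psi = new ProcessStartInfo();
            psi.FileName = "g++";                                  // assumed to be on the PATH
            psi.Arguments = "\"" + sourceFile + "\" -o \"" + exePath + "\"";
            psi.UseShellExecute = false;
            psi.RedirectStandardError = true;                      // g++ reports errors on stderr
            psi.CreateNoWindow = true;

            using (Process p = Process.Start(psi))
            {
                string errors = p.StandardError.ReadToEnd();
                p.WaitForExit();
                return p.ExitCode == 0 ? string.Empty : errors;
            }
        }

        // Usage sketch, mirroring the question's rtbcode / rtbresult controls:
        //   string exe = Path.Combine(Path.GetTempPath(), "student.exe");
        //   string errors = CompileCpp(rtbcode.Text, exe);
        //   if (errors.Length == 0) Process.Start(exe); else rtbresult.Text = errors;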

  • .NET Free memory usage (how to prevent overallocation / release memory to the OS)

    - by Ronan Thibaudau
    I'm currently working on a website that makes heavy use of cached data to avoid roundtrips. At startup we get a "large" graph (hundreds of thousands of different kinds of objects). Those objects are retrieved over WCF and deserialized (we use protocol buffers for serialization). I'm using Red Gate's memory profiler to debug memory issues (the memory didn't seem to fit with how much memory we should need "after" we're done initializing) and end up with this report.

    Now what we can gather from this report is that:

    1) Most of the memory .NET allocated is free (it may have been rightfully allocated during deserialization, but now that it's free, I'd like for it to return to the OS).
    2) Memory is fragmented (which is bad, as every time I refresh the cache I need to redo the memory-hungry deserialization process, and this in turn creates large objects that may throw an OutOfMemoryException due to fragmentation).
    3) I have no clue why the space is fragmented, because when I look at the large object heap, there are only 30 instances: 15 object[] are directly attached to the GC and totally unrelated to me, 1 is a char array also attached directly to the GC heap, and the remaining 15 are mine but are not the cause of this, as I get the same report if I comment them out in code.

    So my question is, what can I do to go further with this? I'm not really sure what to look for in debugging / tools, as it seems my memory is fragmented, but not by me, and huge amounts of free space are allocated by .NET which I can't release.

    Also please make sure you understand the question well before answering: I'm not looking for a way to free memory within .NET (GC.Collect), but to free memory that is already free in .NET to the system, as well as to defragment said memory. Note that a slow solution is fine; if it's possible to manually defragment the large heap I'd be all for it, as I can call it at the end of RefreshCache and it's OK if it takes 1 or 2 seconds to run.

    Thanks for your help! A few notes I forgot:

    1) The project is a .NET 2.0 website; I get the same results running it in a .NET 4 pool, and likewise if I run it in a .NET 4 pool, convert it to .NET 4 and recompile.
    2) These are the results of a release build, so a debug build cannot be the issue.
    3) And this is probably quite important: I do not get these issues at all in the webdev server, only in IIS. In webdev I get memory consumption rather close to my actual consumption (well, more, but not 5-10X more!)
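    One point that may help frame the search: on .NET 2.0 the large object heap is never compacted (an opt-in LOH compaction only arrived much later, in .NET 4.5.1), so the existing fragmentation cannot be defragmented in place. A common mitigation is to keep the deserialization buffers below the roughly 85 KB LOH threshold so future refreshes stop fragmenting it. A minimal sketch of that idea follows; the class and constant names are illustrative, not from the original code.

        using System;
        using System.Collections.Generic;

        // Accumulates data as a list of small chunks instead of one contiguous byte[].
        // Each 64 KB chunk stays in the normal generational heap, which the GC can compact.
        public sealed class ChunkedBuffer
        {
            private const int ChunkSize = 64 * 1024;             // safely below the LOH threshold
            private readonly List<byte[]> _chunks = new List<byte[]>();
            private int _countInLast = ChunkSize;                // forces the first chunk allocation

            public void Append(byte[] data, int offset, int count)
            {
                while (count > 0)
                {
                    if (_countInLast == ChunkSize)
                    {
                        _chunks.Add(new byte[ChunkSize]);
                        _countInLast = 0;
                    }
                    byte[] last = _chunks[_chunks.Count - 1];
                    int toCopy = Math.Min(count, ChunkSize - _countInLast);
                    Buffer.BlockCopy(data, offset, last, _countInLast, toCopy);
                    _countInLast += toCopy;
                    offset += toCopy;
                    count -= toCopy;
                }
            }

            public long Length
            {
                get
                {
                    return _chunks.Count == 0
                        ? 0
                        : (long)(_chunks.Count - 1) * ChunkSize + _countInLast;
                }
            }
        }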

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom & move around the graph, and the program occasionally makes a decision to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned. The idea of this is to reduce the number of web service calls to only those appropriate, and only one at a time.

    Right, with the long story out of the way (for which I apologise), here is my memory management problem: the graph has persistent (and suitably locked) NSDate* objects for the currently displayed start & end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event.

    If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from the destructor, no memory leak occurs for one object on the stack being released, but memory leaks occur if there is more than one object on the stack.

    I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake they are set to readonly). It also doesn't seem to make a difference if I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash).

    I'm sorry this explanation is long winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone who can help, or even suggest anything I may have missed.

  • Store Observer not always being called

    - by Nixarn
    Has anyone else here experienced problems with their Store Observer class not being called always when the user for instance cancels a request (or purchases something) We just had our update that brought in app purchases go live last night, and before that we had obviously tested everything tons of times against the Sandbox and everything was working fine. Now however, when the update went live in a real environment we keep getting issues with the store. For instance, in a freshly booted iPhone / iPod, the first time you run the app, if you then try to make a purchase and then immediately cancel it from the first dialog, it seems as if the callback for the cancel is not getting called. If you then restart the app it seems as if it always works after that, or at least. Same thing with other callbacks, seems as if our store observer isn't listening as the callbacks aren't being registered on the phone. One example of this is if you purchase something, then nothing will happen (if this is the first time the app is launched at least). You get the purchase successful dialog from the app store but it seems as if our own code isn't called. If you then quit the app and restart it the callback gets called. Same problem happens if you for instance try to start a request to download all previous purchases and then immediately cancel it as the first dialog pops up, if you do that then the callback for a failed restore is not called, until you then restart the app and try it again, then it always seems to work. The way we have implemented our store observer is by creating a custom class that's implements the SKPaymentTransactionObserver interface. @interface StoreObserver : NSObject<SKPaymentTransactionObserver> In the class we have implemented the following methods: - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions - (void)paymentQueueRestoreCompletedTransactionsFinished:(SKPaymentQueue *)queue - (void)paymentQueue:(SKPaymentQueue *)queue restoreCompletedTransactionsFailedWithError:(NSError *)error The way our restore process works is that if you tap on the button that allows you to download all we simply run the restoreCompletedTransactions code as follows: [[SKPaymentQueue defaultQueue] restoreCompletedTransactions]; However, the callback, restoreCompletedTransactionsFailedWithError, which has been implemented in the store observer, does not always get called when we try to cancel the request. This happens when you boot the iPhone / iPod and try this for the first time. If you after that restart the app everything works fine. The StoreObserver class is created when our app is launched, just by running the following code: pStoreObserver = [[StoreObserver alloc] init]; [[SKPaymentQueue defaultQueue] addTransactionObserver:pStoreObserver]; Has anyone else had any similar experiences? Or does anyone have any suggestions on how to solve this? As I said, in the sandbox environment everything was working fine, no issues whatsoever, but now once it's gone live we're experiencing these.

  • Thread feeding other threads (multithreading)

    - by alaamh
    I see it's easy to open pipe between two process using fork, but how we can passing open pipe to threads. Assume we need to pass out of PROGRAM A to PROGRAM B "may by more than one thread", PROGRAM B send his output to PROGRAM C #include <stdio.h> #include <stdlib.h> #include <pthread.h> struct targ_s { char* reader; }; void *thread1(void *arg) { struct targ_s *targ = (struct targ_s*) arg; int status, fd[2]; pid_t pid; pipe(fd); pid = fork(); if (pid == 0) { int fd = fileno( targ->fd_reader ); dup2(STDIN_FILENO, fd); close(fd[0]); dup2(fd[1], STDOUT_FILENO); close(fd[1]); execvp ("PROGRAM B", NULL); exit(1); } else { close(fd[1]); dup2(fd[0], STDIN_FILENO); close(fd[0]); execl("PROGRAM C", NULL); wait(&status); return NULL; } } int main(void) { FILE *fpipe; char *command = "PROGRAM A"; char buffer[1024]; if (!(fpipe = (FILE*) popen(command, "r"))) { perror("Problems with pipe"); exit(1); } char* outfile = "out.dat"; FILE* f = fopen (outfile, "wb"); int fd = fileno( f ); struct targ_s targ; targ.fd_reader = outfile; pthread_t thid; if (pthread_create(&thid, NULL, thread1, &targ) != 0) { perror("pthread_create() error"); exit(1); } int len; while (read(fpipe, buffer, sizeof (buffer)) != 0) { len = strlen(buffer); write(fd, buffer, len); } pclose(fpipe); return (0); }

  • How to prevent mvn jetty:run from executing test phase?

    - by tputkonen
    We use MySQL in production, and Derby for unit tests. Our pom.xml copies Derby version of persistence.xml before tests, and replaces it with the MySQL version in prepare-package phase: <plugin> <artifactId>maven-antrun-plugin</artifactId> <version>1.3</version> <executions> <execution> <id>copy-test-persistence</id> <phase>process-test-resources</phase> <configuration> <tasks> <!--replace the "proper" persistence.xml with the "test" version--> <copy file="${project.build.testOutputDirectory}/META-INF/persistence.xml.test" tofile="${project.build.outputDirectory}/META-INF/persistence.xml" overwrite="true" verbose="true" failonerror="true" /> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> <execution> <id>restore-persistence</id> <phase>prepare-package</phase> <configuration> <tasks> <!--restore the "proper" persistence.xml--> <copy file="${project.build.outputDirectory}/META-INF/persistence.xml.production" tofile="${project.build.outputDirectory}/META-INF/persistence.xml" overwrite="true" verbose="true" failonerror="true" /> </tasks> </configuration> <goals> <goal>run</goal> </goals> </execution> </executions> </plugin> The problem is, that if I execute mvn jetty:run it will execute the test persistence.xml file copy task before starting jetty. I want it to be run using the deployment version. How can I fix this?

  • Why does DebugActiveProcessStop crash my debugging app?

    - by SparkyNZ
    I have a debugging program which I've written to attach to a process and create a crash dump file. That part works fine. The problem I have is that when the debugger program terminates, so does the program that it was debugging. I did some Googling and found the DebugActiveProcessStop() API call. This didn't show up in my older MSDN documentation as it was only introduced in Windows XP so I've tried loading it dynamicall from Kernel32.dll at runtime. Now my problem is that my debugger program crashes as soon as the _DebugActiveProcessStop() call is made. Can somebody please tell me what I'm doing wrong? typedef BOOL (*DEBUGACTIVEPROCESSSTOP)(DWORD); DEBUGACTIVEPROCESSSTOP _DebugActiveProcessStop; HMODULE hK32 = LoadLibrary( "kernel32.dll" ); if( hK32 ) _DebugActiveProcessStop = (DEBUGACTIVEPROCESSSTOP) GetProcAddress( hK32,"DebugActiveProcessStop" ); else { printf( "Can't load Kernel32.dll\n" ); return; } if( ! _DebugActiveProcessStop ) { printf( "Can't find DebugActiveProcessStop\n" ); return; } ... void DebugLoop( void ) { DEBUG_EVENT de; while( 1 ) { WaitForDebugEvent( &de, INFINITE ); switch( de.dwDebugEventCode ) { case CREATE_PROCESS_DEBUG_EVENT: hProcess = de.u.CreateProcessInfo.hProcess; break; case EXCEPTION_DEBUG_EVENT: // PDS: I want a crash dump immediately! dwProcessId = de.dwProcessId; dwThreadId = de.dwThreadId; WriteCrashDump( &de.u.Exception ); return; case CREATE_THREAD_DEBUG_EVENT: case OUTPUT_DEBUG_STRING_EVENT: case EXIT_THREAD_DEBUG_EVENT: case EXIT_PROCESS_DEBUG_EVENT : case LOAD_DLL_DEBUG_EVENT: case UNLOAD_DLL_DEBUG_EVENT: case RIP_EVENT: default: break; } ContinueDebugEvent( de.dwProcessId, de.dwThreadId, DBG_CONTINUE ); } } ... void main( void ) { ... BOOL bo = DebugActiveProcess( dwProcessId ); if( bo == 0 ) printf( "DebugActiveProcess failed, GetLastError: %u \n",GetLastError() ); hProcess = OpenProcess( PROCESS_ALL_ACCESS, TRUE, dwProcessId ); if( hProcess == NULL ) printf( "OpenProcess failed, GetLastError: %u \n",GetLastError() ); DebugLoop(); _DebugActiveProcessStop( dwProcessId ); CloseHandle( hProcess ); }

  • How to reliably categorize HTTP sessions in a proxy to the corresponding browser windows/tabs the user is viewing?

    - by Jehonathan
    I was using the Fiddler core .Net library as a local proxy to record the user activity in web. However I ended up with a problem which seems dirty to solve. I have a web browser say Google Chrome, and the user opened like 10 different tabs each with different web URLs. The problem is that the proxy records all the HTTP session initiated by each pages separately, causing me to figure out using my intelligence the tab which the corresponding HTTP session belonged to. I understand that this is because of the stateless nature of HTTP protocol. However I am just wondering is there an easy way to do this? I ended up with below c# code for that in Fiddler. Still its not a reliable solution due to the heuristics. This is a modification of the sample project bundled with Fiddler core for .NET 4. Basically what it does is filtering HTTP sessions initiated in last few seconds to find the first request or switching to another page made by the same tab in browser. It almost works, but not seems to be a universal solution. Fiddler.FiddlerApplication.AfterSessionComplete += delegate(Fiddler.Session oS) { //exclude other HTTP methods if (oS.oRequest.headers.HTTPMethod == "GET" || oS.oRequest.headers.HTTPMethod == "POST") //exclude other HTTP Status codes if (oS.oResponse.headers.HTTPResponseStatus == "200 OK" || oS.oResponse.headers.HTTPResponseStatus == "304 Not Modified") { //exclude other MIME responses (allow only text/html) var accept = oS.oRequest.headers.FindAll("Accept"); if (accept != null) { if(accept.Count>0) if (accept[0].Value.Contains("text/html")) { //exclude AJAX if (!oS.oRequest.headers.Exists("X-Requested-With")) { //find the referer for this request var referer = oS.oRequest.headers.FindAll("Referer"); //if no referer then assume this as a new request and display the same if(referer!=null) { //if no referer then assume this as a new request and display the same if (referer.Count > 0) { //lock the sessions Monitor.Enter(oAllSessions); //filter further using the response if (oS.oResponse.MIMEType == string.Empty || oS.oResponse.MIMEType == "text/html") //get all previous sessions with the same process ID this session request if(oAllSessions.FindAll(a=>a.LocalProcessID == oS.LocalProcessID) //get all previous sessions within last second (assuming the new tab opened initiated multiple sessions other than parent) .FindAll(z => (z.Timers.ClientBeginRequest > oS.Timers.ClientBeginRequest.AddSeconds(-1))) //get all previous sessions that belongs to the same port of the current session .FindAll(b=>b.port == oS.port ).FindAll(c=>c.clientIP ==oS.clientIP) //get all previus sessions with the same referrer URL of the current session .FindAll(y => referer[0].Value.Equals(y.fullUrl)) //get all previous sessions with the same host name of the current session .FindAll(m=>m.hostname==oS.hostname).Count==0 ) //if count ==0 that means this is the parent request Console.WriteLine(oS.fullUrl); //unlock sessions Monitor.Exit(oAllSessions); } else Console.WriteLine(oS.fullUrl); } else Console.WriteLine(oS.fullUrl); Console.WriteLine(); } } } } };

  • JPA database structure for internationalisation

    - by IrishDubGuy
    I am trying to get a JPA implementation of a simple approach to internationalisation. I want to have a table of translated strings that I can reference in multiple fields in multiple tables. So all text occurrences in all tables will be replaced by a reference to the translated strings table. In combination with a language id, this would give a unique row in the translated strings table for that particular field. For example, consider a schema that has entities Course and Module as follows :- Course int course_id, int name, int description Module int module_id, int name The course.name, course.description and module.name are all referencing the id field of the translated strings table :- TranslatedString int id, String lang, String content That all seems simple enough. I get one table for all strings that could be internationalised and that table is used across all the other tables. How might I do this in JPA, using eclipselink 2.4? I've looked at embedded ElementCollection, ala this... JPA 2.0: Mapping a Map - it isn't exactly what i'm after cos it looks like it is relating the translated strings table to the pk of the owning table. This means I can only have one translatable string field per entity (unless I add new join columns into the translatable strings table, which defeats the point, its the opposite of what I am trying to do). I'm also not clear on how this would work across entites, presumably the id of each entity would have to use a database wide sequence to ensure uniqueness of the translatable strings table. BTW, I tried the example as laid out in that link and it didn't work for me - as soon as the entity had a localizedString map added, persisting it caused the client side to bomb but no obvious error on the server side and nothing persisted in the DB :S I been around the houses on this about 9 hours so far, I've looked at this Internationalization with Hibernate which appears to be trying to do the same thing as the link above (without the table definitions it hard to see what he achieved). Any help would be gratefully achieved at this point... Edit 1 - re AMS anwser below, I'm not sure that really addresses the issue. In his example it leaves the storing of the description text to some other process. The idea of this type of approach is that the entity object takes the text and locale and this (somehow!) ends up in the translatable strings table. In the first link I gave, the guy is attempting to do this by using an embedded map, which I feel is the right approach. His way though has two issues - one it doesn't seem to work! and two if it did work, it is storing the FK in the embedded table instead of the other way round (I think, I can't get it to run so I can't see exactly how it persists). I suspect the correct approach ends up with a map reference in place of each text that needs translating (the map being locale-content), but I can't see how to do this in a way that allows for multiple maps in one entity (without having corresponding multiple columns in the translatable strings table)...

  • PHP regex for guitar tab (tabs or tablature, a type of music notation)

    - by John
    I am in the process of creating a guitar tab to rtttl (Ring Tone Text Transfer Language) converter in PHP. In order to prepare a guitar tab for rtttl conversion I first strip out all comments (comments noted by #- and ended with -#), I then have a few lines that set tempo, note the tunning and define multiple instruments (Tempo 120\nDefine Guitar 1\nDefine Bass 1, etc etc) which are stripped out of the tab and set aside for later use. Now I essentially have nothing left except the guitar tabs. Each tab is prefixed with it's instrument name in conjunction with the instrument name noted prior. Some times we have tabs for 2 separate instruments that are linked because they are to be played together, ie a Guitar and a Bass Guitar playing together. Example 1, Standard Guitar Tab: |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| Example 2, Conjunction Tab: |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| | | |Bass 1 G|----------0-------0-----------0-------0--------| D|--------2-----------2-------2-----------2------| A|------3---------------3---3---------------3----| E|----3-------------------3-------------------3--| I have considered other methods of identifying the tabs with no solid results. I am hoping that someone who does regular expressions could help me find a way to identify a single guitar tab and if possible also be able to match a tab with multiple instruments linked together. Once the tabs are in an array I will go through them one line at a time and convert them into rtttl lines (exploded at each new line "\n"). I do not want to separate the guitar tabs in the document via explode "\n\n" or something similar because it does not identify the guitar tab, rather, it is identifying the space between the tabs - not on the tabs themselves. I have been messing with this for about a week now and this is the only major hold up I have. Everything else is fairly simple. As of current, I have tried many variations of the regex pattern. 
Here is one of the most recent test samples: <?php $t = " |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| | | |Bass 1 G|----------0-------0-----------0-------0--------| D|--------2-----------2-------2-----------2------| A|------3---------------3---3---------------3----| E|----3-------------------3-------------------3--| "; preg_match_all("/^.*?(\\|).*?(\\|)/is",$t,$p); print_r($p); ?> It is also worth noting that inside the tabs, where the dashes and #'s are, you may also have any variation of letters, numbers and punctuation. The beginning of each line marks the tuning of each string with one of the following case insensitive: a,a#,b,c,c#,d,d#,e,f,f#,g or g. Thanks in advance for help with this most difficult problem.

  • organizing external libraries and include files

    - by stijn
    Over the years my projects use more and more external libraries, and the way I did it starts feeling more and more awkward (although, that has to be said, it does work flawlessly). I use VS on Windows, CMake on others, and CodeComposer for targetting DSPs on Windows. Except for the DSPs, both 32bit and 64bit platforms are used. Here's a sample of what I am doing now; note that as shown, the different external libraries themselves are not always organized in the same way. Some have different lib/include/src folders, others have a single src folder. Some came ready-to-use with static and/or shared libraries, others were built /path/to/projects /projectA /projectB /path/to/apis /apiA /src /include /lib /apiB /include /i386/lib /amd64/lib /path/to/otherapis /apiC /src /path/to/sharedlibs /apiA_x86.lib -->some libs were built in all possible configurations /apiA_x86d.lib /apiA_x64.lib /apiA_x64d.lib /apiA_static_x86.lib /apiB.lib -->other libs have just one import library /path/to/dlls -->most of this directory also gets distributed to clients /apiA_x86.dll and it's in the PATH /apiB.dll Each time I add an external libary, I roughly use this process: build it, if needed, for different configurations (release/debug/platform) copy it's static and/or import libraries to 'sharedlibs' copy it's shared libraries to 'dlls' add an environment variable, eg 'API_A_DIR' that points to the root for ApiA, like '/path/to/apis/apiA' create a VS property sheet and a CMake file to state include path and eventually the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib add the propertysheet/cmake file to the project needing the library It's especially step 4 and 5 that are bothering me. I am pretty sure I am not the only one facing this problem, and would like see how others deal with this. I was thinking to get rid of the environment variables per library, and use just one 'API_INCLUDE_DIR' and populating it with the include files in an organized way: /path/to/api/include /apiA /apiB /apiC This way I do not need the include path in the propertysheets nor the environment variables. For libs that are only used on windows I even don't need a propertysheet at all as I can use #pragmas to instruct the linker what library to link to. Also in the code it will be more clear what gets included, and no need for wrappers to include files having the same name but are from different libraries: #include <apiA/header.h> #include <apiB/header.h> #include <apiC_version1/header.h> The withdrawal is off course that I have to copy include files, and possibly** introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it? ** actually once libraries are built, the only thing I need from them is the include files and thie libs. Since each of those would have a dedicated directory, the original source tree is not needed anymore so can be deleted..

  • How to return this XML-RPC response in an array using PHP?

    - by mind.blank
    I'm trying to put together a WordPress plugin and I want to grab a list of all categories (of other WordPress blogs) via XML-RPC. I have the following code and it looks like it works so far: function get_categories($rpcurl,$username,$password){ $rpcurl2 = $rpcurl."/xmlrpc.php"; $params = array(0,$username,$password,true); $request = xmlrpc_encode_request('metaWeblog.getCategories',$params); $ch = curl_init(); curl_setopt($ch, CURLOPT_URL, $rpcurl2); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-Type: text/xml")); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_TIMEOUT, 10); curl_setopt($ch, CURLOPT_POSTFIELDS, $request); $results = curl_exec($ch); $res = xmlrpc_decode($results); curl_close($ch); return $res; } If I use $res I get the following string as the response: Array If I use $results then I get: categoryId17 parentId0 descriptionTest categoryDescription categoryNameTest htmlUrlhttp://test.yoursite.com/?cat=17 rssUrlhttp://test.yoursite.com/?feed=rss2&amp;cat=17 categoryId1 parentId0 descriptionUncategorized categoryDescription categoryNameUncategorized htmlUrlhttp://test.yoursite.com/?cat=1 rssUrlhttp://test.yoursite.com/?feed=rss2&amp;cat=1 I need to pull out the names after description so Uncategorized and Test in this case. It's my first time coding in PHP. I got these results by echoing them to the page, so not sure if they get changed in that process or not... By the way I modified the above code from one that posts to a WordPress blog remotely so maybe I haven't set some of the options correctly? With var_dump($res) I get: array(2) { [0]=> array(7) { ["categoryId"]=> string(2) "17" ["parentId"]=> string(1) "0" ["description"]=> string(4) "Test" ["categoryDescription"]=> string(0) "" ["categoryName"]=> string(4) "Test" ["htmlUrl"]=> string(40) "http://test.youreventwebsite.com/?cat=17" ["rssUrl"]=> string(54) "http://test.youreventwebsite.com/?feed=rss2&cat=17" } [1]=> array(7) { ["categoryId"]=> string(1) "1" ["parentId"]=> string(1) "0" ["description"]=> string(13) "Uncategorized" ["categoryDescription"]=> string(0) "" ["categoryName"]=> string(13) "Uncategorized" ["htmlUrl"]=> string(39) "http://test.youreventwebsite.com/?cat=1" ["rssUrl"]=> string(53) "http://test.youreventwebsite.com/?feed=rss2&cat=1" } }

  • How to set a footer image in this video screen?

    - by bala
    hi my problem is how to create footer image in this video screen while playing.... how to create this format. now i am give my description: • 1) Header image, a stretched background image. The location of this external image comes from the application xml; • 2) Footer image, a stretched background image. The location of this external image comes from the application xml; 2.a) the copyright, disclaimer and buy block, this block contains links to popup windows that contain a copyright and or disclaimer. And an option to buy the application for the advertisement less version. The content of this block is fed trough the application XML feed. The color of the text is fed by the application xml plus the popup links and texts itself; • 3) Carousel image, a stretched background image. The location of this external image comes from the application xml; 3.a) the carousel contains objects that can flow from right to left, possibly trough a animation (a soft break of the slide). The first object is centered in the middle of the carousel. This is the first element in the video feed. All the subsequent video object are added to the right of the centered object; 4) Total Video object, this object links to window two with the corresponding video of this object. This object is visually build out of the following sub parts:o 4.a) Thumb object (possible playing video thumb); 4.b) Reflection of the Thumb; 4.c) Textual Explanation of Thumb. 1) Video stream, this is the video stream coming from a external server streamed to the television (maybe up scaled) as 720p stream; 2) Advertisement, the type of advertisement shown overlaid on the video is based on previous settings in the video feed. This could mean that Admob, Adsense or a third party image plus URL could be shown. When the advertisement is selected trough navigation (it will highlight in a different color as a border around the advertisement. The color an thickness can be managed trough the application xml), when clicked a browser will open with the associated site (the application will be pushed to the background process, when the user is finished it will return to the app); 3) A back button, an image and navigational element. The location of the image comes from the application xml. The button is only shown when a cursor is moved (a button is pressed on the remote) it will highlight when selected and when pressed will forward the screen to the main window. When the main window is opened the video will be removed from cache and memory and cannot be start from the point it was exited. please give me your idea....

  • ASP.NET MVC 2 Preview 2 Route Request Not Working

    - by Kezzer
    Here's the error: The incoming request does not match any route. Basically I upgraded from Preview 1 to Preview 2 and got rid of a load of redundant stuff in relation to areas (as described by Phil Haack). It didn't work so I created a brand new project to check out how its dealt with in Preview 2. The file Default.aspx no longer exists which contains the following: public void Page_Load(object sender, System.EventArgs e) { // Change the current path so that the Routing handler can correctly interpret // the request, then restore the original path so that the OutputCache module // can correctly process the response (if caching is enabled). string originalPath = Request.Path; HttpContext.Current.RewritePath(Request.ApplicationPath, false); IHttpHandler httpHandler = new MvcHttpHandler(); httpHandler.ProcessRequest(HttpContext.Current); HttpContext.Current.RewritePath(originalPath, false); } The error I received points to the line httpHandler.ProcessRequest(HttpContext.Current); yet in newer projects none of this even exists. To test it, I quickly deleted Default.aspx but then absolutely nothing worked, I didn't even receive any errors. Here's some code extracts: Global.asax.cs using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Mvc; using System.Web.Routing; namespace Intranet { public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); AreaRegistration.RegisterAllAreas(); routes.MapRoute( "Default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); } protected void App_Start() { RegisterRoutes(RouteTable.Routes); } } } Notice the area registration as that's what I'm using. Routes.cs using System.Web.Mvc; namespace Intranet.Areas.Accounts { public class Routes : AreaRegistration { public override string AreaName { get { return "Accounts"; } } public override void RegisterArea(AreaRegistrationContext context) { context.MapRoute("Accounts_Default", "Accounts/{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" }); } } } Check the latest docs for more info on this part. It's to register the area. The Routes.cs files are located in the root folder of each area. Cheers

  • Persistence scheme & state data for low memory situations (iPhone)

    - by Robin Jamieson
    What happens to state information held by a class's variable after coming back from a low memory situation? I know that views will get unloaded and then reloaded later but what about some ancillary classes & data held in them that's used by the controller that launched the view? Sample scenario in question: @interface MyCustomController: UIViewController { ServiceAuthenticator *authenticator; } -(id)initWithAuthenticator:(ServiceAuthenticator *)auth; // the user may press a button that will cause the authenticator // to post some data to the service. -(IBAction)doStuffButtonPressed:(id)sender; @end @interface ServiceAuthenticator { BOOL hasValidCredentials; // YES if user's credentials have been validated NSString *username; NSString *password; // password is not stored in plain text } -(id)initWithUserCredentials:(NSString *)username password:(NSString *)aPassword; -(void)postData:(NSString *)data; @end The app delegate creates the ServiceAuthenticator class with some user data (read from plist file) and the class logs the user with the remote service. inside MyAppDelegate's applicationDidFinishLaunching: - (void)applicationDidFinishLaunching:(UIApplication *)application { ServiceAuthenticator *auth = [[ServiceAuthenticator alloc] initWithUserCredentials:username password:userPassword]; MyCustomController *controller = [[MyCustomController alloc] initWithNibName:...]; controller.authenticator = auth; // Configure and show the window [window addSubview:..]; // make everything visible [window makeKeyAndVisible]; } Then whenever the user presses a certain button, 'MyCustomController's doStuffButtonPressed' is invoked. -(IBAction)doStuffButtonPressed:(id)sender { [authenticator postData:someDataFromSender]; } The authenticator in-turn checks to if the user is logged in (BOOL variable indicates login state) and if so, exchanges data with the remote service. The ServiceAuthenticator is the kind of class that validates the user's credentials only once and all subsequent calls to the object will be to postData. Once a low memory scenario occurs and the associated nib & MyCustomController will get unloaded -- when it's reloaded, what's the process for resetting up the 'ServiceAuthenticator' class & its former state? I'm periodically persisting all of the data in my actual model classes. Should I consider also persisting the state data in these utility style classes? Is that the pattern to follow?

  • C# video equivalent to Image.FromStream? Or changing the scope of the following script to allow video

    - by Daniel
    The following is a part of an upload class in a c# script. I'm a php programmer, I've never messed with c# much but I'm trying to learn. This upload script will not handle anything except images, I need to adapt this class to handle other types of media also, or rewrite it all together. If I'm correct, I realize that using (Image image = Image.FromStream(file.InputStream)) basically says that the scope of the following is Image, only an image can be used or the object is discarded? And also that the variable image is being created from an Image from the file stream, which I understand to be, like... the $_FILES array in php? I dunno, I don't really care about making thumbnails right now either way, so if this can be taken out and still process the upload I'm totally cool with that, I just haven't had any luck getting this thing to take anything but images, even when commenting out that whole part of the class... protected void Page_Load(object sender, EventArgs e) { string dir = Path.Combine(Request.PhysicalApplicationPath, "files"); if (Request.Files.Count == 0) { // No files were posted Response.StatusCode = 500; } else { try { // Only one file at a time is posted HttpPostedFile file = Request.Files[0]; // Size limit 100MB if (file.ContentLength > 102400000) { // File too large Response.StatusCode = 500; } else { string id = Request.QueryString["userId"]; string[] folders = userDir(id); foreach (string folder in folders) { dir = Path.Combine(dir, folder); if (!Directory.Exists(dir)) Directory.CreateDirectory(dir); } string path = Path.Combine(dir, String.Concat(Request.QueryString["batchId"], "_", file.FileName)); file.SaveAs(path); // Create thumbnail int dot = path.LastIndexOf('.'); string thumbpath = String.Concat(path.Substring(0, dot), "_thumb", path.Substring(dot)); using (Image image = Image.FromStream(file.InputStream)) { // Find the ratio that will create maximum height or width of 100px. double ratio = Math.Max(image.Width / 100.0, image.Height / 100.0); using (Image thumb = new Bitmap(image, new Size((int)Math.Round(image.Width / ratio), (int)Math.Round(image.Height / ratio)))) { using (Graphics graphic = Graphics.FromImage(thumb)) { // Make sure thumbnail is not crappy graphic.SmoothingMode = SmoothingMode.HighQuality; graphic.InterpolationMode = InterpolationMode.High; graphic.CompositingQuality = CompositingQuality.HighQuality; // JPEG ImageCodecInfo codec = ImageCodecInfo.GetImageEncoders()[1]; // 90% quality EncoderParameters encode = new EncoderParameters(1); encode.Param[0] = new EncoderParameter(Encoder.Quality, 90L); // Resize graphic.DrawImage(image, new Rectangle(0, 0, thumb.Width, thumb.Height)); // Save thumb.Save(thumbpath, codec, encode); } } } // Success Response.StatusCode = 200; } } catch { // Something went wrong Response.StatusCode = 500; } } }
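    For the video question itself: the base class library has no video analogue of Image.FromStream, so a common approach is to branch on the posted content type, save non-images as-is, and only run the thumbnail code for images. A minimal sketch of that branch; the method name and the ffmpeg remark are assumptions, not part of the original script:

        using System;
        using System.Drawing;
        using System.Web;

        // Sketch: save any posted file; only attempt thumbnail work when the browser
        // reports an image content type. Video/audio files are stored as-is.
        static void SaveUpload(HttpPostedFile file, string path)
        {
            file.SaveAs(path);

            bool isImage = file.ContentType != null &&
                           file.ContentType.StartsWith("image/", StringComparison.OrdinalIgnoreCase);
            if (!isImage)
                return;   // no Image.FromStream equivalent exists for video in the BCL;
                          // a video thumbnail would need an external tool such as ffmpeg

            file.InputStream.Position = 0;          // rewind in case SaveAs consumed the stream
            using (Image image = Image.FromStream(file.InputStream))
            {
                // ... the existing thumbnail code from the question would go here ...
            }
        }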

  • Why does the .apk not get installed in the Android emulator?

    - by Saravana
    I tried the following code with android 2.3.3 (AVD). When i run this code it waits saying Waiting for HOME ('android.process.acore') to be launched... but keeps on waiting. So i tried running second time .. this time it says [2011-03-04 12:28:39 - DialANumber] Uploading DialANumber.apk onto device 'emulator-5554' [2011-03-04 12:28:39 - DialANumber] Installing DialANumber.apk... [2011-03-04 12:29:14 - DialANumber] HOME is up on device 'emulator-5554' [2011-03-04 12:29:14 - DialANumber] Uploading DialANumber.apk onto device 'emulator-5554' [2011-03-04 12:29:14 - DialANumber] Installing DialANumber.apk... and after some time fails with [2011-03-04 12:31:37 - DialANumber] Failed to install DialANumber.apk on device 'emulator-5554! [2011-03-04 12:31:37 - DialANumber] (null) [2011-03-04 12:31:39 - DialANumber] Launch canceled! the code follows: package com.DialANumber; import android.app.Activity; import android.content.Intent; import android.net.Uri; import android.os.Bundle; import android.view.KeyEvent; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.LinearLayout; public class DialANumber extends Activity { EditText mEditText_number = null; LinearLayout mLinearLayout_no_button = null; Button mButton_dial = null; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); mLinearLayout_no_button = new LinearLayout(this); mEditText_number = new EditText(this); mEditText_number.setText("5551222"); mLinearLayout_no_button.addView(mEditText_number); mButton_dial = new Button(this); mButton_dial.setText("Dial!"); mLinearLayout_no_button.addView(mButton_dial); mButton_dial.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { performDial(); } }); setContentView(mLinearLayout_no_button); } public boolean onKeyDown(int keyCode, KeyEvent event) { if (keyCode == KeyEvent.KEYCODE_CALL) { performDial(); return true; } return false; } public void performDial(){ if(mEditText_number!=null){ try { startActivity(new Intent(Intent.ACTION_CALL, Uri.parse("tel:" + mEditText_number.getText()))); } catch (Exception e) { e.printStackTrace(); } }//if } } I am just starting to learn developing android apps. please help me out.. Thanks.

  • A new MEF error I've not seen before -- "The export is not assignable to type..."

    - by Dave
    I was very surprised to get this error today, as it's one that I've never encountered before. Everything in the code looked okay, so I did some searches. The previous questions and their respective answers didn't help. This one was solved when the poster made sure his assembly references were consistent. I don't have this issue right now because I'm currently referencing another project in my solution. This one was solved when the poster was instructed to use ImportMany, but I am already using it (I think properly, too) to try to load multiple plugins This one was solved when the poster realized that there was a platform target mismatch. I've already gone through my projects to ensure that everything targets x86. So here's what I am trying to do. I have a plugin that owns a connection to a device. I might also need to be able to share that connection with another plugin. I decided that the cleanest way to do this was to create an interface that would allow the slave plugin to request its own connection to the device. Let's just call it IConnectionSharer. If the slave plugin does not need to borrow this connection and has its own, then it should use its own implementation of IConnectionSharer to connect to the device. My "master" plugin (the one that owns the connection to the device) implements IConnectionSharer. It also exports this via ExportAttribute. My "slave" plugin assembly defines a class that also implements and exports IConnectionSharer. When the application loads, the intent is for my slave plugin, via MEF, to enumerate all IConnectionSharers and store them in an IEnumerable<IConnectionSharer>. It does so like this: [ImportMany] public IEnumerable<IConnectionSharer> AllSharedConnections { get; set; } But during part composition, I get the error the export 'Company.MasterPlugin (ContractName="IConnectionSharer")' is not assignable to type 'IConnectionSharer'. The error message itself seems clear enough -- it's as if MEF thinks my master plugin doesn't inherit from IConnectionSharer... but it does! Can anyone suggest further debugging strategies? I'm going to start the painful process of single stepping through the MEF source.
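    For reference, a stripped-down shape of the export/import pairing. In practice this message often means either that the export was declared with a contract name but no contract type, or that the exporting and importing projects are compiled against two different copies of the assembly that defines IConnectionSharer, so the CLR sees two distinct interface types. The class names below are placeholders for the real plugins:

        using System.Collections.Generic;
        using System.ComponentModel.Composition;   // MEF

        public interface IConnectionSharer
        {
            void ShareConnection();
        }

        // Master plugin: export with an explicit contract *type*, not just a name,
        // and make sure every project references the same assembly defining the interface.
        [Export(typeof(IConnectionSharer))]
        public class MasterPlugin : IConnectionSharer
        {
            public void ShareConnection() { /* owns the device connection */ }
        }

        [Export(typeof(IConnectionSharer))]
        public class SlavePlugin : IConnectionSharer
        {
            public void ShareConnection() { /* creates its own connection */ }
        }

        public class ConnectionConsumer
        {
            // Matches every IConnectionSharer export by contract type.
            [ImportMany(typeof(IConnectionSharer))]
            public IEnumerable<IConnectionSharer> AllSharedConnections { get; set; }
        }

    If the interface lives in its own shared assembly that every plugin project references (rather than each carrying its own copy), the "not assignable" error generally disappears.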
