Search Results

Search found 8219 results on 329 pages for 'less'.


  • Properties vs. Fields: Need help grasping the uses of Properties over Fields.

    - by pghtech
    First off, I have read through a list of postings on this topic and I don't feel I have grasped properties, because of what I had come to understand about encapsulation and field modifiers (private, public, etc.). One of the main aspects of C# that I have come to learn is the importance of data protection within your code through encapsulation. I thought that protection came from the access modifiers themselves (private, public, internal, protected). However, after learning about properties I am torn in my understanding, not only of what properties are for, but of the overall importance of data protection (what I understood as encapsulation) within C#.
    To be more specific, everything I have read about properties in C# says you should use them in place of fields when you can, because: 1) they allow you to change the underlying data type later, which you can't do when the field is accessed directly; and 2) they add a level of protection to data access. However, from what I thought I knew, the field modifiers already did #2, so it seemed to me that properties just generate additional code unless you have some reason to change the type (#1), because you are (more or less) creating hidden methods to access fields instead of accessing them directly. Then there are the modifiers that can be applied to properties themselves, which further complicates my understanding of the need for properties to access data. I have read a number of chapters from different writers on properties and none have really explained properties vs. fields vs. encapsulation (and good programming methods) well.
    Can someone explain: 1) why I would want to use properties instead of fields (especially when it appears I am just adding additional code); 2) any tips for recognizing properties when tracing other people's code, rather than seeing them simply as methods (except where the get/set is apparent); and 3) any general rules of thumb for when to use what? Thanks, and sorry for the long post. I didn't want to just ask a question that has been asked 100x without explaining why I am asking it again.
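    For illustration, here is a minimal C# sketch of the field/property distinction being asked about; the Account class and its members are made up for the example, not taken from any real codebase.

    using System;

    public class Account
    {
        // Public field: callers bind straight to the storage, so swapping it
        // for a property later is a breaking change for compiled callers.
        public decimal Balance;

        // Auto-property: same call syntax for callers, but the compiler generates
        // a hidden get/set pair, so validation or logging can be added later.
        public decimal Limit { get; set; }

        // Property with logic: the backing field stays private and the setter
        // can reject bad data, which is the "level of protection" meant in #2.
        private string owner = "";
        public string Owner
        {
            get { return owner; }
            set
            {
                if (string.IsNullOrEmpty(value))
                    throw new ArgumentException("Owner must not be empty");
                owner = value;
            }
        }
    }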

    Read the article

  • sql query - how to apply limit within group by

    - by Raj
    Hey guys, assuming I have a table named t1 with the following fields: ROWID, CID, PID, Score, SortKey. It has the following data:
    1, C1, P1, 10, 1
    2, C1, P2, 20, 2
    3, C1, P3, 30, 3
    4, C2, P4, 20, 3
    5, C2, P5, 30, 2
    6, C3, P6, 10, 1
    7, C3, P7, 20, 2
    What query do I write so that it groups by CID, but instead of returning a single row per group it returns a maximum of 2 rows per group? The where condition is Score >= 20, and I want the results ordered by CID and SortKey. If I ran the query on the data above, I would expect the following result:
    RESULTS FOR C1 (note: ROWID 1 is not considered since its score is < 20)
    C1, P2, 20, 2
    C1, P3, 30, 3
    RESULTS FOR C2 (note: ROWID 5 appears before ROWID 4 since ROWID 5 has the smaller SortKey)
    C2, P5, 30, 2
    C2, P4, 20, 3
    RESULTS FOR C3 (note: ROWID 6 does not appear since its score is less than 20, so only 1 record is returned here)
    C3, P7, 20, 2
    In short, I want a LIMIT within a GROUP BY. I want the simplest solution and want to avoid temp tables; subqueries are fine. Also note I am using SQLite for this.
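    For illustration, here is a sketch of how the "limit within a GROUP BY" could be written in SQLite with a correlated subquery; it assumes SortKey is unique within each CID (true for the sample data) and reads the score condition as Score >= 20, to match the expected results.

    SELECT a.CID, a.PID, a.Score, a.SortKey
    FROM t1 AS a
    WHERE a.Score >= 20
      AND (SELECT COUNT(*)
             FROM t1 AS b
            WHERE b.CID = a.CID
              AND b.Score >= 20
              AND b.SortKey < a.SortKey) < 2   -- keep at most 2 rows per CID
    ORDER BY a.CID, a.SortKey;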

    Read the article

  • Can the Singleton be replaced by Factory?

    - by lostiniceland
    Hello everyone. There are already quite a few posts about the Singleton pattern around, but I would like to start another one on this topic, since I would like to know whether the Factory pattern would be the right approach to removing this "anti-pattern". In the past I used the singleton quite a lot, as did my fellow colleagues, since it is so easy to use. For example, the Eclipse IDE, or rather its workbench model, makes heavy use of singletons as well. It was some posts about E4 (the next big Eclipse version) that made me start to rethink the singleton. The bottom line was that because of these singletons the dependencies in Eclipse 3.x are tightly coupled.
    Let's assume I want to get rid of all singletons completely and instead use factories. My thoughts were as follows:
    1. hide complexity
    2. less coupling
    3. I have control over how many instances are created (just store the reference in a private field of the factory)
    4. the factory can be mocked for testing (with dependency injection) when it is behind an interface
    5. in some cases one factory can make more than one singleton obsolete (depending on business logic/component composition)
    Does this make sense? If not, please give good reasons why you think so. An alternative solution is also appreciated.
    Thanks, Marc
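    As a rough Java sketch of point 3 in the list above, a factory can keep the one instance in a private field and so decide how many instances exist; the Workbench names below are invented for the example.

    interface Workbench { }

    class DefaultWorkbench implements Workbench { }

    // The factory, not the callers, decides how many instances exist,
    // and a test can substitute a different factory behind the same interface.
    public class WorkbenchFactory {
        private Workbench instance;   // the reference lives in a private field

        public synchronized Workbench getWorkbench() {
            if (instance == null) {
                instance = new DefaultWorkbench();   // or create one per call, per business rules
            }
            return instance;
        }
    }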

    Read the article

  • Performance question: Inverting an array of pointers in-place vs array of values

    - by Anders
    The background for asking this question is that I am solving a linearized equation system (Ax=b), where A is a matrix (typically of dimension less than 100x100) and x and b are vectors. I am using a direct method, meaning that I first invert A, then find the solution by x=A^(-1)b. This step is repeated in an iterative process until convergence.
    The way I'm doing it now, using a matrix library (MTL4): for every iteration I copy all coefficients of A (values) into the matrix object, then invert. This is the easiest and safest option.
    Using an array of pointers instead: for my particular case, the coefficients of A happen to be updated between each iteration. These coefficients are stored in different variables (some are arrays, some are not). Would there be a potential performance gain if I set up A as an array containing pointers to these coefficient variables, then inverted A in-place? The nice thing about the last option is that once I have set up the pointers in A before the first iteration, I would not need to copy any values between successive iterations. The values which are pointed to in A would automatically be updated between iterations.
    So the performance question boils down to this, as I see it:
    - The matrix inversion process takes roughly the same amount of time, assuming de-referencing of pointers is not expensive.
    - The array of pointers does not need the extra memory for a matrix A containing values.
    - The array-of-pointers option does not have to copy all NxN values of A between each iteration.
    - The values that are pointed to in the array-of-pointers option are generally NOT ordered in memory. Hopefully, all values lie relatively close in memory, but *A[0][1] is generally not next to *A[0][0], etc.
    Any comments on this? Will the last remark affect performance negatively, thus outweighing the positive performance effects?

    Read the article

  • IO operation taking a long time for files on a remote server

    - by user841311
    I have files of size 150 MB each on a remote server in a different domain in the network. I am accessing them through a UNC path. I want to read the file content and perform a basic string search. When I try reading the files line by line, the operation just doesn't finish and takes a long time, more than 30 minutes. However, when I copy those files to my local machine, the same code reads and performs the string search in less than 5 seconds. I don't have the .NET framework installed on the server, so I have to do this from my machine. I want to perform all this through C# code in .NET framework 3.5, so I don't want to explicitly FTP all the files to my machine before performing this operation. Sample code:
    DirectoryInfo dir = new DirectoryInfo(strFilePath);
    FileInfo[] fiArray = dir.GetFiles("*.txt");
    foreach (FileInfo fi in fiArray)
    {
        // reading file content from the server takes a long time but is fast on the local machine
        // perform string search
    }
    Let me know if my requirement is not clear. Thanks in advance!
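    As a hedged sketch only: the loop below reads each remote file in a single call instead of line by line, which cuts the number of round trips over the UNC path. Whether that actually removes the delay depends on the share and the network, so treat it as something to measure rather than a guaranteed fix; the path and search string are placeholders.

    using System;
    using System.IO;

    class RemoteSearch
    {
        static void Main()
        {
            string strFilePath = @"\\server\share\logs";   // hypothetical UNC path
            string needle = "ERROR";                       // hypothetical search string

            DirectoryInfo dir = new DirectoryInfo(strFilePath);
            foreach (FileInfo fi in dir.GetFiles("*.txt"))
            {
                // one sequential read per file instead of many small line reads
                string content = File.ReadAllText(fi.FullName);
                if (content.Contains(needle))
                    Console.WriteLine("Match in " + fi.Name);
            }
        }
    }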

    Read the article

  • How do I create a good evaluation function for a new board game?

    - by A. Rex
    I write programs to play board game variants sometimes. The basic strategy is standard alpha-beta pruning or similar searches, sometimes augmented by the usual approaches to endgames or openings. I've mostly played around with chess variants, so when it comes time to pick my evaluation function, I use a basic chess evaluation function. However, now I am writing a program to play a completely new board game. How do I choose a good, or even decent, evaluation function?
    The main challenges are that the same pieces are always on the board, so the usual material function won't change based on position, and the game has been played less than a thousand times or so, so humans don't necessarily play it well enough yet to give insight. (PS: I considered a MoGo approach, but random games aren't likely to terminate.) Any ideas?
    Game details: The game is played on a 10-by-10 board with a fixed six pieces per side. The pieces have certain movement rules and interact in certain ways, but no piece is ever captured. The goal of the game is to have enough of your pieces in certain special squares on the board. The goal of the computer program is to provide a player which is competitive with or better than current human players.

    Read the article

  • pagination in jsf

    - by gurupriyan.e
    I would like your comments and suggestions on this. I am doing pagination for a page in JSF. The datatable is bound to a backing-bean property through the "binding" attribute. I have 2 boolean variables to determine whether to render the 'Prev' and 'Next' buttons, which are displayed below the datatable. When either the 'Prev' or 'Next' button is clicked, in the backing bean I get the bound dataTable property, and through it I get the "first" and "rows" attributes of the datatable and change them accordingly. I display 5 rows on the page. Please comment and suggest if there are any better ways. By the way, I am not interested in any JSF component libraries; I want to stick to the core HTML render kit only.
    public String goNext() {
        UIData htdbl = getBrowseResultsHTMLDataTable1();
        setShowPrev(true);
        // set Rows to "0" or "5"
        if (getDisplayResults().size() - (htdbl.getFirst() + 5) > 5) {
            htdbl.setRows(5); // display 5 rows
        } else if (getDisplayResults().size() - (htdbl.getFirst() + 5) <= 5) {
            htdbl.setRows(0); // display all rows (which are fewer than 5)
            setShowNext(false);
        }
        // set First
        htdbl.setFirst(htdbl.getFirst() + 5);
        return "success";
    }

    public String goPrev() {
        setShowNext(true);
        UIData htdbl = getBrowseResultsHTMLDataTable1();
        // set First
        htdbl.setFirst(htdbl.getFirst() - 5);
        if (htdbl.getFirst() == 0) {
            setShowPrev(false);
        }
        // set Rows - always display 5
        htdbl.setRows(5); // display 5 rows
        return "success";
    }

    Read the article

  • Trouble with __VA_ARGS__

    - by Noah Roberts
    "C++ preprocessor __VA_ARGS__ number of arguments": the accepted answer there doesn't work for me. I've tried with MSVC++ 10 and g++ 3.4.5. I also crunched the example down into something smaller and started trying to get some information printed out to me in the error:
    template < typename T > struct print;
    #include <boost/mpl/vector_c.hpp>
    #define RSEQ_N 10,9,8,7,6,5,4,3,2,1,0
    #define ARG_N(_1,_2,_3,_4,_5,_6,_7,_8,_9,_10,N,...) N
    #define ARG_N_(...) ARG_N(__VA_ARGS__)
    #define XXX 5,RSEQ_N
    #include <iostream>
    int main()
    {
        print< boost::mpl::vector_c<int, ARG_N_( XXX ) > > g;  // ARG_N doesn't work either.
    }
    It appears to me that the argument for ARG_N ends up being 'XXX' instead of 5,RSEQ_N, much less 5,10,...,0. The error output of g++ more specifically says that only one argument is supplied. I have trouble believing that the answer would be proposed and then accepted when it totally fails to work, so what am I doing wrong? Why is XXX being interpreted as the argument and not being expanded?
    In my own messing around, everything works fine until I try to pass __VA_ARGS__ off to a macro containing some names followed by ..., like so:
    #define WTF(X,Y,...) X , Y , __VA_ARGS__
    #define WOT(...) WTF(__VA_ARGS__)
    WOT(52,2,5,2,2)
    I've tried both with and without () in the various macros that take no input.

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time. More than a few hundred microseconds difference between the readings of each wire will completely screw it up. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would skew the accuracy of the readings.
    So my plan is to have an Arduino as the "master" of an array of ATTinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATTiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing this for price as well as size.) The ATTinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's this communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?
    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that currently my Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article

  • With Go, how to append an unknown number of bytes into a vector and get a slice of bytes?

    - by Stephen Hsu
    I'm trying to encode a large number as a list of bytes (uint8 in Go). The number of bytes is unknown, so I'd like to use a vector. But Go doesn't provide a vector of byte; what can I do? And is it possible to get a slice of such a byte vector?
    I intend to implement data compression. Instead of storing small and large numbers with the same number of bytes, I'm implementing a variable-byte scheme that uses fewer bytes for small numbers and more bytes for large numbers. My code does not compile (invalid type assertion):
    package main

    import (
        //"fmt"
        "container/vector"
    )

    func vbEncodeNumber(n uint) []byte {
        bytes := new(vector.Vector)
        for {
            bytes.Push(n % 128)
            if n < 128 {
                break
            }
            n /= 128
        }
        bytes.Set(bytes.Len()-1, bytes.Last().(byte)+byte(128))
        return bytes.Data().([]byte) // <- invalid type assertion reported here
    }

    func main() { vbEncodeNumber(10000) }
    I wish to write a lot of such data into a binary file, so I'd like the function to return a byte array. I haven't found a code example using vector.
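    For comparison, here is a sketch of the same loop written against a plain []byte with append, the usual substitute for a growable byte vector; it mirrors the algorithm in the question rather than any particular compression library.

    package main

    import "fmt"

    // vbEncode mirrors the loop from the question: emit n%128 digits,
    // stop once n < 128, then add 128 to the last byte as a terminator.
    func vbEncode(n uint) []byte {
        var out []byte // a slice grows as needed, so no vector type is required
        for {
            out = append(out, byte(n%128))
            if n < 128 {
                break
            }
            n /= 128
        }
        out[len(out)-1] += 128
        return out
    }

    func main() {
        fmt.Println(vbEncode(10000))
    }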

    Read the article

  • Strategy for WCF server with .Net clients and Android clients?

    - by D.H.
    I am using WCF to write a server that should be able to communicate with .Net clients, Android clients and possibly other types of clients. The main type of client is a desktop application that will be written in .Net. This client will usually be on the same intranet as the server. It will make an initial call to the server to get the current state of the system and will then receive updates from the server whenever a value changes. These updates are frequent, perhaps once a second. The Android clients will connect over the Internet. This client is also interested in updates, but it is not as critical as for the desktop client so a (less frequent) polling scenario might be acceptable. All clients will have to login to use the services, and when connecting over the Internet the connection should be secure. I am familiar with WCF but I am not sure what bindings are most appropriate for the scenario and what security solution to use. Also, I have not used Android, but I would like to make it as simple as possible for the person implementing the Android client to consume my services. So, what is my strategy?

    Read the article

  • Eclipse CDT setup for remote build

    - by Posco Grubb
    Is there a better way to set up Eclipse CDT for local editing and remote building?
    I am working on a C++ project that uses GNU make on Linux. The code is under CVS on a Linux server. When I'm in the lab, I use Eclipse CDT on a Linux-x64 PC. The project is built on a Linux-x86 PC. All the computers in the lab (including the CVS server) have NFS mounts. When I'm at home, I use Eclipse CDT on a Windows 7 PC. The Windows PC connects to the Linux CVS server via an SSH tunnel. To edit source, I rsync the C++ project under the Linux Eclipse workspace back to my Windows Eclipse workspace. (I can also do a remote CVS checkout on the Windows PC.) To build from home, I use a custom build command that SSHes to the Linux-x86 PC, rsyncs the C++ project from my Windows Eclipse workspace to my Linux Eclipse workspace, and then runs make on the Linux-x86 PC, specifying the correct path for the Makefile.
    In order to go back and forth between lab and home without committing my changes to CVS every time, I use rsync. When I transition from lab to home, I rsync sources to my Windows Eclipse workspace. When I build from home, the sources get rsync'd back to the Linux Eclipse workspace. Is there a better, less wonky way to do this? (I'm NOT interested in remote debugging.)

    Read the article

  • Fast, very lightweight algorithm for camera motion detection?

    - by Ertebolle
    I'm working on an augmented reality app for iPhone that involves a very processor-intensive object recognition algorithm (pushing the CPU at 100% it can get through maybe 5 frames per second), and in an effort to both save battery power and make the whole thing less "jittery" I'm trying to come up with a way to only run that object recognizer when the user is actually moving the camera around. My first thought was to simply use the iPhone's accelerometers / gyroscope, but in testing I found that very often people would move the iPhone at a consistent enough attitude and velocity that there wouldn't be any way to tell that it was still in motion. So that left the option of analyzing the actual video feed and detecting movement in that. I got OpenCV working and tried running their pyramidal Lucas-Kanade optical flow algorithm, which works well but seems to be almost as processor-intensive as my object recognizer - I can get it to an acceptable framerate if I lower the depth levels / downsample the image / track fewer points, but then accuracy suffers and it starts to miss some large movements and trigger on small hand-shaking-y ones. So my question is, is there another optical flow algorithm that's faster than Lucas-Kanade if I just want to detect the overall magnitude of camera movement? I don't need to track individual objects, I don't even need to know which direction the camera is moving, all I really need is a way to feed something two frames of video and have it tell me how far apart they are.

    Read the article

  • Shape object in Processing, translate individual shapes.

    - by Zain
    I am relatively new to Processing but have been working in Java for about 2 years now. I am facing difficulty, though, with the translate() function for objects, as well as with objects in general in Processing. I went through the examples and tried to replicate the way they instantiated the objects, but I cannot seem to even get the shapes to appear on the screen, much less move them. I instantiate the objects into an array using a nested for loop and expect a grid of the objects to be rendered. However, nothing at all is rendered.
    My nested for loop structure to instantiate the tiles:
    for (int i = 0; i < 102; i++) {
        for (int j = 0; j < 102; j++) {
            tiles[i][j] = new tile(i, 0, j);
            tiles[i][j].display();
        }
    }
    And the constructor for the tile class:
    tile(int x, int y, int z) {
        this.x = x;
        this.y = y;
        this.z = z;
        beginShape();
        vertex(x, y, z);
        vertex(x+1, y, z);
        vertex(x+1, y, z-1);
        vertex(x, y, z-1);
        endShape();
    }
    Nothing is rendered at all when this runs. Furthermore, if this is of any concern, my translations (movements) are done in a method I wrote for the tile class called move(), which simply calls translate(). Is this the correct way? How should one approach this? I can't seem to understand at all how to render/create/translate individual objects/shapes. Thanks for any help any of you are able to provide!

    Read the article

  • Excel::Shape object getting released automatically after the count reaches 18 in List<T>

    - by A9S6
    I have an Excel add-in written in C# 2.0 in which I am experiencing strange behavior. Please note that this behavior is only seen in Excel 2003 and NOT in Excel 2007 or 2010.
    Issue: When the user clicks an import command button, a file is read and a number of Shapes are created/added to the worksheet using the Worksheet::Shapes::AddPicture() method. A reference to each of these Shape objects is kept in a generic list:
    List<Excel.Shape> list = new List<Excel.Shape>();
    Everything works fine while the list has fewer than 18 references. When the count reaches 18 and a new Shape reference is added, the first one, i.e. the one at index [0], is released. I am unable to call any method or property on that reference, and calling a method/property throws a COMException (0x800A1A8), i.e. Object Required. If I add one more, then the reference at [1] is not accessible, and so on. Strangely enough, this happens with Shape objects only; if I add one Shape and then 17 nulls to the list, this won't happen until 17 more Shape objects are added.
    Does anyone have an idea why it happens once the count reaches 18? I thought it might be something to do with the List's default capacity, something like relocating the references, during which they get released, so I initialized it with a capacity of 1000, but still no luck:
    List<Excel.Shape> list = new List<Excel.Shape>(1000);
    Any idea?

    Read the article

  • How to choose a light version of a database system

    - by adopilot
    I am starting a POS (point of sale) project. The target system is going to be written in C# .NET 2 WinForms, and as the main database server we are going to use MS SQL Server. As we have a lot of POS devices in a chain for one store, I would love to have a local backend database system on each POS device. The scenario is as follows: when the main server goes down, the POS application should continue working "off-line" with the local database until the connection to the main server comes up again.
    Now I am in a dilemma about which local database is going to be most suitable for me. Here are some notes to help point me in the right direction:
    - It should be light: my POS devices are usually old and suffering performance-wise.
    - It should be free: I have a lot of devices and I do not want additional cost beside the main SQL Server.
    - One day I'd love to try porting all of this to Mono and Linux.
    Here is what I've researched so far:
    - Simple XML: light, but I am afraid of performance; my main items table averages 10K records.
    - SQL Express: I am afraid my POS devices have too little hardware for SQL Express, and it is also hard to install and configure on each device.
    - The less known Advantage Database Server has a free distribution of its offline ADT system.
    - DBF with an extended library: respect for good old DBFs, but that era is behind me, along with Clipper.
    - MS Access
    - SQLite: I mostly like it for now, but I am afraid of how it will pair with MS SQL; do they have the same data types?
    I know there is a lot of subjective data in this, but can someone at least recommend some other lite database systems, or things I should pay most attention to before I choose a database?

    Read the article

  • Apple's Sample App TopSongs has 26 Leaks, Ugh!

    - by RoLYroLLs
    Hey all, I've been building an app for a client, and part of it uses Apple's TopSongs sample app to download data on another thread. I finally got enough done to start testing that part and found 1000 leaks! A closer look at the leaks made me check TopSongs itself for leaks, since none of my methods were in the leaks report. Running TopSongs returned 26 leaks. I'm not quite sure how to fix them, or whether they are part of some library from Apple.
    I bet you're asking: if it has 26, why do you have 1000? Well, I use their sample to make roughly 48 calls to web services to get all the information needed on initial install (48 calls x 26 leaks = 1248 leaks!). Later it makes at least 12 calls, plus 4 to check for updated information on other sections of the app. I can't do a thing about that part and can't make fewer calls, so please don't comment on it.
    I've seen people respond to posts without necessarily answering the question the user originally posted, which in this case is: has anyone tried patching up these leaks (if they are patchable), or is this a bug in Apple's libraries? Thanks so much.

    Read the article

  • Can I automatically throw descriptive exceptions with parameter values and class field information?

    - by Robert H.
    I honestly don't throw exceptions often. I catch them even less, ironically. I currently work in a shop where we let them bubble up to Avicode. For whatever reason, however, Avicode isn't configured to capture some of the critical bits I need when these exceptions come bouncing back to my attention. Specifically, I'd like to see the parameter values and the class's field data at the time of the exception.
    I'd guess that with the large suite of .NET services I could create a static method to crawl up the stack, gather these bits and store them in a string that I could stick in my exception message. I really don't care how long such a method would take to execute, as performance is no longer a concern when I hit one of these scenarios. If it's possible, I'm sure someone has done it. If that's the case, I'm having a hard time finding it; I think any search containing "exception" brings back too many results.
    Anyway, can this be done? If so, some examples or links would be great. Thanks in advance for your time, Robert

    Read the article

  • Custom Calculations in a Matrix - Reporting Services 2005

    - by bfrancis
    I am writing a report to show gas usage (in gallons) by each department. The request is to view each month and the gallons used by each department. A column is required to display each department's target goal, based on the gallons of gas it has used in a past time frame. Each department's target goal is x percent less than the total gallons used for said time frame.
    I currently have a matrix in Reporting Services with departments making up the rows, months making up the columns, and gallons filling the details. The matrix is being filled by dataset1. I have the data grouped as requested, for each month by each department. My problem is calculating the target goal. My thought was to create a second dataset (dataset2) that returns the gallons used over the time frame requested. I grouped this data by department. I was hoping I could use the department field in each dataset to make sure the appropriate numbers were used. I added a new column, which shows up next to the gallons field. As I attempted to build the expression, I found out that I could only grab the gallons used from dataset2 if I summed the gallons field, and that gives me the total gallons used by every department combined.
    I have tried to find resources with similar examples of what I am trying to accomplish, but I cannot seem to come across one. I am trying to keep this as detailed as possible without making it too wordy. I would be more than happy to clarify or explain anything above in further detail if needed. If anyone has links, comments, or suggestions, they would be greatly appreciated. A very simple visual of what I am hoping to accomplish is below; the months and departments would expand based on the data returned.
                  months
    ------------------------------
    departments | gallons/month | target goal

    Read the article

  • How can I change some specific carps into croaks in Perl?

    - by sid_com
    I tried to catch a carp warning:
    carp "$start is > $end" if (warnings::enabled());
    with eval {}, but it didn't work, so I looked at the eval documentation and discovered that eval catches only syntax errors, run-time errors, or executed die statements. How could I catch a carp warning?
    #!/usr/bin/env perl
    use warnings;
    use strict;
    use 5.012;
    use List::Util qw(max min);
    use Number::Range;

    my @array;
    my $max = 20;
    print "Input (max $max): ";
    my $in = <>;
    $in =~ s/\s+//g;
    $in =~ s/(?<=\d)-/../g;
    eval {
        my $range = new Number::Range( $in );
        @array = sort { $a <=> $b } $range->range;
    };
    if ( $@ =~ /\d+ is > \d+/ ) { die $@ };  # catching the carp warning here doesn't work
    die "Input greater than $max not allowed $!" if defined $max and max( @array ) > $max;
    die "Input '0' or less not allowed $!" if min( @array ) < 1;
    say "@array";
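    One common idiom, shown here as a standalone sketch rather than anything from Number::Range's documentation, is to promote warnings to exceptions inside the eval with a local __WARN__ handler; a carp is a warning, so it then becomes catchable like a die.

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use Carp;

    my $ok = eval {
        local $SIG{__WARN__} = sub { die $_[0] };   # turn any warning (carp) into a die
        carp "3 is > 1";                            # would normally only warn
        1;
    };
    if ( !$ok && $@ =~ /\d+ is > \d+/ ) {
        print "caught the carp: $@";
    }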

    Read the article

  • Runtime Exception when using Custom Healthmonitoring event in medium trust

    - by Elementenfresser
    Hi, I'm using custom health-monitoring events in ASP.NET. We recently moved to a new server with default High trust permissions. The literature says that health monitoring and custom events should work under Medium or higher trust (http://msdn.microsoft.com/en-us/library/bb398933.aspx). The problem is that it doesn't. In less than Full trust I get a SecurityException saying "The application attempted to perform an operation not allowed by the security policy". It works in Full trust, or when I remove the inheritance from System.Web.Management.WebErrorEvent. Any suggestions, anyone? Here is the super-simple code-behind with a custom event defined:
    public partial class Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                CallCustomEvent();
            }
            catch (Exception ex)
            {
                Response.Write(ex.Message);
                throw ex;
            }
        }

        /// <summary>
        /// this method is never called due to lacking permissions...
        /// </summary>
        private void CallCustomEvent()
        {
            try
            {
                // do something useful here
            }
            catch (Exception)
            {
                // code to instantiate the forbidden inheritance...
                WebBaseEvent.Raise(new CustomEvent());
            }
        }
    }

    /// <summary>
    /// custom event inheriting WebErrorEvent, which is apparently not allowed in High trust? can't believe that...
    /// </summary>
    public class CustomEvent : WebErrorEvent
    {
        public CustomEvent()
            : base("test", HttpContext.Current.Request, 100001, new ApplicationException("dummy"))
        {
        }
    }
    And the web.config excerpt for High trust:
    <system.web>
        <trust level="High" originUrl="" />

    Read the article

  • How do I combine two arrays in PHP based on a common key?

    - by Eoghan O'Brien
    Hi, I'm trying to join two associative arrays together based on an entry_id key. Both arrays come from individual database resources; the first stores entry titles, the second stores entry authors. The key => value pairs are as follows:
    array ( 'entry_id' => 1, 'title' => 'Test Entry' )
    array ( 'entry_id' => 1, 'author_id' => 2 )
    I'm trying to achieve an array structure like:
    array ( 'entry_id' => 1, 'author_id' => 2, 'title' => 'Test Entry' )
    Currently, I've solved the problem by looping through each array and formatting the result the way I want, but I think this is a bit of a memory hog:
    $entriesArray = array();
    foreach ($entryNames as $names) {
        foreach ($entryAuthors as $authors) {
            if ($names['entry_id'] === $authors['entry_id']) {
                $entriesArray[] = array(
                    'id' => $names['entry_id'],
                    'title' => $names['title'],
                    'author_id' => $authors['author_id']
                );
            }
        }
    }
    I'd like to know: is there an easier, less memory-intensive way of doing this?
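    One possible approach, sketched below with the question's variable names and some sample input, is to index the titles by entry_id first and then walk the authors once, so the nested loops are avoided.

    <?php
    // sample inputs shaped like the rows described above
    $entryNames   = array(array('entry_id' => 1, 'title' => 'Test Entry'));
    $entryAuthors = array(array('entry_id' => 1, 'author_id' => 2));

    // index the titles by entry_id once
    $byId = array();
    foreach ($entryNames as $names) {
        $byId[$names['entry_id']] = $names;
    }

    // then attach each author to the matching entry in a single pass
    foreach ($entryAuthors as $authors) {
        $id = $authors['entry_id'];
        if (isset($byId[$id])) {
            $byId[$id]['author_id'] = $authors['author_id'];
        }
    }
    $entriesArray = array_values($byId);

    print_r($entriesArray);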

    Read the article

  • Free Memory Occupied by Std List, Vector, Map etc

    - by Graviton
    Coming from a C# background, I have only the vaguest idea of memory management in C++; all I know is that I would have to free memory manually. As a result, my C++ code is written in such a way that objects of type std::vector, std::list and std::map are freely instantiated and used, but never freed. I didn't realize this until I was almost done with my program; now my code consists of the following kinds of patterns:
    struct Point_2 {
        double x;
        double y;
    };

    struct Point_3 {
        double x;
        double y;
        double z;
    };

    list<list<Point_2>> Computation::ComputationJob(list<Point_3> pts3D, vector<Point_2> vectors)
    {
        map<Point_2, double> pt2DMap = ConstructPointMap(pts3D);
        vector<Point_2> vectorList = ConstructVectors(vectors);
        list<list<Point_2>> faceList2D = ConstructPoints(vectorList, pt2DMap);
        return faceList2D;
    }
    My question is: must I free every single one of these lists (in the above example, that would mean freeing pt2DMap, vectorList and faceList2D)? That would be very tedious! I might just as well rewrite my Computation class so that it is less prone to memory leaks. Any idea how to fix this?
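    For what it's worth, here is a small sketch of the scope rule involved: standard containers release their own storage when they go out of scope, so explicit freeing is only needed for objects allocated with new.

    #include <list>
    #include <map>
    #include <vector>

    void compute()
    {
        std::vector<int> values(100);           // storage owned by the vector
        std::map<int, double> lookup;           // storage owned by the map
        std::list<std::vector<int> > groups;    // nested containers clean up recursively

        lookup[1] = 2.5;
        groups.push_back(values);
    }   // values, lookup and groups release their memory here; nothing to delete by hand

    int main()
    {
        compute();
        return 0;
    }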

    Read the article

  • How do I left join tables in unidirectional many-to-one in Hibernate?

    - by jbarz
    I'm piggy-backing off of http://stackoverflow.com/questions/2368195/how-to-join-tables-in-unidirectional-many-to-one-condition. Suppose you have two classes:
    class A {
        @Id
        public Long id;
    }

    class B {
        @Id
        public Long id;

        @ManyToOne
        @JoinColumn(name = "parent_id", referencedColumnName = "id")
        public A parent;
    }
    B to A is a many-to-one relationship. I understand that I could add a Collection of Bs to A; however, I do not want that association. So my actual question is: is there an HQL or Criteria way of creating the SQL query
    select * from A left join B on (b.parent_id = a.id)
    This retrieves all A records joined to each B record that references them, and includes A records that have no B referencing them. If you use
    from A a, B b where b.parent = a
    then it is an inner join, and you do not receive the A records that do not have a B referencing them. I have not found a good way of doing this without two queries, so anything less than that would be great. Thanks.

    Read the article

  • Using diff and patch to force one local code base to look like another

    - by Dave Aaron Smith
    I've noticed this strange behavior of diff and patch when I've used them to force one code base to be identical to another. Let's say I want to update update_me to look identical to leave_unchanged. I go to update_me. I run a diff from leave_unchanged to update_me. Then I patch the diff into update_me. If there are new files in leave_unchanged, patch asks me if my patch was reversed! If I answer yes, it deletes the new files in leave_unchanged. Then, if I simply re-run the patch, it correctly patches update_me.
    Why does patch try to modify both leave_unchanged and update_me? What's the proper way to do this? I found a hacky way which is to replace all +++ lines with nonsense paths so patch can't find leave_unchanged. Then it works fine. It's such an ugly solution though.
    $ mkdir copyfrom
    $ mkdir copyto
    $ echo "Hello world" > copyfrom/myFile.txt
    $ cd copyto
    $ diff -Naur . ../copyfrom > my.diff
    $ less my.diff
    diff -Naur ./myFile.txt ../copyfrom/myFile.txt
    --- ./myFile.txt        1969-12-31 19:00:00.000000000 -0500
    +++ ../copyfrom/myFile.txt      2010-03-15 17:21:22.000000000 -0400
    @@ -0,0 +1 @@
    +Hello world
    $ patch -p0 < my.diff
    The next patch would create the file ../copyfrom/myFile.txt, which already exists!  Assume -R? [n] yes
    patching file ../copyfrom/myFile.txt
    $ patch -p0 < my.diff
    patching file ./myFile.txt

    Read the article
