Search Results

Search found 24201 results on 969 pages for 'andrew case'.

Page 798/969

  • Protecting Content with AuthLogic

    - by Rob Wilkerson
    I know this sounds like a really, really simple use case and I'm hoping that it is, but I swear I've looked all over the place and haven't found any mention of any way - not even the best way - of doing this. I'm brand-spanking new to Ruby, Rails and everything surrounding either (which may explain a lot).

    The dummy app that I'm using as my learning tool requires authentication in order to do almost anything meaningful, so I chose to start by solving that problem. I've installed the AuthLogic gem and have it working nicely to the extent that is covered by the intro documentation and Railscast, but now that I can register, log in and log out... I need to do something with it.

    As an example, I need to create a page where users can upload images. I'm planning to have an ImagesController with an upload action method, but I want that accessible only to logged-in users. I suppose that in every restricted action I could add code to redirect if there's no current_user, but that seems really verbose. Is there a better way of doing this that allows me to define or identify restricted areas and handle the authentication check in one place?
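
    A common way to centralize this with AuthLogic is a controller-level filter; the sketch below follows the pattern from the AuthLogic examples (the UserSession model, login_path route and ImagesController name are assumptions, not taken from the post above):

    ```ruby
    # app/controllers/application_controller.rb
    class ApplicationController < ActionController::Base
      helper_method :current_user

      private

      def current_user_session
        @current_user_session ||= UserSession.find
      end

      def current_user
        @current_user ||= current_user_session && current_user_session.user
      end

      # One place to decide what "restricted" means.
      def require_user
        unless current_user
          flash[:notice] = "You must be logged in to access this page"
          redirect_to login_path
        end
      end
    end

    # app/controllers/images_controller.rb
    class ImagesController < ApplicationController
      before_filter :require_user   # protects every action in this controller

      def upload
        # only reachable by logged-in users
      end
    end
    ```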

  • When and why can sprintf fail?

    - by Srekel
    I'm using swprintf to build a string into a buffer (using a loop, among other things).

    ```cpp
    const int MaxStringLengthPerCharacter = 10 + 1;
    wchar_t* pTmp = pBuffer;
    for ( size_t i = 0; i < nNumPlayers ; ++i)
    {
        const int nPlayerId = GetPlayer(i);
        const int nWritten = swprintf(pTmp, MaxStringLengthPerCharacter, TEXT("%d,"), nPlayerId);
        assert(nWritten >= 0 );
        pTmp += nWritten;
    }
    *pTaskPlayers = '\0';
    ```

    If during testing the assert never hits, can I be sure that it will never hit in live code? That is, do I need to check if nWritten < 0 and handle that, or can I safely assume that there won't be a problem? Under which circumstances can it return -1? The documentation more or less just states "If the function fails". In one place I've read that it will fail if it can't match the arguments (i.e. the formatting string to the varargs) but that doesn't worry me. I'm also not worried about buffer overrun in this case - I know the buffer is big enough.
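
    For what it's worth, the defensive variant costs very little; here is a sketch of one way to treat a negative return as an error instead of asserting (not from the original post - the BuildPlayerList name and bool return are illustrative, and the buffer-size assumption is stated in the comment):

    ```cpp
    #include <wchar.h>

    // Sketch: assumes pBuffer holds at least nNumPlayers * 11 + 1 wchar_ts.
    bool BuildPlayerList(wchar_t* pBuffer, size_t nNumPlayers)
    {
        const int MaxStringLengthPerCharacter = 10 + 1;
        wchar_t* pTmp = pBuffer;
        for (size_t i = 0; i < nNumPlayers; ++i) {
            const int nWritten = swprintf(pTmp, MaxStringLengthPerCharacter, L"%d,", GetPlayer(i));
            if (nWritten < 0) {      // formatting/encoding error, or truncation on this platform
                *pTmp = L'\0';       // keep the buffer well-formed
                return false;        // let the caller decide how to recover
            }
            pTmp += nWritten;
        }
        *pTmp = L'\0';
        return true;
    }
    ```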

  • What does it mean to double license?

    - by Adrian Panasiuk
    What does it mean to double-license code? I can't just put both licenses in the source files. That would mean that I mandate users to follow the rules of both of them, but the licenses will probably be contradictory (otherwise there'd be no reason to double-license).

    I guess this is something like cryptographic chaining: cipher = crypt_2(crypt_1(clear)) (generally) means that cipher is neither the output of crypt_2 on clear nor the output of crypt_1 on clear; it's the output of the composition. Likewise, in double-licensing, in reality my code has one license, it's just that this new license says: please follow all of the rules of license1, or all of the rules of license2, and you are hereby granted the right to redistribute this application under this "double" license, license1 or license2, or any license under which license1 or license2 allows you to redistribute this software, in which case you shall replace the relevant licensing information in this application with that of the new license. (Does this mean that before someone may use the app under license1, he has to perform the operation of redistributing it to himself? How would he document the fact that he did that operation?) Am I correct?

    What LICENSE file and what text in the source files would I need if I wanted to double-license under, for the sake of example, Apache v2 and GPLv3?

  • Handling pointers to member functions within a hierarchy in C++

    - by anatoli
    Hi, I'm trying to code the following situation: I have a base class providing a framework for handling events. I'm trying to use an array of pointer-to-member-functions for that. It goes as follows:

    ```cpp
    class EH { // EventHandler
        virtual void something(); // just to make sure we get RTTI
    public:
        typedef void (EH::*func_t)();
    protected:
        func_t funcs_d[10];
    protected:
        void register_handler(int event_num, func_t f) {
            funcs_d[event_num] = f;
        }
    public:
        void handle_event(int event_num) {
            (this->*(funcs_d[event_num]))();
        }
    };
    ```

    Then the users are supposed to derive other classes from this one and provide handlers:

    ```cpp
    class DEH : public EH {
    public:
        typedef void (DEH::*func_t)();
        void handle_event_5();

        DEH() {
            func_t f5 = &DEH::handle_event_5;
            register_handler(5, f5);   // doesn't compile
            ........
        }
    };
    ```

    This code wouldn't compile, since DEH::func_t cannot be converted to EH::func_t. It makes perfect sense to me. In my case the conversion is safe since the object under this is really DEH. So I'd like to have something like this:

    ```cpp
    void EH::DEH_handle_event_5_wrapper() {
        DEH *p = dynamic_cast<DEH *>(this);
        assert(p != NULL);
        p->handle_event_5();
    }
    ```

    and then, instead of

    ```cpp
    func_t f5 = &DEH::handle_event_5;
    register_handler(5, f5);   // doesn't compile
    ```

    in DEH::DEH(), put

    ```cpp
    register_handler(5, &EH::DEH_handle_event_5_wrapper);
    ```

    So, finally, the question (took me long enough...): Is there a way to create those wrappers (like EH::DEH_handle_event_5_wrapper) automatically? Or to do something similar? What other solutions to this situation are out there? Thanks.
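
    For reference, one approach that avoids writing wrappers at all (a sketch added here, not taken from the post): a pointer to member of a derived class may be converted back to a pointer to member of the base with static_cast, and calling through it is well-defined as long as the object really is a DEH - which is exactly the situation described above:

    ```cpp
    DEH::DEH() {
        // EH::func_t is void (EH::*)(); the conversion needs an explicit
        // static_cast and is safe here because handle_event() is only ever
        // invoked on this DEH instance.
        register_handler(5, static_cast<EH::func_t>(&DEH::handle_event_5));
    }
    ```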

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

    ```ruby
    require 'test_helper'

    class UserTest < ActiveSupport::TestCase
      # Replace this with your real tests.
      test "the truth" do
        assert true
      end
    end
    ```

    Results in the following error:

    ```
    <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
        from <internal:lib/rubygems/custom_require>:29:in `require'
        from user_test.rb:1:in `<main>'
    ```

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

    ```
    user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)
    ```

    The Action Pack gems appear to be properly installed and up to date:

    ```
    actionmailer (3.0.3, 2.3.5)
    actionpack (3.0.3, 2.3.5)
    activemodel (3.0.3)
    activerecord (3.0.3, 2.3.5)
    activeresource (3.0.3, 2.3.5)
    activesupport (3.0.3, 2.3.5)
    ```

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. The sample dump of my test directory is as follows:

    ```
    /fixtures
    /functional
    /integration
    /performance
    /unit
    -- /helpers
    -- user_helper_test.rb
    -- user_test.rb
    test_helper.rb
    ```

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment.

    Edit: Xavier Holt's suggestion, explicitly specifying the path to the test_helper, worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

    ```
    user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)
    ```

    But as you can see above, Action Pack is all installed and up to date.
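
    Not part of the original post, but both symptoms (the LoadError and the NameError) typically mean the file is being run as a bare Ruby script, so neither the test directory nor the Rails frameworks are on the load path. A sketch of the usual ways to run it in a Rails 3 project (paths assume the standard layout):

    ```
    # Run a single test file with the test directory on the load path:
    ruby -Itest test/unit/user_test.rb

    # Or run the whole unit suite through rake, which sets the paths up for you:
    rake test:units
    ```

    If an explicit require is preferred inside the file, an absolute path also works:

    ```ruby
    require File.expand_path('../../test_helper', __FILE__)
    ```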

  • Lucene Query Syntax

    - by Don
    Hi, I'm trying to use Lucene to query a domain that has the following structure:

    ```
    Student 1-------* Attendance *---------1 Course
    ```

    The data in the domain is summarised below:

    ```
    Course.name    Attendance.mandatory    Student.name
    ----------------------------------------------------
    cooking        N                       Bob
    art            Y                       Bob
    ```

    If I execute the query "courseName:cooking AND mandatory:Y" it returns Bob, because Bob is attending the cooking course, and Bob is also attending a mandatory course. However, what I really want to query for is "students attending a mandatory cooking course", which in this case would return nobody. Is it possible to formulate this as a Lucene query? I'm actually using Compass, rather than Lucene directly, so I can use either CompassQueryBuilder or Lucene's query language.

    For the sake of completeness, the domain classes themselves are shown below. These classes are Grails domain classes, but I'm using the standard Compass annotations and Lucene query syntax.

    ```groovy
    @Searchable
    class Student {
        @SearchableProperty(accessor = 'property')
        String name

        static hasMany = [attendances: Attendance]

        @SearchableId(accessor = 'property')
        Long id

        @SearchableComponent
        Set<Attendance> getAttendances() { return attendances }
    }

    @Searchable(root = false)
    class Attendance {
        static belongsTo = [student: Student, course: Course]

        @SearchableProperty(accessor = 'property')
        String mandatory = "Y"

        @SearchableId(accessor = 'property')
        Long id

        @SearchableComponent
        Course getCourse() { return course }
    }

    @Searchable(root = false)
    class Course {
        @SearchableProperty(accessor = 'property', name = "courseName")
        String name

        @SearchableId(accessor = 'property')
        Long id
    }
    ```

  • What if a large number of objects are passed to my SwingWorker.process() method?

    - by Trejkaz
    I just found an interesting situation. Suppose you have some SwingWorker (I've made this one vaguely reminiscent of my own):

    ```java
    public class AddressTreeBuildingWorker extends SwingWorker<Void, NodePair> {
        private DefaultTreeModel model;

        public AddressTreeBuildingWorker(DefaultTreeModel model) {
        }

        @Override
        protected Void doInBackground() {
            // Omitted; performs variable processing to build a tree of address nodes.
        }

        @Override
        protected void process(List<NodePair> chunks) {
            for (NodePair pair : chunks) {
                // Actually the real thing inserts in order.
                model.insertNodeInto(parent, child, parent.getChildCount());
            }
        }

        private static class NodePair {
            private final DefaultMutableTreeNode parent;
            private final DefaultMutableTreeNode child;

            private NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
                this.parent = parent;
                this.child = child;
            }
        }
    }
    ```

    If the work done in the background is significant then things work well - process() is called with relatively small lists of objects and everything is happy. The problem is, if the work done in the background is suddenly insignificant for whatever reason, process() receives a huge list of objects (I have seen 1,000,000, for instance) and by the time you process each object, you have spent 20 seconds on the Event Dispatch Thread, exactly what SwingWorker was designed to avoid.

    In case it isn't clear, both of these occur on the same SwingWorker class for me - it depends on the input data, and the type of processing the caller wanted. Is there a proper way to handle this? Obviously I can intentionally delay or yield the background processing thread so that a smaller number might arrive each time, but this doesn't feel like the right solution to me.
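
    One pattern that bounds time on the EDT no matter how publish() coalesces chunks (a sketch added here, not from the original post; the class and constant names are invented): skip process() entirely and push pairs into a thread-safe queue that a javax.swing.Timer drains a limited number at a time:

    ```java
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import javax.swing.Timer;
    import javax.swing.tree.DefaultMutableTreeNode;
    import javax.swing.tree.DefaultTreeModel;

    // The background thread calls offer() instead of publish(); the timer fires
    // on the EDT and inserts at most MAX_PER_TICK nodes per tick, so a sudden
    // flood of results can never hold the EDT for seconds at a time.
    class BoundedTreeInserter {
        private static final int MAX_PER_TICK = 500;

        private final Queue<NodePair> queue = new ConcurrentLinkedQueue<>();
        private final DefaultTreeModel model;
        private final Timer drainTimer;

        BoundedTreeInserter(DefaultTreeModel model) {
            this.model = model;
            this.drainTimer = new Timer(50, e -> drain());
            this.drainTimer.start();
        }

        // Safe to call from the background thread.
        void offer(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
            queue.add(new NodePair(parent, child));
        }

        private void drain() {
            for (int i = 0; i < MAX_PER_TICK; i++) {
                NodePair pair = queue.poll();
                if (pair == null) {
                    return;
                }
                model.insertNodeInto(pair.child, pair.parent, pair.parent.getChildCount());
            }
        }

        private static final class NodePair {
            final DefaultMutableTreeNode parent;
            final DefaultMutableTreeNode child;

            NodePair(DefaultMutableTreeNode parent, DefaultMutableTreeNode child) {
                this.parent = parent;
                this.child = child;
            }
        }
    }
    ```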

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)...

    ```java
    public void outer() {
        try {
            middle();
        } catch (OLE e) {
            updateEntities();
            outer();
        }
    }

    @Transactional
    public void middle() {
        try {
            inner();
        } catch (OLE e) {
            updateEntities();
            middle();
        }
    }

    @Transactional
    public void inner() {
        // Do DB operation
    }
    ```

    inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way I was assuming that the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() is throwing an OLE even when the stack is outer()->middle()->inner().

    Now, middle() is properly handling the OLE and the retry succeeds, but when it comes time to close the transaction it has been marked rollbackOnly by Spring. When the middle() method call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly. I'm uncertain what to do here. I can't clear the rollbackOnly state. I don't want to force-create a transaction on every call to inner() because that kills my performance. Am I missing something, or can anyone see a way I can structure this differently?

    EDIT: To clarify what I'm asking, let me explain my main question: is it possible to catch and handle an OLE if you are inside of an @Transactional method?

    FYI: The transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.
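
    A commonly suggested arrangement for this situation (a sketch, not a verified fix for the original setup; MAX_RETRIES and the service reference are illustrative) is to keep the retry loop outside of any transaction and let each attempt run in its own transaction, so a failed attempt can never mark an enclosing transaction rollback-only. With proxy-based Spring AOP the retried call also has to go through the proxy rather than this, otherwise the annotation is ignored:

    ```java
    public void outer() {                      // deliberately not @Transactional
        for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
            try {
                service.middle();              // call through the Spring proxy
                return;
            } catch (OptimisticLockException e) {
                updateEntities();              // refresh state, then retry
            }
        }
        throw new IllegalStateException("gave up after " + MAX_RETRIES + " retries");
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void middle() {
        inner();                               // joins middle()'s transaction
    }

    @Transactional                             // default REQUIRED
    public void inner() {
        // DB operation
    }
    ```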

  • jQuery .show()/.hide() not working as expected

    - by fizgig07
    I'm trying to use show and hide to display a different set of select options when a certain report type is selected. I have a couple of problems with this:

    The .show()/.hide() calls only execute properly if I pass params (slow, fast) in the first branch of my conditional statement. If I take out the params, or pass params in both branches, only one select shows and it never changes. Here's the code that currently kind of works:

    ```javascript
    if ($('#ReportType').val() == 'PbuseExport') {
        $('#PbuseServices').show('fast');
        $('#ReportServiceDropdown').hide('fast');
    } else {
        $('#PbuseServices').hide();
        $('#ReportServiceDropdown').show();
    }
    ```

    After I've used this control I am taken to a different page. When I use the control again, it retains the original search values and repopulates the control. Then again, I only want to show one select option if a certain report is chosen. This works correctly if the report type I originally searched on is not "PbuseExport". If I searched on the report type "PbuseExport", then both selects show on the screen, and only when I change the report type does it show only one select. I know this probably isn't very clear. Here is the code that handles the change event on the report type drop-down:

    ```javascript
    var serviceValue = $("#ReportType").val();
    switch (serviceValue) {
        case 'PbuseExport':
            $('#PbuseServices').show('fast');
            $('#ReportServiceDropdown').hide('fast');
        default:
            $('#PbuseServices').hide();
            $('#ReportServiceDropdown').show();
            break;
    }
    ```
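
    Not from the original post, but one thing worth noticing in the switch above: the 'PbuseExport' case has no break, so execution falls through into default and immediately undoes the show/hide, which would explain both selects appearing until the report type changes. A sketch with the fall-through removed:

    ```javascript
    switch (serviceValue) {
        case 'PbuseExport':
            $('#PbuseServices').show('fast');
            $('#ReportServiceDropdown').hide('fast');
            break; // without this, the default branch runs as well
        default:
            $('#PbuseServices').hide();
            $('#ReportServiceDropdown').show();
            break;
    }
    ```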

  • VB.NET: Dialog exits when Enter is pressed?

    - by Camilo Martin
    Hi all. My problem seems to be quite simple, but it's not working the intuitive way. I'm designing a Windows Forms application, and there is a dialog that should NOT exit when the Enter key is pressed; instead it has to validate data first, in case Enter was pressed after changing the text of a ComboBox. I've tried telling it what to do on the KeyPress event of the ComboBox if e is the Enter key:

    ```vbnet
    Private Sub ComboBoxSizeChoose_KeyPress(ByVal sender As System.Object, ByVal e As System.Windows.Forms.KeyPressEventArgs) Handles ComboBoxSizeChoose.KeyPress
        If e.KeyChar = Convert.ToChar(Keys.Enter) Then
            Try
                TamanhoDaNovaFonte = Single.Parse(ComboBoxSizeChoose.Text)
            Catch ex As Exception
                Dim Dialogo2 As New Dialog2
                Dialog2.ShowDialog()
                ComboBoxSizeChoose.Text = TamanhoDaNovaFonte
            End Try
        End If
    End Sub
    ```

    But no success so far. When the Enter key is pressed, even with the ComboBox in focus, the whole dialog is closed, returning to the previous form. The validation is NOT done at all, and it has to be done before exiting. In fact, I don't even want to exit on the form's Enter KeyPress; the only purpose of the Enter key on the whole dialog is to validate the ComboBox (but only when in focus, for the sake of an intuitive UI). I've also tried appending the validation to the KeyPress event of the whole dialog's form, if the key is Enter. NO SUCCESS! It's like my code wasn't there at all. What should I do? (Visual Studio 2008, VB.NET)
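
    A likely culprit here (an assumption, not something stated in the post) is the dialog's AcceptButton: when a form has one, Enter is treated as a dialog key and "clicks" that button before the ComboBox's KeyPress ever runs. A sketch of two things to try - clear the AcceptButton while the dialog needs custom Enter handling, and mark the keystroke as handled so it doesn't travel any further:

    ```vbnet
    ' In the dialog's Load handler (or the designer): stop Enter from
    ' triggering the default button.
    Private Sub Dialog_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
        Me.AcceptButton = Nothing
    End Sub

    ' In the ComboBox handler, swallow the keystroke after validating so the
    ' form never treats it as "accept the dialog".
    Private Sub ComboBoxSizeChoose_KeyPress(ByVal sender As Object, ByVal e As KeyPressEventArgs) Handles ComboBoxSizeChoose.KeyPress
        If e.KeyChar = Convert.ToChar(Keys.Enter) Then
            e.Handled = True
            ' ... validation as in the snippet above ...
        End If
    End Sub
    ```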

  • How to send an event signal between processes - C

    - by Jamie Keeling
    Hello all! I have an application consisting of two windows; one communicates with the other and sends it a struct containing two integers (in this case two rolls of a dice). I will be using events for the following circumstances:

    Process A sends data to process B, process B displays the data.
    Process A closes, in turn closing process B.
    Process B closes, in turn closing process A.

    I have noticed that if the second process is constantly waiting for the first process to send data then the program will just sit waiting, which is where the idea of implementing threads in each process occurred, and I have started to implement this already. The problem I'm having is that I don't exactly have a lot of experience with threads and events, so I'm not sure of the best way to actually implement what I want to do. I'm trying to work out how the other process will know of the event being fired so it can do the tasks it needs to do; I don't understand how one process that is separate from another can tell what state the events are in, especially as it needs to act as soon as the event has changed state. Thanks for any help.

    Edit: I can only use the Create/Set/Open methods for events, sorry for not mentioning it earlier.
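
    Not from the original post, but the Create/Set/Open trio mentioned in the edit maps onto named Win32 events, which are visible across processes because the name lives in the kernel object namespace. A minimal sketch (the event name and the shared-memory/pipe transport for the struct itself are assumptions):

    ```c
    #include <windows.h>

    /* Process A: create (or reuse) the named event and signal it once the
       dice struct has been written to whatever shared transport is in use. */
    void signal_data_ready(void)
    {
        HANDLE hEvent = CreateEvent(NULL,    /* default security       */
                                    FALSE,   /* auto-reset             */
                                    FALSE,   /* initially non-signaled */
                                    TEXT("Local\\DiceDataReady"));
        if (hEvent != NULL) {
            SetEvent(hEvent);                /* wakes the waiting process */
            CloseHandle(hEvent);
        }
    }

    /* Process B: run this on a worker thread (e.g. via CreateThread) so the
       window's message loop never blocks while waiting for data. */
    DWORD WINAPI wait_for_data(LPVOID lpParam)
    {
        HANDLE hEvent = OpenEvent(SYNCHRONIZE, FALSE, TEXT("Local\\DiceDataReady"));
        (void)lpParam;
        if (hEvent == NULL)
            return 1;
        for (;;) {
            if (WaitForSingleObject(hEvent, INFINITE) == WAIT_OBJECT_0) {
                /* read the struct from the shared transport and tell the UI
                   thread to display it (e.g. by posting a window message) */
            }
        }
        /* unreachable; CloseHandle(hEvent) belongs in real shutdown code */
    }
    ```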

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods for generating the feeds and I would like to ask which is best in terms of ability to scale.

    Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL.

    Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to time out after 5 minutes. Scaling is achieved by varying this timeout.

    Hypothesised method: A task runs in the background and, for each new, unprocessed item in event_log, it creates entries in the database table user_feed, pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user.

    The problems with the standard method are well known - what if a lot of people's caches expire at the same time? The solution also does not scale well - the brief is for feeds to update as close to real time as possible.

    The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate, and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database.

    Question: The question boils down to two points:

    Is the worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance, and are there any issues with this mass inserting of data for each event?

    Is there anything else I have missed?
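
    For concreteness, the fan-out in the hypothesised method boils down to one INSERT ... SELECT per batch of unprocessed events; the sketch below is an assumption about the schema (the user_feed, friends and event_log column names are invented here), not something taken from the post:

    ```sql
    -- One row per (event, friend) pair; this write amplification is exactly
    -- the cost the 100,000-friend example above is worried about.
    INSERT INTO user_feed (user_id, event_id, created_at)
    SELECT f.friend_id, e.id, NOW()
    FROM event_log e
    JOIN friends f ON f.user_id = e.user_id
    WHERE e.processed = 0;
    ```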

  • Do variable references (aliases) incur runtime costs in C++?

    - by cheshirekow
    Maybe this is a compiler-specific thing. If so, how about for gcc (g++)? If you use a variable reference/alias like this:

    ```cpp
    int x = 5;
    int& y = x;
    y += 10;
    ```

    does it actually require more cycles than if we didn't use the reference?

    ```cpp
    int x = 5;
    x += 10;
    ```

    In other words, does the machine code change, or does the "alias" happen only at the compiler level? This may seem like a dumb question, but I am curious. Especially in the case where maybe it would be convenient to temporarily rename some member variables just so that the math code is a little easier to read. Sure, we're not exactly talking about a bottleneck here... but it's something that I'm doing and so I'm just wondering if there is any 'actual' difference... or if it's only cosmetic.
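
    Not part of the original question, but an easy way to settle it for a particular compiler is to compare the generated assembly of two otherwise identical functions (the file and function names here are just for illustration); with optimization enabled, g++ normally emits the same code for both:

    ```cpp
    // Compile with: g++ -O2 -S refcost.cpp   then diff the two functions in refcost.s
    int with_reference(int x) {
        int& y = x;   // the alias only exists in the front end
        y += 10;
        return x;
    }

    int without_reference(int x) {
        x += 10;
        return x;
    }
    ```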

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand limits to parallelization on a 48-core system (4x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

    ```c
    // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
    #include <stdio.h>
    #include <stdint.h>

    int main(){
        const uint64_t umin=1;
        const uint64_t umax=10000000000LL;
        double sum=0.;
    #pragma omp parallel for reduction(+:sum)
        for(uint64_t u=umin; u<umax; u++)
            sum+=1./u/u;
        printf("%e\n", sum);
    }
    ```

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9s for the code to run with 48 threads, 3.1s with 36 threads, 3.7s with 24 threads, 4.9s with 12 threads, and 57s for the code to run with 1 thread.

    Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt that's making the difference between a 19~20x speedup and the ideal 48x speedup.

    To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9s, so it's exactly the same as running 48 threads with a single instance of the program.

    What's preventing linear scaling with this simple program?

  • LINQ to Entities question about orderby and null collections.

    - by Chevex
    I am currently developing a forum. I am new to LINQ and EF. In my forum I have a display that shows a list of topics, with the most recent topics first. The problem is that "most recent" is relative to the topic's replies. So I don't want to order the list by the topic's posted date; rather, I want to order the list by the topic's last reply's posted date, so that topics with newer replies pop back to the top of the list. This is rather simple if I knew that every topic had at least one reply; I would just do this:

    ```csharp
    var topicsQuery = from x in board.Topics
                      orderby x.Replies.Last().PostedDate descending
                      select x;
    ```

    However, in many cases the topic has no replies, in which case I would like to use the topic's posted date instead. Is there a way within my LINQ query to order by x.PostedDate in the event that the topic has no replies? I'm getting confused by this and any help would be appreciated.

    With the above query, it breaks on topics with no replies because of the x.Replies.Last(), which assumes there are replies. LastOrDefault() doesn't work because I need to access the PostedDate property, which also assumes a reply exists. Thanks in advance for any insight.
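
    One shape the fallback can take (a sketch added here, not from the original post; whether it translates cleanly depends on the EF version) is a conditional sort key: use the newest reply date when replies exist, otherwise the topic's own date. Note that Last() has no SQL translation in classic LINQ to Entities, which is another reason to switch to Max():

    ```csharp
    var topicsQuery = from x in board.Topics
                      orderby (x.Replies.Any()
                                  ? x.Replies.Max(r => r.PostedDate)
                                  : x.PostedDate) descending
                      select x;
    ```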

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert it to a BufferedImage. In abbreviated code for the client:

    ```java
    public String writeAndReadSocket(String request) {
        // Write text to the socket
        BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
        bufferedWriter.write(request);
        bufferedWriter.flush();

        // Read text from the socket
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        // Read the prefixed size
        int size = Integer.parseInt(bufferedReader.readLine());

        // Get that many bytes from the stream
        char[] buf = new char[size];
        bufferedReader.read(buf, 0, size);
        return new String(buf);
    }

    public BufferedImage stringToBufferedImage(String imageBytes) {
        return ImageIO.read(new ByteArrayInputStream(s.getBytes()));
    }
    ```

    and the server:

    ```python
    # Twisted server code here
    # The analog of the following method is called with the proper client
    # request and the result is written to the socket.
    def worker_thread():
        img = draw_function()
        buf = StringIO.StringIO()
        img.save(buf, format="PNG")
        img_string = buf.getvalue()
        return "%i\r%s" % (sys.getsizeof(img_string), img_string)
    ```

    This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case.

    Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.
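
    Two details stand out as likely byte-count problems (editorial observations, not from the original post): sys.getsizeof() returns the size of the Python string object including its header rather than len(img_string), and on the Java side a Reader decodes the binary PNG bytes through a character set, which is lossy. A sketch of a byte-oriented client read, assuming the server sends len(img_string) followed by '\r':

    ```java
    import java.awt.image.BufferedImage;
    import java.io.BufferedInputStream;
    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.IOException;
    import java.net.Socket;
    import javax.imageio.ImageIO;

    // Hypothetical helper: parse the CR-terminated length as ASCII text, then
    // read exactly that many raw bytes and let ImageIO decode them.
    class ImageSocketReader {
        static BufferedImage readImage(Socket socket) throws IOException {
            DataInputStream in = new DataInputStream(new BufferedInputStream(socket.getInputStream()));

            StringBuilder lengthText = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != '\r') {
                lengthText.append((char) b);
            }
            int size = Integer.parseInt(lengthText.toString().trim());

            byte[] imageBytes = new byte[size];
            in.readFully(imageBytes);   // blocks until all 'size' bytes arrive

            return ImageIO.read(new ByteArrayInputStream(imageBytes));
        }
    }
    ```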

  • Java servlet: response.sendRedirect() not giving IllegalStateException if called after commit of response

    - by sahil garg
    After the response has been committed, the redirect statement here should give an exception, but it is not doing so when the redirect statement is inside the if block. It does give the exception when it is outside the if block. I have shown the same statement (marked with stars) in two places below. Can you please tell me the reason for this?

    ```java
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // TODO Auto-generated method stub
        synchronized (noOfRequests) {
            noOfRequests++;
        }

        PrintWriter pw = null;
        response.setContentType("text/html");
        response.setHeader("foo", "bar");
        // response is committed because of above statement
        pw = response.getWriter();
        pw.print("hello : " + noOfRequests);

        // if I remove the statement below, the same statement is still present in the
        // if block, so the statement in the if block should also give an exception as
        // this one does, but it's not doing so. Why?
        ***response.sendRedirect("http://localhost:8625/ServletPrc/login%20page.html");

        if (true) {
            // same statement as above
            ***response.sendRedirect("http://localhost:8625/ServletPrc/login%20page.html");
        } else {
            request.setAttribute("noOfReq", noOfRequests);
            request.setAttribute("name", new Name().getName());
            request.setAttribute("GmailId", this.getServletConfig().getInitParameter("GmailId"));
            request.setAttribute("YahooId", this.getServletConfig().getInitParameter("YahooId"));
            RequestDispatcher view1 = request.getRequestDispatcher("HomePage.jsp");
            view1.forward(request, response);
        }
    }
    ```

  • JBoss Cache as Hibernate 2nd-level cache - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build an architecture basically described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches with each cache having its own store), but with JBoss Cache configured as Hibernate's second-level cache.

    I've read the manual for several days and played with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it's not persisted in the datasource/database of the target (not original) cluster host.

    I had a hope that a node might become persistent at eviction, so I've written a cache listener and attached it to the @NodeEvicted event. I found that though I could adjust the eviction policy to fully control it, no persistence takes place at all. Then I had a thought that I could try to modify the CacheLoader to set "passivation" to true, but I found that in my case (Hibernate 2nd-level cache) I don't have a way to access a loader.

    I wonder if replicated-data persistence is possible at all through configuration tuning? If not, will it work for me to add some manual persistence in the CacheListener (I could check whether the eviction event is local and, if not, persist it to the Hibernate datasource somehow)?

    I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. The last thing to mention is that I've tested entity persistence and replication in a project that has been generated with Seam. I guess it's not important, though.

  • Using complex where clause in NHibernate mapping layer

    - by JLevett
    I've used where clauses previously in the mapping layer to prevent certain records from ever getting into my application at the lowest level possible (mainly to prevent having to rewrite lots of lines of code to filter out the unwanted records). These have been simple, one-column queries, like so:

    ```csharp
    this.Where("Invisible = 0");
    ```

    However, a scenario has appeared which requires the use of an exists SQL query:

    ```sql
    exists (select ep_.Id from [Warehouse].[dbo].EventPart ep_
            where Id = ep_.EventId and ep_.DataType = 4)
    ```

    In the above case I would usually reference the parent table Event with a short name, i.e. event_.Id, however as NHibernate generates these short names dynamically it's impossible to know what it's going to be. So instead I tried using just Id, as above (where Id = ep_.EventId).

    When the code is run, because of the dynamic short names, the EventPart table short name ep_ has another short name prefixed to it, event0_.ep_, where event0_ refers to the parent table. This causes an SQL error because of the . between event0_ and ep_.

    So in my EventMap I have the following:

    ```csharp
    this.Where("(exists (select ep_.Id from [isnapshot.Warehouse].[dbo].EventPart ep_ where Id = ep_.EventId and ep_.DataType = 4)");
    ```

    but when it's generated it creates this:

    ```sql
    select cast(count(*) as INT) as col_0_0_
    from [isnapshot.Warehouse].[dbo].Event event0_
    where (exists (select ep_.Id from [isnapshot.Warehouse].[dbo].EventPart event0_.ep_
                   where event0_.Id = ep_.EventId and ep_.DataType = 4)
    ```

    It has correctly added the event0_ to the Id. Was the mapping layer where clause built to handle this, and if so, where am I going wrong?

  • Very strange Application.ThreadException behaviour.

    - by Brann
    I'm using the Application.ThreadException event to handle and log unexpected exceptions in my WinForms application. Now, somewhere in my application, I've got the following code (or rather something equivalent, but this dummy code is enough to reproduce my issue):

    ```csharp
    try
    {
        throw new NullReferenceException("test");
    }
    catch (Exception ex)
    {
        throw new Exception("test2", ex);
    }
    ```

    I'm clearly expecting my Application_ThreadException handler to be passed the "test2" exception, but this is not always the case. Typically, if another thread marshals my code to the UI, my handler receives the "test" exception, exactly as if I hadn't caught "test" at all. Here is a short sample reproducing this behavior. I have omitted the designer's code.

    ```csharp
    static class Program
    {
        [STAThread]
        static void Main()
        {
            Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(Application_ThreadException);
            Application.EnableVisualStyles();
            Application.SetCompatibleTextRenderingDefault(false);
            Application.Run(new Form1());
        }

        static void Application_ThreadException(object sender, System.Threading.ThreadExceptionEventArgs e)
        {
            Console.WriteLine(e.Exception.Message);
        }
    }

    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            button1.Click += new EventHandler(button1_Click);
            System.Threading.Thread t = new System.Threading.Thread(new System.Threading.ThreadStart(ThrowEx));
            t.Start();
        }

        private void button1_Click(object sender, EventArgs e)
        {
            try
            {
                throw new NullReferenceException("test");
            }
            catch (Exception ex)
            {
                throw new Exception("test2", ex);
            }
        }

        void ThrowEx()
        {
            this.BeginInvoke(new EventHandler(button1_Click));
        }
    }
    ```

    The output of this program on my computer is:

    ```
    test
    ... here I click button1
    test2
    ```

    I've reproduced this on .NET 2.0, 3.5 and 4.0. Does someone have a logical explanation?

  • Getting started with open source

    - by lola
    Hi all, I'm an undergraduate who has decided that he wants to join the open source community and contribute. However, I have come to think that, once you have chosen an open source project, a lot of time is spent learning the nitty-gritty of that project, in addition to stuff like Subversion, etc., which a typical undergraduate isn't exposed to. So you have to stick with that project for a long time, say a year or two, before moving on to other projects.

    In this case, choosing the right (for you) initial project is very important, since if you choose one and, say, the development in your field of interest (in that project) is a low priority and not exciting enough, you'll lose interest and stop contributing to open source altogether. So what I wanted to know was: since there are thousands of open source projects, is all this being documented somewhere with tags, etc., so that a beginner can choose his projects? The GSoC 2010 ideas list is a great starting point, but it only covers a handful. Hence, I thought why not ask this at Stack Overflow: if you have any pointers as to where to start when choosing a FOSS project, or any other tips related to starting with FOSS, they would be welcome.

    P.S. I'm interested in projects involving mobile ad hoc networks (those using TinyOS, preferably), so pointers related to these will be great. I'm looking through Freifunk and OLPC as of now, but need more ideas.

  • Thread Safety of C# List<T> for readers

    - by ILIA BROUDNO
    I am planning to create the list once in a static constructor and then have multiple instances of that class read it (and enumerate through it) concurrently, without doing any locking. In this article http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx MS describes the issue of thread safety as follows:

        "Public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. A List can support multiple readers concurrently, as long as the collection is not modified. Enumerating through a collection is intrinsically not a thread-safe procedure. In the rare case where an enumeration contends with one or more write accesses, the only way to ensure thread safety is to lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization."

    The statement "Enumerating through a collection is intrinsically not a thread-safe procedure" is what worries me. Does this mean that it is thread safe for a readers-only scenario, but only as long as you do not use enumeration? Or is it safe for my scenario?

  • Does unboxing just return a pointer to the value within the boxed object on the heap?

    - by Charles
    In this MSDN Magazine article, the author states (emphasis mine):

        "Note that boxing always creates a new object and copies the unboxed value's bits to the object. On the other hand, unboxing simply returns a pointer to the data within a boxed object: no memory copy occurs. However, it is commonly the case that your code will cause the data pointed to by the unboxed reference to be copied anyway."

    I'm confused by the sentence I've bolded and the sentence that follows it. From everything else I've read, including this MSDN page, I've never before heard that unboxing just returns a pointer to the value on the heap. I was under the impression that unboxing would result in you having a variable containing a copy of the value on the stack, just as you began with. After all, if my variable contains "a pointer to the value on the heap", then I haven't got a value type, I've got a pointer. Can someone explain what this means? Was the author on crack? (There is at least one other glaring error in the article.)

    And if this is true, what are the cases where "your code will cause the data pointed to by the unboxed reference to be copied anyway"? I just noticed that the article is nearly 10 years old, so maybe this is something that changed very early on in the life of .NET.

  • Telerik RadGrid: grid client-side pagination

    - by ram
    I have a web service which returns me some data; I am massaging this data and using it as the datasource for my RadGrid (Telerik). The datasource is quite large, and I would like to paginate it. I found a couple of problems when I paginate it on the server side:

    1. I have to bind the grid again for pagination, which essentially means I have to make a call to the WS again to get the data. This is an expensive call for me. I would rather forgo the benefits of pagination and display all the results on the same page, except that it would be a bit clumsy.

    2. During the postback, RadGrid1.Items.Count happens to be the number of items getting paginated (25 in my case), which is expected, as not all the items in the datasource get bound. This of course is not an issue. The real issue is that we have some checkboxes which get checked based on some business condition. We add this to our business object/DB later. So if the user has not navigated all the pages, these "checked" items do not get added, as pagination limits the "Items" in the grid to those which get bound for that particular page index.

    My thoughts: I would rather have some sort of client-side pagination, where we can hide/show contents, than going to the server and doing a databind every time. Though it will return all the results, the UI will not be clumsy and the grid would have "all the items" during postback.

    Is there a way to do it? If it were a regular ASP.NET GridView, can someone point me to a good article which would serve my purpose?

    Ram

    PS: Who else thinks RadGrid is crazy? (Unfortunately I did not make this choice.)

  • Different standard streams per POSIX thread

    - by Roman Nikitchenko
    Is there any possibility to achieve different redirections of standard output (e.g. printf(3)) for different POSIX threads? What about standard input? I have a lot of code based on standard input/output and I can only separate this code into different POSIX threads, not processes. The operating system is Linux, with the C standard library.

    I know I can refactor the code to replace printf() with fprintf(), and so on in that style. But in that case I need to provide some kind of context which the old code doesn't have. So does anybody have a better idea (see the code below)?

    ```c
    #include <pthread.h>
    #include <stdio.h>

    void* different_thread(void* arg)
    {
        // Something to redirect standard output which doesn't affect main thread.
        // ...
        // printf() shall go to a different stream.
        printf("subthread test\n");
        return NULL;
    }

    int main()
    {
        pthread_t id;
        pthread_create(&id, NULL, different_thread, NULL);

        // In main thread things should be printed normally...
        printf("main thread test\n");

        pthread_join(id, NULL);
        return 0;
    }
    ```
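
    Not from the original post, and very much a hack rather than a clean answer: on Linux/GCC one can combine thread-local storage with a macro so that unmodified printf() calls pick a per-thread stream at run time. Whether redefining printf this way is acceptable depends on the codebase; treat it as an assumption to evaluate, not a recommendation:

    ```c
    /* Compile with: gcc -pthread redirect.c */
    #include <pthread.h>
    #include <stdio.h>

    static __thread FILE *thread_stdout;   /* NULL means "use the real stdout" */

    /* Must be seen (e.g. via a common header) by every file to be redirected. */
    #define printf(...) fprintf(thread_stdout ? thread_stdout : stdout, __VA_ARGS__)

    static void *different_thread(void *path)
    {
        thread_stdout = fopen((const char *)path, "w");  /* per-thread destination */
        printf("subthread test\n");                      /* goes to the file */
        if (thread_stdout)
            fclose(thread_stdout);
        return NULL;
    }

    int main(void)
    {
        pthread_t id;
        pthread_create(&id, NULL, different_thread, "subthread.log");
        printf("main thread test\n");   /* thread_stdout is NULL here -> stdout */
        pthread_join(id, NULL);
        return 0;
    }
    ```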
