Search Results

Search found 13869 results on 555 pages for 'memory dump'.

Page 484/555

  • How to cast sockaddr_storage and avoid breaking strict-aliasing rules

    - by sinoth
    I'm using Beej's Guide to Networking and came across an aliasing issue. He proposes a function to return either the IPv4 or IPv6 address of a particular struct:

        1 void *get_in_addr( struct sockaddr *sa )
        2 {
        3     if (sa->sa_family == AF_INET)
        4         return &(((struct sockaddr_in*)sa)->sin_addr);
        5     else
        6         return &(((struct sockaddr_in6*)sa)->sin6_addr);
        7 }

    This causes GCC to spit out a strict-aliasing error for sa on line 3. As I understand it, it is because I call this function like so:

        struct sockaddr_storage their_addr;
        ...
        inet_ntop(their_addr.ss_family,
                  get_in_addr((struct sockaddr *)&their_addr),
                  connection_name, sizeof connection_name);

    I'm guessing the aliasing has to do with the fact that the their_addr variable is of type sockaddr_storage and another pointer of a differing type points to the same memory. Is the best way to get around this sticking sockaddr_storage, sockaddr_in, and sockaddr_in6 into a union? It seems like this should be well worn territory in networking, I just can't find any good examples with best practices. Also, if anyone can explain exactly where the aliasing issue takes place, I'd much appreciate it.

    Read the article

  • JConsole does not connect when a program (running in eclipse) is waiting for a console input

    - by Calm Storm
    Hi, I have a problem similar to the one mentioned here. I have a program which launches a Selenium browser and refreshes it every 2 seconds. My idea here was to monitor the memory usage of my application as the page is being hit. This console program does a "System.in.read", and when the user presses the enter key, this stops the browser refresh thread. I run the program and it runs as I would expect it to. But when I fire up JConsole and attach it to the process, it looks like JConsole waits forever to attach. This is because of the "System.in.read", apparently as shown in the JConsole source. In my code it makes perfect sense for me to terminate the browser when the user presses something, so is there any alternative available at all? (Posted code sample below.)

        package com.ekanathk;

        import java.util.ArrayList;
        import java.util.List;

        import org.junit.Test;

        public class MultiThread {

            @Test
            public void testSimple() throws Exception {
                List<Integer> x = new ArrayList<Integer>();
                for (int i = 0; i < 1000; i++) {
                    x.add(i);
                }
                System.out.print("Press Enter to stop (try attach a jconsole now....");
                System.in.read();
            }
        }

    The problem here seems to be that any System.in.read (either in the main thread or any other thread) seems to completely block JConsole. I guess the problem essentially boils down to multiple threads requesting system input at the same time? Is there a solution?

    UPDATE 1: It seems this works fine when launching the class from a command line; however, when I run it from Eclipse, JConsole can't connect. Any reasons?

    Read the article

  • How to get the "real" colors when exporting a GDI drawing to bmp

    - by Rodrigo
    Hi guys, I'm developing an ASP.NET web handler that returns images by making an in-memory System.Windows.Forms.Control and then exporting the rendered control as a bitmap compressed to PNG, using the method DrawToBitmap(). The classes are fully working by now, except for a problem with the color assignment. For example, in a gauge generated by the web handler, the colors assigned to the inner parts of the gauge are red (#FF0000), yellow (#FFFF00) and green (#00FF00), but I only get a dull version of each color (#CB100F for red, #CCB70D for yellow and #04D50D for green). The background is a bmp file containing the color gradient. The color loss happens whether the background is a gradient as in the sample, a black canvas, a white canvas, a transparent canvas, or even when there is no background set.

    [Sample images in the original post: with a black background, with a transparent background, with a white background, without a background set, and with the pixel format Format32bppArgb.]

    I've tried multiple bitmap color depths, output formats, compression levels, etc., but none of them seems to work. Any ideas? This is an excerpt of the source code:

        Bitmap bmp = new Bitmap(w, h, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
        Image bgimage = (Image) HttpContext.GetGlobalResourceObject("GraphicResources", "GaugeBackgroundImage");
        Canvas control_canvas = new Canvas();
        ....
        //the routine that makes the gauge
        ....
        control_canvas.DrawToBitmap(bmp, new Rectangle(0, 0, w, h));

    Read the article

  • Creating a DataTable by filtering another DataTable

    - by Jeff Dege
    I'm working on a system that currently has a fairly complicated function that returns a DataTable, which it then binds to a GUI control on an ASP.NET WebForm. My problem is that I need to filter the data returned - some of the data that is being returned should not be displayed to the user. I'm aware of DataTable.Select(), but that's not really what I need. First, it returns an array of DataRows, and I need a DataTable, so I can databind it to the GUI control. But more importantly, the filtering I need to do isn't something that can be easily put into a simple expression. I have an array of the elements which I do not want displayed, and I need to compare each element from the DataTable against that array. What I could do, of course, is to create a new DataTable, reading everything out of the original, adding to the new what is appropriate, then databinding the new to the GUI control. But that just seems wrong, somehow. In this case, the number of elements in the original DataTable aren't likely to be enough that copying them all in memory is going to cause too much trouble, but I'm wondering if there is another way. Does the .NET DataTable have functionality that would allow me to filter via a callback function?
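    One pattern that fits the callback-style filtering described above is to Clone() the original table (copying only its schema) and ImportRow() each row that passes a predicate; the result is still a DataTable, so it can be data-bound directly. The sketch below is only an illustration of that idea, not code from the question; the column name "AccountId" and the excludedIds array are hypothetical stand-ins.

        // Sketch: filter a DataTable through an arbitrary callback while keeping a DataTable result.
        using System;
        using System.Data;
        using System.Linq;

        static class DataTableFilter
        {
            public static DataTable Where(DataTable source, Func<DataRow, bool> predicate)
            {
                DataTable result = source.Clone();   // same columns and constraints, no rows
                foreach (DataRow row in source.Rows)
                {
                    if (predicate(row))
                        result.ImportRow(row);       // copies the row's current values
                }
                return result;
            }
        }

        // Usage: hide rows whose key appears in the "do not display" array.
        // int[] excludedIds = ...;
        // DataTable visible = DataTableFilter.Where(original,
        //     r => !excludedIds.Contains((int)r["AccountId"]));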

    Read the article

  • Is there a PHP benchmark that meets these specific criteria? [closed]

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for some meaningful PHP benchmark with the following characteristics:

    - The PHP source needs to be in a single file.
    - It should solve a real-world problem. No silly looping over empty methods etc.
    - I need it to be simple to set up - no databases, hard-to-find input files, etc. Simple text input and output preferred.
    - It should not use features that are slow in Java (BigInteger, trigonometric functions, etc).
    - It should not use esoteric or dynamic PHP functions (e.g. no "eval" or "variable vars").
    - It should not over-rely on built-in libraries, e.g. MD5, crypt, etc.
    - It should not be I/O bound. A CPU-bound, memory-hungry algorithm is preferred.

    Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • should std::auto_ptr<>::operator = reset / deallocate its existing pointee ?

    - by afriza
    I read here about std::auto_ptr<>::operator=:

        Notice however that the left-hand side object is not automatically deallocated when it already points to some object. You can explicitly do this by calling member function reset before assigning it a new value.

    However, when I read the source code for the header file C:\Program Files\Microsoft Visual Studio 8\VC\ce\include\memory:

        template<class _Other>
        auto_ptr<_Ty>& operator=(auto_ptr<_Other>& _Right) _THROW0()
        {   // assign compatible _Right (assume pointer)
            reset(_Right.release());
            return (*this);
        }

        auto_ptr<_Ty>& operator=(auto_ptr<_Ty>& _Right) _THROW0()
        {   // assign compatible _Right (assume pointer)
            reset(_Right.release());
            return (*this);
        }

        auto_ptr<_Ty>& operator=(auto_ptr_ref<_Ty> _Right) _THROW0()
        {   // assign compatible _Right._Ref (assume pointer)
            _Ty **_Pptr = (_Ty **)_Right._Ref;
            _Ty *_Ptr = *_Pptr;
            *_Pptr = 0;     // release old
            reset(_Ptr);    // set new
            return (*this);
        }

    What is the correct/standard behavior? How do other STL implementations behave?

    Read the article

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, We're using EF4 in a fairly large system and occasionally run into problems due to EF4 being unable to convert certain expressions into SQL. At present, we either need to do some fancy footwork (DB/Code) or just accept the performance hit and allow the query to be executed in-memory. Needless to say neither of these is ideal and the hacks we've sometimes had to use reduce readability / maintainability. What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously there are some things like .Net method calls which will always have to be client-side but some functionality like date comparisons (eg Group by weeks in Linq to Entities ) should be do-able. I've Googled but perhaps I'm using the wrong terminology as all I get is information about the new features of EF4 SQL Generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions / similar in the expression tree it's given to convert to SQL. Any help/pointers appreciated. We're using VS2010 Ultimate, .Net 4 (non-client profile) and EF4. The app is in ASP.Net and is running in a 64-Bit environment in case it makes a difference.

    Read the article

  • Django upload failing on request data read error

    - by Jake
    Hi All, I've got a Django app that accepts uploads from jQuery uploadify, a jQ plugin that uses flash to upload files and give a progress bar. Files under about 150k work, but bigger files always fail and almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The Exception I get is below.

        exceptions.IOError
        request data read error

        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
          self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
          self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
          return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
          for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
          for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
          data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
          return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting my

        FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)

    in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?

    Read the article

  • Please criticize this method

    - by Jakob
    Hi, I've been looking around the net for some tab button close functionality, but all those solutions had some complicated event handler, and I wanted to try and keep it simple, but I might have broken good code ethics doing so, so please review this method and tell me what is wrong.

        public void AddCloseItem(string header, object content)
        {
            //Create tabitem with header and content
            StackPanel headerPanel = new StackPanel() { Orientation = Orientation.Horizontal, Height = 14 };
            headerPanel.Children.Add(new TextBlock() { Text = header });
            Button closeBtn = new Button()
            {
                Content = new Image() { Source = new BitmapImage(new Uri("images/cross.png", UriKind.Relative)) },
                Margin = new Thickness() { Left = 10 }
            };
            headerPanel.Children.Add(closeBtn);
            TabItem newTabItem = new TabItem() { Header = headerPanel, Content = content };

            //Add close button functionality
            closeBtn.Tag = newTabItem;
            closeBtn.Click += new RoutedEventHandler(closeBtn_Click);

            //Add item to list
            this.Add(newTabItem);
        }

        void closeBtn_Click(object sender, RoutedEventArgs e)
        {
            this.Remove((TabItem)((Button)sender).Tag);
        }

    So what I'm doing is storing the tabitem in the btn.Tag property, and then when the button is clicked I just remove the tabitem from my ObservableCollection, and the UI is updated appropriately. Am I using too much memory saving the tabitem to the Tag property?

    Read the article

  • Best practices for Java logging from multiple threads?

    - by Jason S
    I want to have a diagnostic log that is produced by several tasks managing data. These tasks may be in multiple threads. Each task needs to write an element (possibly with subelements) to the log; get in and get out quickly. If this were a single-task situation I'd use XMLStreamWriter as it seems like the best match for simplicity/functionality without having to hold a ballooning XML document in memory. But it's not a single-task situation, and I'm not sure how to best make sure this is "threadsafe", where "threadsafe" in this application means that each log element should be written to the log correctly and serially (one after the other and not interleaved in any way). Any suggestions? I have a vague intuition that the way to go is to use a queue of log elements (with each one able to be produced quickly: my application is busy doing real work that's performance-sensitive), and have a separate thread which handles the log elements and sends them to a file so the logging doesn't interrupt the producers. The logging doesn't necessarily have to be XML, but I do want it to be structured and machine-readable. edit: I put "threadsafe" in quotes. Log4j seems to be the obvious choice (new to me but old to the community), why reinvent the wheel...

    Read the article

  • Why do i get exc bad access in cases when object is not nil?

    - by DixieFlatline
    I have an app that receives remote notifications. My view controller that is shown after the push has a tableview. The app crashes very randomly (1 in 20 tries) at the line setting the frame:

        if (!myTableView) {
            NSLog(@"self.myTableView is nil");
        }
        myTableView.frame = CGRectMake(0, 70, 320, 376);

    This only happens when I open the app, then open some other apps and then receive the push notification. I guess it has something to do with memory. I use ARC (iOS 5). The strange thing is that the NSLog is not displayed, so the tableview is not nil. Crash log:

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x522d580c
        Crashed Thread:  0

        Thread 0 name:  Dispatch queue: com.apple.main-thread
        Thread 0 Crashed:
        0   libobjc.A.dylib   0x352b1f7e objc_msgSend + 22
        1   Foundation        0x37dc174c NSKVOPendingNotificationCreate + 216
        2   Foundation        0x37dc1652 NSKeyValuePushPendingNotificationPerThread + 62
        3   Foundation        0x37db3744 NSKeyValueWillChange + 408
        4   Foundation        0x37d8a848 -[NSObject(NSKeyValueObserverNotification) willChangeValueForKey:] + 176
        5   Foundation        0x37e0ca14 _NSSetPointValueAndNotify + 76
        6   UIKit             0x312af25a -[UIScrollView(Static) _adjustContentOffsetIfNecessary] + 1890
        7   UIKit             0x312cca54 -[UIScrollView setFrame:] + 548
        8   UIKit             0x312cc802 -[UITableView setFrame:] + 182
        9   POViO             0x000913cc -[FeedVC viewWillAppear:] (FeedVC.m:303)

    Dealloc is not called, because it is not logged:

        - (void)dealloc {
            NSLog(@"dealloc");
        }

    Read the article

  • Windows console

    - by b-gen-jack-o-neill
    Hello. Well, I have a simple question, at least I hope it's simple. I've been interested in the Win32 console for a while. Our teacher told us that the Windows console is just for DOS and real-mode emulation purposes. Well, I know that isn't true, because DOS applications are run by an emulator which only uses the console to display output. Another thing I learned is that the console has been built into Windows since NT. Well. But what I could not find is how console programs are actually written to use the console. I use Visual C++ for programming (well, for learning). So the only thing I need to do to use the console is select a console project. I first thought that Windows decides whether to run the app in console mode or tries to run the app in window mode. So I created a Win32 program and tried printf(). Well, I could not compile it. I know that by definition printf() prints text or variables to stdout. I also found that stdout is the console interface for output. But I could not find what stdout actually is.

    So basically what I want to ask is, where is the difference between a console app and a Win32 app? I thought that Windows starts a console when it gets a command from a "console-family" function. But obviously it does not, so there must be some code that actually commands Windows to create the console interface. And the second question is, when the console is created, how does Windows recognize which console terminal is used for which app? I mean, what actually is stdout? Is it an area in memory, or some Windows routine that is called? Thanks.

    Read the article

  • asp.net, how to change text color to red in child nodes in VB codes?

    - by StudentIT
    I've got everything working now with a list of server names, but I want to add an IF statement that checks a column from SQL Server called Compliance, which holds either a True or False value. If it is False, the Name should change its text color to red. If it is True, the Name keeps its color. I am not sure how to add that on the VB code side; I am pretty sure the IF statement would need to go inside While dr.Read(). I am pretty new to VB.Net and not sure which VB code changes text color. Here is my VB code:

        Sub loadData()
            'clear treeview control
            TreeViewGroups.Nodes.Clear()

            'fetch owner data and save to in memory table
            Dim sqlConn As New System.Data.SqlClient.SqlConnection((ConfigurationManager.ConnectionStrings("SOCT").ConnectionString))
            Dim strSqlSecondary As String = "SELECT [Name] FROM [dbo].[ServerOwners] where SecondaryOwner like @uid order by [name]"
            'Getting a list of True or False from Compliance column
            Dim strSqlCompliance As String = "SELECT [Compliance] FROM [dbo].[ServerOwners] where SecondaryOwner like @uid order by [name]"
            Dim cmdSecondary As New System.Data.SqlClient.SqlCommand(strSqlSecondary, sqlConn)
            Dim cmdCompliance As New System.Data.SqlClient.SqlCommand(strSqlCompliance, sqlConn)
            cmdSecondary.Parameters.AddWithValue("@uid", TNN.NEAt.GetUserID())
            cmdCompliance.Parameters.AddWithValue("@uid", TNN.NEAt.GetUserID())
            Dim dr As System.Data.SqlClient.SqlDataReader
            Try
                sqlConn.Open()
                Dim root As TreeNode
                Dim rootNode As TreeNode
                Dim firstNode As Integer = 0

                'Load Primary Owner Node
                'Create RootTreeNode
                dr = cmdSecondary.ExecuteReader()
                If dr.HasRows Then
                    'Load Secondary Owner Node
                    'Create RootTreeNode
                    root = New TreeNode("Secondary Owner", "Secondary Owner")
                    TreeViewGroups.Nodes.Add(root)
                    root.SelectAction = TreeNodeSelectAction.None
                    rootNode = TreeViewGroups.Nodes(firstNode)

                    'populate the child nodes
                    While dr.Read()
                        Dim child As TreeNode = New TreeNode(dr("Name"), dr("Name"))
                        rootNode.ChildNodes.Add(child)
                        child.SelectAction = TreeNodeSelectAction.None
                    End While
                    dr.Close()
                    cmdSecondary.Dispose()
                End If

                'check if treeview has nodes
                If TreeViewGroups.Nodes.Count = 0 Then
                    noServers()
                End If
            Catch ex As Exception
                hide()
                PanelError.Visible = True
                LabelError.Text = ex.ToString()
            Finally
                sqlConn.Dispose()
            End Try
        End Sub

    Read the article

  • ORGetValue from Offline Registry - ERROR_MORE_DATA

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find out more information on the offreg.dll here: MSDN. Currently, while attempting to read a value from an open registry hive / key I receive the following error: 234 or ERROR_MORE_DATA.

    Here is the .h code that contains ORGetValue:

        DWORD
        ORAPI
        ORGetValue (
            __in ORHKEY Handle,
            __in_opt PCWSTR lpSubKey,
            __in_opt PCWSTR lpValue,
            __out_opt PDWORD pdwType,
            __out_bcount_opt(*pcbData) PVOID pvData,
            __inout_opt PDWORD pcbData
            );

    Here is the code that I am using to pull the data:

        [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue", SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue, out uint pdwType, out string pvData, out uint pcbData);

        IntPtr myHive;
        IntPtr myKey;
        string myValue;
        uint pdwtype;
        uint pcbdata;

        uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata);

    The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling... or a second call with an adjusted buffer.. Or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
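    ERROR_MORE_DATA from registry-style Win32 APIs usually means the caller-supplied buffer is too small, and the usual pattern is two calls: one with no buffer to learn the required size, then one with a buffer of exactly that size. The sketch below only illustrates that pattern; the revised P/Invoke signature (pvData marshaled as a byte[] rather than out string) is an assumption and has not been checked against offreg.h.

        using System;
        using System.Runtime.InteropServices;

        static class OffRegHelper
        {
            // Assumed P/Invoke signature: pvData as a raw byte buffer, pcbData as an in/out byte count.
            [DllImport("offreg.dll", CharSet = CharSet.Unicode, SetLastError = true)]
            static extern uint ORGetValue(IntPtr handle, string lpSubKey, string lpValue,
                                          out uint pdwType, byte[] pvData, ref uint pcbData);

            public static string GetStringValue(IntPtr key, string subKey, string valueName)
            {
                uint type;
                uint cbData = 0;

                // First call: no buffer, just ask for the required size in bytes.
                ORGetValue(key, subKey, valueName, out type, null, ref cbData);

                // Second call: a buffer of exactly that size.
                byte[] buffer = new byte[cbData];
                uint rc = ORGetValue(key, subKey, valueName, out type, buffer, ref cbData);
                if (rc != 0)
                    throw new System.ComponentModel.Win32Exception((int)rc);

                // REG_SZ data is UTF-16 with a trailing null terminator; trim it off.
                return System.Text.Encoding.Unicode.GetString(buffer, 0, (int)cbData).TrimEnd('\0');
            }
        }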

    Read the article

  • Behavior of retained property while holder is retained

    - by Aurélien Vallée
    Hello everyone, I am a beginner Objective-C programmer, coming from the C++ world. I find it very difficult to understand the memory management offered by NSObject :/ Say I have the following class:

        @interface User : NSObject {
            NSString* name;
        }
        @property (nonatomic,retain) NSString* name;
        - (id) initWithName: (NSString*) theName;
        - (void) release;
        @end

        @implementation User
        @synthesize name;

        - (id) initWithName: (NSString*) theName {
            if ( self = [super init] ) {
                [self setName:theName];
            }
            return self;
        }

        - (void) release {
            [name release];
            [super release];
        }
        @end

    Now, considering the following code, I can't understand the retain count results:

        NSString* name = [[NSString alloc] initWithCString:/*C string from sqlite3*/]; // (1) name retainCount = 1
        User* user = [[User alloc] initWithName:name];                                 // (2) name retainCount = 2
        [whateverMutableArray addObject:user];                                         // (3) name retainCount = 2
        [user release];                                                                // (4) name retainCount = 1
        [name release];                                                                // (5) name retainCount = 0

    At (4), the retain count of name decreased from 2 to 1. But that's not correct - there is still the instance of user inside the array that points to name! The retain count of a variable should only decrease when the retain count of a referring variable is 0, that is, when it is dealloced, not released.

    Read the article

  • C#/Java: Proper Implementation of CompareTo when Equals tests reference identity

    - by Paul A Jungwirth
    I believe this question applies equally well to C# as to Java, because both require that {c,C}ompareTo be consistent with {e,E}quals: Suppose I want my equals() method to be the same as a reference check, i.e.:

        public bool equals(Object o) {
            return this == o;
        }

    In that case, how do I implement compareTo(Object o) (or its generic equivalent)? Part of it is easy, but I'm not sure about the other part:

        public int compareTo(Object o) {
            if (! (o instanceof MyClass)) return false;
            MyClass other = (MyClass)o;
            if (this == other) {
                return 0;
            } else {
                int c = foo.CompareTo(other.foo)
                if (c == 0) {
                    // what here?
                } else {
                    return c;
                }
            }
        }

    I can't just blindly return 1 or -1, because the solution should adhere to the normal requirements of compareTo. I can check all the instance fields, but if they are all equal, I'd still like compareTo to return a value other than 0. It should be true that a.compareTo(b) == -(b.compareTo(a)), and the ordering should stay consistent as long as the objects' state doesn't change. I don't care about ordering across invocations of the virtual machine, however. This makes me think that I could use something like memory address, if I could get at it. Then again, maybe that won't work, because the Garbage Collector could decide to move my objects around. hashCode is another idea, but I'd like something that will be always unique, not just mostly unique. Any ideas?
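    One way to get a total order that never reports 0 for two distinct instances - sketched below in C# purely as an illustration, not as the canonical answer - is to stamp each object with a monotonically increasing sequence number at construction and use it as the final tie-breaker. The class name MyClass, the Foo property and the _sequence field are hypothetical.

        using System;
        using System.Threading;

        // Sketch: identity-based equality plus a stable, deterministic tie-breaker for CompareTo.
        public sealed class MyClass : IComparable<MyClass>
        {
            private static long _counter;

            // Assigned once per instance, in construction order; unique for the process lifetime.
            private readonly long _sequence = Interlocked.Increment(ref _counter);

            public int Foo { get; set; }

            public override bool Equals(object o) { return ReferenceEquals(this, o); }
            public override int GetHashCode() { return base.GetHashCode(); }

            public int CompareTo(MyClass other)
            {
                if (other == null) return 1;
                if (ReferenceEquals(this, other)) return 0;

                int c = Foo.CompareTo(other.Foo);
                if (c != 0) return c;

                // Distinct instances with equal state: fall back to creation order,
                // which keeps a.CompareTo(b) == -(b.CompareTo(a)) and never returns 0.
                return _sequence.CompareTo(other._sequence);
            }
        }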

    Read the article

  • NonStop ODBC: how the connections (ODBC servers) are assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has an JDBC pool connected to ODBC pool (e.g. 14 connections per application). With time (after a few application recycles) we see an imbalance between CPUs -- some have 8 ODBC processes running, some only 5. That leads to CPU time imbalance too. Up to this point we assumed that a CPU is assigned to ODBC process in round-robin fashion. That would maintain the number of ODBC processes more or less equally distributed. It's not the case though. Is there any information on how ODBC pool decided which CPU to choose for every new allocated process? Does it look at CPU load? Available memory? Something else? Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(

    Read the article

  • ASP.NET MVC intermittent slow response

    - by arehman
    Problem In our production environment, system occasionally delays the page response of an ASP.NET MVC application up to 30 seconds or so, even though same page renders in 2-3 seconds most of the times. This happens randomly with any arbitrary page, and GET or POST type requests. For example, log files indicates, system took 15 seconds to complete a request for jquery script file or for other small css file it took 10 secs. Similar Problems: Random Slow Downs Production Environment: Windows Server 2008 - Standard (32-bit) - App Pool running in integrated mode. ASP.NET MVC 1.0 We have tried followings/observations: Moved the application to a stand alone web server, but, it didn't help. We didn't ever notice same issue on the server for any 'ASP.NET' application. App Pool settings are fine. No abrupt recycles/shutdowns. No cpu spikes or memory problems. No delays due to SQL queries or so. It seems as something causing delay along HTTP Pipeline or worker processor seeing the request late. Looking for other suggestions. -- Thanks

    Read the article

  • Making two Windows using CreateWindowEx() - C

    - by Jamie Keeling
    Hello, I have a Windows form that has a simple menu and performs a simple operation. I want to be able to create another Windows form, with all the functionality of a menu bar, message pump etc., as a separate thread, so I can then share the results of the operation with the second window. I.e.:

    1) Form A opens, Form B opens as a separate thread
    2) Form A performs operation
    3) Form A passes results via memory to Form B
    4) Form B displays results

    I'm confused as to how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I think that using CreateWindow will allow me to make another window, but again I'm not sure how to access the message pump so I can respond to certain events like WM_CREATE on the second window. I hope it makes sense. Thanks!

    Edit: I've attempted to make a second window, and although this does compile, no window shows at all on build.

        //////////////////////
        // WINDOWS FUNCTION //
        //////////////////////
        LRESULT CALLBACK WindowFunc(HWND hMainWindow, UINT message, WPARAM wParam, LPARAM lParam)
        {
            //Fields
            WCHAR buffer[256];
            struct DiceData storage;
            HWND hwnd;

            // Act on current message
            switch(message)
            {
            case WM_CREATE:
                AddMenus(hMainWindow);
                hwnd = CreateWindowEx(
                    0,
                    "ChildWClass",
                    (LPCTSTR) NULL,
                    WS_CHILD | WS_BORDER | WS_VISIBLE,
                    0, 0, 0, 0,
                    hMainWindow,
                    NULL,
                    NULL,
                    NULL);
                ShowWindow(hwnd, SW_SHOW);
                break;

    Any suggestions as to why this happens?

    Read the article

  • nhibernate error recovery

    - by Berryl
    I downloaded Rhino Security today and started going through some of the tests. Several that run perfectly in isolation start getting errors after one that purposely raises an exception runs though. Here is that test:

        [Test]
        public void EntitesGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateEntitiesGroup("Accounts");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<EntitiesGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

    And here are the tests and error messages that fail:

        [Test]
        public void User_CanSave()
        {
            var ayende = new User { Name = "ayende" };
            _session.Save(ayende);
            _session.Flush();
            _session.Evict(ayende);

            var fromDb = _session.Get<User>(ayende.Id);
            Assert.That(fromDb, Is.Not.Null);
            Assert.That(ayende.Name, Is.EqualTo(fromDb.Name));
        }

        ----> System.Data.SQLite.SQLiteException : Abort due to constraint violation
              column Name is not unique

        [Test]
        public void UsersGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateUsersGroup("Admininstrators");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<UsersGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

        failed: NHibernate.AssertionFailure : null id in Rhino.Security.Tests.User entry
                (don't flush the Session after an exception occurs)

    Does anyone see how I can reset the state of the in-memory SQLite db after the first test? I changed the code to use NUnit instead of xUnit, so maybe that is part of the problem here as well. Cheers, Berryl
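    Since an in-memory SQLite database only lives as long as its connection, one common way to keep one test's failure from leaking into the next - sketched below as an assumption about how such a fixture could be wired, not as Rhino Security's actual test base class - is to open a fresh session (and therefore a fresh connection and a fresh empty database) in [SetUp], export the schema onto that connection, and throw everything away in [TearDown]. The Configuration and ISessionFactory fields, and the exact SchemaExport.Execute overload, are assumptions; the connection-taking overload differs between NHibernate versions.

        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;
        using NUnit.Framework;

        public abstract class InMemoryDatabaseFixture
        {
            // Assumed to be built once per assembly from the test mappings.
            protected static Configuration _cfg;
            protected static ISessionFactory _factory;

            protected ISession _session;

            [SetUp]
            public void OpenFreshDatabase()
            {
                // A new session means a new connection, and with SQLite ":memory:"
                // a new connection means a brand-new empty database.
                _session = _factory.OpenSession();

                // Export the schema onto this session's connection. The overload
                // shown here is the NHibernate 2.1-era form; check your version.
                new SchemaExport(_cfg).Execute(false, true, false, _session.Connection, null);
            }

            [TearDown]
            public void DiscardDatabase()
            {
                // Closing the connection drops the whole in-memory database,
                // including anything a deliberately failing test left behind.
                _session.Dispose();
            }
        }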

    Read the article

  • Compiling C lib and OCaml exe using it, all using ocamlfind

    - by Magnus
    I'm trying to work out how to use ocamlfind to compile a C library and an OCaml executable using that C library. I put together a set of rather silly example files.

        % cat sillystubs.c
        #include <stdio.h>

        #include <caml/mlvalues.h>
        #include <caml/memory.h>
        #include <caml/alloc.h>
        #include <caml/custom.h>

        value caml_silly_silly( value unit )
        {
            CAMLparam1( unit );
            printf( "%s\n", __FILE__ );
            CAMLreturn( Val_unit );
        }

        % cat silly.mli
        external silly : unit -> unit = "silly_silly"

        % cat foo.ml
        open Silly
        open String

        let _ =
            print_string "About to call into silly";
            silly ();
            print_string "Called into silly"

    I believe the following is the way to compile up the library:

        % ocamlfind ocamlc -c sillystubs.c
        % ar rc libsillystubs.a sillystubs.o
        % ocamlfind ocamlc -c silly.mli
        % ocamlfind ocmalc -a -o silly.cma -ccopt -L${PWD} -cclib -lsillystubs

    Now I don't seem to be able to use the created library though:

        % ocamlfind ocamlc -custom -o foo foo.cmo silly.cma
        /usr/bin/ld: cannot find -lsillystubs
        collect2: ld returned 1 exit status
        File "_none_", line 1, characters 0-1:
        Error: Error while building custom runtime system

    The OCaml tools are somewhat mysterious to me, so any pointers would be most welcome.

    Read the article

  • SerializationException with custom GenericIdentiy?

    - by MunkiPhD
    I'm trying to implement my own GenericIdentity implementation but keep receiving the following error when it attempts to load the views (I'm using ASP.NET MVC):

        System.Runtime.Serialization.SerializationException was unhandled by user code
          Message="Type is not resolved for member 'OpenIDExtendedIdentity,Training.Web, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'."
          Source="WebDev.WebHost"

    I've ended up with the following class:

        [Serializable]
        public class OpenIDExtendedIdentity : GenericIdentity
        {
            private string _nickName;
            private int _userId;

            public OpenIDExtendedIdentity(String name, string nickName, int userId)
                : base(name, "OpenID")
            {
                _nickName = nickName;
                _userId = userId;
            }

            public string NickName
            {
                get { return _nickName; }
            }

            public int UserID
            {
                get { return _userId; }
            }
        }

    In my Global.asax I read a cookie's serialized value into a memory stream and then use that to create my OpenIDExtendedIdentity object. I ended up with this attempt at a solution after countless tries of various sorts. It works correctly up until the point where it attempts to render the views. What I'm essentially attempting to achieve is the ability to do the following (while using the default Role manager from ASP.NET):

        User.Identity.UserID
        User.Identity.NickName
        ... etc.

    I've listed some of the sources I've read in my attempt to get this resolved. Some people have reported a Cassini error, but it seems like others have had success implementing this type of custom functionality - thus a boggling of my mind.

        http://forums.asp.net/p/32497/161775.aspx
        http://ondotnet.com/pub/a/dotnet/2004/02/02/effectiveformsauth.html
        http://social.msdn.microsoft.com/Forums/en-US/netfxremoting/thread/e6767ae2-dfbf-445b-9139-93735f1a0f72

    Read the article

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This can also not be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM. How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, due to not being able to control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service. The problem I am running into is for file sizes from 50MB or so onwards, performance is absolutely killed because of GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no real great difference in results. Smaller chunks are killed by the latency involved with round-tripping, and larger chunks just postpone the slowdown until GC kicks in. Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

    Read the article

  • WPF, C# - Making Intellisense/Autocomplete list, fastest way to filter list of strings

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete like the one you find in Visual Studio. It's all fine up until when the list contains probably 2000+ items. I'm using a simple LINQ statement for doing the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is also virtualised, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and on every key up I execute the filtering + binding, there's this semi-race condition, or out-of-sync filtering: the first key stroke's filtering could still be doing its filtering or binding work while the fourth key stroke is also doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve a seamless filtering much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimised in some kind of index that could speed up searching. Any suggestions or code samples are much welcomed. Thanks.
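    One way to stop a slow, earlier keystroke from overwriting the results of a later one - shown here only as a sketch of the "discard stale results" idea, with hypothetical names such as CompletionPopup, ApplyFilter and completionListBox - is to tag each filter run with a version number, do the work off the UI thread, and ignore the result unless its version is still the latest when it completes.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;
        using System.Windows.Controls;

        // Sketch: version-stamped filtering so only the latest keystroke updates the ListBox.
        public class CompletionPopup : UserControl
        {
            private readonly ListBox completionListBox = new ListBox();     // hypothetical list control
            private List<CompletionItem> allItems = new List<CompletionItem>();
            private int filterVersion;

            // Called from the editor's KeyUp handler with the current word.
            public void ApplyFilter(string currentWord)
            {
                int version = Interlocked.Increment(ref filterVersion);

                ThreadPool.QueueUserWorkItem(state =>
                {
                    // The expensive part runs off the UI thread.
                    List<CompletionItem> filtered = allItems
                        .Where(s => s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0)
                        .OrderBy(s => s.FilterValue)
                        .ToList();

                    // A newer keystroke has started a newer run; drop this result.
                    if (version != filterVersion) return;

                    Dispatcher.BeginInvoke((Action)delegate
                    {
                        if (version != filterVersion) return;   // re-check on the UI thread
                        completionListBox.ItemsSource = filtered;
                    });
                });
            }
        }

        public class CompletionItem
        {
            public string FilterValue { get; set; }
        }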

    Read the article

  • C++ -- Is there an implicit cast here from Fred* to auto_ptr<Fred>?

    - by q0987
    Hello all, I saw the following code:

        #include <new>
        #include <memory>
        using namespace std;

        class Fred; // Forward declaration

        typedef auto_ptr<Fred> FredPtr;

        class Fred {
        public:
            static FredPtr create(int i)
            {
                return new Fred(i); // Is there an implicit cast here? If not, how can we return
                                    // a Fred* with return value as FredPtr?
            }
        private:
            Fred(int i=10) : i_(i) { }
            Fred(const Fred& x) : i_(x.i_) { }
            int i_;
        };

    Please see the question listed in function create. Thank you.

    Updated based on comments: Yes, the code cannot pass VC8.0:

        error C2664: 'std::auto_ptr<_Ty>::auto_ptr(std::auto_ptr<_Ty> &) throw()' : cannot convert parameter 1
        from 'Fred *' to 'std::auto_ptr<_Ty> &'

    The code was copied from the C++ FAQ 12.15. However, after replacing

        return new Fred(i);

    with

        return auto_ptr<Fred>(new Fred(i));

    the code can pass the VC8.0 compiler. But I am not sure whether or not this is a correct fix.

    Read the article
