Search Results

Search found 7338 results on 294 pages for 'useful'.

Page 260/294

  • Unselect Databound Combobox Winforms .NET

    - by joedotnot
    The problem: combobox is databound to a DataView, first item in the dataview is DataRowView whose fields are DBNull.Value; Combo DropdownStyle = ComboBoxStyle.DropDownList Loads fine, displays fine, selects fine, problem is to Unselect via code. Setting the SelectedIndex to 0 throws an exception. (Setting to -1 is a no-no as per msdn doco that says dont set SelectedIndex=-1 if databound) So how to unselect without throwing an exception ? For now i wrapped it into a try/catch to just ignore the error! EDIT: As asked by Hubeza, i worked on sample code to post. Did a stripped down version of the original code in C# (original is in VB.NET) and could NOT reproduce it either. Converted to VB.NET and could NOT reproduce it either ! In other words, SelectedIndex = 0 does work in the stripped down version! Currently further investigating what else could be wrong with the original code. EDIT2: Case Closed. Call me a stupid fool if you like, and apologies for wasting anyone's time - The error was originating from MyComboBox_SelectedIndexChanged event, which i neglected to check ! May as well post the sample in case anyone finds useful. private void LoadComboMethod() { DataTable dtFruit = new DataTable("FruitTable"); //define columns DataColumn colID = new DataColumn(); colID.DataType = typeof(Int32); //VB.NET GetType(Int32) colID.ColumnName = "ID"; DataColumn colDesc = new DataColumn(); colDesc.DataType = typeof(String); colDesc.ColumnName = "Description"; //add columns to table dtFruit.Columns.AddRange(new DataColumn[] { colID, colDesc }); //add rows DataRow row = dtFruit.NewRow(); row[colID] = 1; row[colDesc] = "Apples"; dtFruit.Rows.Add(row); row = dtFruit.NewRow(); row[colID] = 1; row[colDesc] = "Bananas"; dtFruit.Rows.Add(row); row = dtFruit.NewRow(); row[colID] = 1; row[colDesc] = "Oranges"; dtFruit.Rows.Add(row); //add extra blank row. DataRowView drv = dtFruit.DefaultView.AddNew(); drv.EndEdit(); //Bind combo box DataView dv = new DataView(dtFruit); dv.Sort = "ID ASC"; //ensure blank item on top cboFruit.DataSource = dv; cboFruit.DisplayMember = "Description"; cboFruit.ValueMember = "ID"; } private void UnselectComboMethod() { if (cboFruit.SelectedIndex > 0) { cboFruit.SelectedIndex = 0; } else { MessageBox.Show("no fruit selected"); } }
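
    For anyone hitting the same symptom, a common guard is to suppress the SelectedIndexChanged handler while the selection is being reset from code. A minimal C# sketch (the flag and handler names are assumptions, not part of the original sample):

        private bool suppressSelectionEvents = false; // assumed form-level field

        private void UnselectComboMethod()
        {
            if (cboFruit.SelectedIndex > 0)
            {
                suppressSelectionEvents = true;          // the reset below will raise SelectedIndexChanged
                try { cboFruit.SelectedIndex = 0; }
                finally { suppressSelectionEvents = false; }
            }
        }

        private void cboFruit_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (suppressSelectionEvents) return;         // ignore programmatic resets
            // ... normal selection handling ...
        }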


  • Which mobile operating system should I code for?

    - by samgoody
    It seems as though mobile computing has fully arrived. I would like to rewrite two of our programs for mobile devices, but am a bit lost as to which platform to target. Complicating this decision: I would need to learn the relevant languages and IDEs - my coding to date has been almost all web based (PHP, JS, Actionscript, etc. Some ASPX). Most users seem to be religious about their mobile decision, so oral conversations leave me more confused than enlightened. I do not yet own a smartphone - will have to buy one once I know which platform to be aiming for. Both of my programs are more for business users (one is only useful for C.P.A.s). I am a single developer, and cannot develop for more than one platform at a time. Getting it right is important. Based on what I've found on the web, I would've expected RIM to be a shoo-in, and the general order to be as follows: RIM Blackberry - More of them than any other brand. Despite naysayers, they've had double the sales (or perhaps 5X the sales) of any other smartphone, and have continued to grow. And, they have business users. Android - According to Schmidt, they have outsold everyone else except RIM (though I can't find where I read that now), and they are just getting started. According to Comscore, they are already at 8% of the market and expected to hit Schmidt's claims within six months. Nokia - The largest worldwide. If they would just decide between Maemo and Symbian, I would be far less confused. iPhone - Much more competition from other apps, fewer sales to be had, and an overlord that can delay or cancel my app at any time. Is Cocoa hard to learn? Windows Mobile - Word is that version 7 will not be backwards compatible, and it is losing market share. Palm WebOS - Perhaps this should go first, as it is the only one that offers tools to make my life easy as a web application developer. No competition in the marketplace. But not very many users either. However, a search on StackOverflow shows a hugely disproportionate number of iPhone questions versus Blackberry. Likewise, there are clearly more apps on iPhone, so it must be getting developer love. What is the one platform I should develop for? Please back up your answer with your logic.


  • Web P2P video conference solution

    - by dtroy
    I'm looking for the best possible solution which will allow me to incorporate a live video/audio conference between 2 users (only 2 at this point) into a flash gaming platform. The video chat is not just an extra feature, it's the main one. I'm mainly looking at open source implementations or something I'll be able to implement myself, but will consider commercial products if they are exactly what I need. Here are a few things I've looked at, but so far, I didn't find any of them good enough: Flash player 10's P2P capabilities sound promising, but I am aware of the fact that Adobe has not released any information on the RTMFP protocol and that there is no commercial server which supports it at this point. Stream all the video/audio live through a flash server (not p2p), but from my personal experience you don't get a smooth conversation. I think TokBox uses this method. Java applets are a possible solution too (to perform p2p), but I don't think it will be a nice and elegant solution to combine them in the game at this point (and it requires the user to authorize them). BTW, I couldn't find any useful implementations. So, if you know of any, I'll look into them. Google Gmail Video Chat uses a custom (and proprietary) browser plug-in which does the p2p and streams the video/audio into the flash player. This is a possible solution, but I'd rather not implement the entire p2p protocol stack + browser plug-in at this stage and concentrate on other aspects of the game itself. I think they are using an XMPP-based protocol similar to Jingle, and they've released a Jingle library but without the video conferencing implementation. EDIT: In response to Branden: I am aware of Adobe Stratus. Stratus is a beta, hosted rendezvous service that aids establishing communications between Flash Player endpoints (RTMFP server). This current release of Stratus is prerelease and is designed for evaluation purposes only. The service is not final. There is no guarantee that the service will continue to exist in the future, and there is no information about the future cost. That's why I don't think it can be used as a commercial solution. At least not yet. I'd appreciate your suggestions and advice. Thanks!


  • How to fine tune a Membership Provider?

    - by Venemo
    After all the answers to my last question about fine-tuning turned out to be more useful than I expected, I thought that I would ask another similar question about the MembershipProviders as well. Okay, so firstly, to clarify: I know what a Membership, Role, and Profile provider is, how to implement my own, and how to configure them, and most of the things about them. Implementing a role and profile provider is pretty straightforward, because they only require simple CRUD most of the time. (A single line of LINQ is enough for about half of the RoleProvider's methods.) However, the Membership provider is a different beast. Many of you may realize that it violates the SR (Single Responsibility) principle, because it has to do EVERYTHING related to user management. While this leaves a lot of room for customizations, it has its downsides as well. There is no information on the Internet about what their EXACT expected behaviour is, such as when they should throw exceptions or simply return null, and stuff like that. I use this sample implementation for reference, but it also contains several contradictions. For example, it uses its own ValidateUser method for checking credentials in the ChangePassword method. But ValidateUser also updates the user's LastLoginDate to the current date. So, does the framework expect that I set it in my own provider as well, or is it simply a mistake in the sample? The other is: the ChangePassword method throws an exception every time it validates the new password, but CreateUser doesn't ever throw an exception, it simply returns false. And last, but not least: it counts the invalid password attempts of the user and locks them if they pass a threshold. While this is good, it requires manual action to unlock the users. Is it a problem if my provider automatically unlocks the user after a certain amount of time? (EDIT) I almost forgot: the CreateUser method in the sample inserts the ID from the method parameter. I actually think this is bad practice, because I use integers with auto increment as IDs, so inserting them from some method parameter is not an option. Should I just ignore the parameter, or require that its value is null and throw an exception if it isn't? All in all, does ASP.NET have any assumptions about the behaviour of a MembershipProvider? Is there any documentation which describes when I should throw an exception or just return null? I also tried to find a set of generic unit tests which would provide some guidance about the expected behaviour, but no luck; I found plenty of articles about "Unit testing is good" and "How to unit test a MembershipProvider", but not one with any actual tests. Thanks in advance to everyone!
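
    On the auto-unlock question, one approach is to treat the lockout as expired once a configured window has passed and clear it during validation. A rough C# sketch (the field and helper names are assumptions, not taken from the referenced sample):

        // Assumed members of a custom MembershipProvider; LoadUser, UnlockUser and
        // CheckPassword are hypothetical data-access helpers.
        private readonly TimeSpan lockoutWindow = TimeSpan.FromMinutes(30);

        public override bool ValidateUser(string username, string password)
        {
            UserRecord user = LoadUser(username);              // hypothetical record type
            if (user == null) return false;

            if (user.IsLockedOut)
            {
                if (DateTime.UtcNow - user.LastLockoutDate < lockoutWindow)
                    return false;                              // still inside the lockout window
                UnlockUser(username);                          // clears the flag and the failure counter
            }
            return CheckPassword(password, user);
        }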


  • Detect Client Computer name when an RDP session is open

    - by Ubiquitous Che
    Hey all, My manager has pointed out to me a few nifty things that one of our accounting applications can do because it can load different settings based on the machine name of the host and the machine name of the client when the package is opened in an RDP session. We want to provide similar functionality in one of my company's applications. I've found out on this site how to detect if I'm in an RDP session, but I'm having trouble finding information anywhere on how to detect the name of the client computer. Any pointers in the right direction would be great. I'm coding in C# for .NET 3.5 EDIT The sample code I cobbled together from the advice below - it should be enough for anyone who has a use for the WTSQuerySessionInformation to get a feel for what's going on. Note that this isn't necessarily the best way of doing it - just a starting point that I've found useful. When I run this locally, I get boring, expected answers. When I run it on our local office server in an RDP session, I see my own computer name in the WTSClientName property. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; namespace TerminalServicesTest { class Program { const int WTS_CURRENT_SESSION = -1; static readonly IntPtr WTS_CURRENT_SERVER_HANDLE = IntPtr.Zero; static void Main(string[] args) { StringBuilder sb = new StringBuilder(); uint byteCount; foreach (WTS_INFO_CLASS item in Enum.GetValues(typeof(WTS_INFO_CLASS))) { Program.WTSQuerySessionInformation( WTS_CURRENT_SERVER_HANDLE, WTS_CURRENT_SESSION, item, out sb, out byteCount); Console.WriteLine("{0}({1}): {2}", item.ToString(), byteCount, sb); } Console.WriteLine(); Console.WriteLine("Press any key to exit..."); Console.ReadKey(); } [DllImport("Wtsapi32.dll")] public static extern bool WTSQuerySessionInformation( IntPtr hServer, int sessionId, WTS_INFO_CLASS wtsInfoClass, out StringBuilder ppBuffer, out uint pBytesReturned); } enum WTS_INFO_CLASS { WTSInitialProgram = 0, WTSApplicationName = 1, WTSWorkingDirectory = 2, WTSOEMId = 3, WTSSessionId = 4, WTSUserName = 5, WTSWinStationName = 6, WTSDomainName = 7, WTSConnectState = 8, WTSClientBuildNumber = 9, WTSClientName = 10, WTSClientDirectory = 11, WTSClientProductId = 12, WTSClientHardwareId = 13, WTSClientAddress = 14, WTSClientDisplay = 15, WTSClientProtocolType = 16, WTSIdleTime = 17, WTSLogonTime = 18, WTSIncomingBytes = 19, WTSOutgoingBytes = 20, WTSIncomingFrames = 21, WTSOutgoingFrames = 22, WTSClientInfo = 23, WTSSessionInfo = 24, WTSSessionInfoEx = 25, WTSConfigInfo = 26, WTSValidationInfo = 27, WTSSessionAddressV4 = 28, WTSIsRemoteSession = 29 } }


  • Uses of a C++ Arithmetic Promotion Header

    - by OlduvaiHand
    I've been playing around with a set of templates for determining the correct promotion type given two primitive types in C++. The idea is that if you define a custom numeric template, you could use these to determine the return type of, say, the operator+ function based on the class passed to the templates. For example: // Custom numeric class template <class T> struct Complex { Complex(T real, T imag) : r(real), i(imag) {} T r, i; // Other implementation stuff }; // Generic arithmetic promotion template template <class T, class U> struct ArithmeticPromotion { typedef typename X type; // I realize this is incorrect, but the point is it would // figure out what X would be via trait testing, etc }; // Specialization of arithmetic promotion template template <> class ArithmeticPromotion<long long, unsigned long> { typedef typename unsigned long long type; } // Arithmetic promotion template actually being used template <class T, class U> Complex<typename ArithmeticPromotion<T, U>::type> operator+ (Complex<T>& lhs, Complex<U>& rhs) { return Complex<typename ArithmeticPromotion<T, U>::type>(lhs.r + rhs.r, lhs.i + rhs.i); } If you use these promotion templates, you can more or less treat your user defined types as if they're primitives with the same promotion rules being applied to them. So, I guess the question I have is would this be something that could be useful? And if so, what sorts of common tasks would you want templated out for ease of use? I'm working on the assumption that just having the promotion templates alone would be insufficient for practical adoption. Incidentally, Boost has something similar in its math/tools/promotion header, but it's really more for getting values ready to be passed to the standard C math functions (that expect either 2 ints or 2 doubles) and bypasses all of the integral types. Is something that simple preferable to having complete control over how your objects are being converted? TL;DR: What sorts of helper templates would you expect to find in an arithmetic promotion header beyond the machinery that does the promotion itself?
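
    As a point of comparison, on compilers with C++0x/C++11 support the promotion rule itself can be derived from the language rather than spelled out in specializations. A sketch (not a drop-in replacement for the templates above, and it assumes decltype is available):

        // Let the compiler apply the usual arithmetic conversions for operator+
        // and expose the result as the promoted type.
        template <class T, class U>
        struct ArithmeticPromotion {
            typedef decltype(T() + U()) type;
        };

        // ArithmeticPromotion<long long, unsigned long>::type is then whatever
        // the target platform's built-in promotion rules actually produce.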


  • Which management tools would you recommend for software development?

    - by Robert Schneider
    What would you generally recommend for software development? Which combination do you use or would you recommend? I assume that tools are needed for Version control, issues/bugs/tasks, Release management, Requirement management. Tools for Test management and Project management should be accounted for as well, I guess. Did I forget anything? Maybe tools for Continuous Integration. I'm not interested in a halfway combination of one or two tools like Subversion + Bugzilla (I know they are good, but for a company they might not be sufficient). And also tools like make/ant shouldn't be taken into account. I'd like to know a combination that covers everything that is important for professional software development. However, it could be a single tool of course if it covers all the management issues. What do you think would be a good combination? I assume a combination should be regarded as good if the tools themselves are good but also if they have good integrations. Update: Something like Ant is just a script, not a management tool. Okay: we do already have Perforce. But this is somehow generic. We have different projects that use C, VS/.NET, Python, PHP, and we will be starting new projects in Java. Plenty of languages and frameworks, though some are going to be legacy. So the tools should be generic. Obviously we don't want to use Bugzilla for one project and Jira for another. The management tools have to be generic. Further thoughts: Think of SourceForge: for each project they offer a version control system, an issue tracker, a wiki, ... totally independent of what the project is about. Those are some management tools that a project usually needs. I would say that most companies need such a set of tools. But I could imagine, if a company has several projects, they need further tools too: for Project management, Quality management, ... And I could imagine there are some recommendable combinations of tools. A not really appropriate comparison to this could be LAMP (Linux, Apache, MySQL, PHP): a combination of software that has evolved because the pieces work very well together, and are pretty useful for a lot of applications (or rather web sites). So maybe there are some recommendable management tool combinations that also fit together like LAMP. The integration is important. Thank you for the incredibly fast and helpful responses!!!


  • Why am I seeing a crash when trying to call CDHtmlDialog::OnInitDialog()

    - by Tim
    I added a helpAbout menu item to my mfc app. I decided to make the ddlg derive from CDHTMLDialog. I override the OnInitDialog() method in my derived class and the first thing I do is call the parent's OnInitDialog() method. I then put in code that sets the title. On some machines this works fine, but on others it crashes in the call to CDHtmlDialog::OnInitDialog() - Trying to read a null pointer. the call stack has nothing useful - it is in mfc90.dll Is this a potential problem with mismatches of mfc/win32 dlls? It works on my vista machines but crashes on a win2003 server box. BOOL HTMLAboutDlg::OnInitDialog() { // CRASHES on the following line CDHtmlDialog::OnInitDialog(); CString title = "my title"; // example of setting title // i try to get version info //set the title CModuleVersion ver; char filename[ _MAX_PATH ]; GetModuleFileName( AfxGetApp()->m_hInstance, filename, _MAX_PATH ); ver.GetFileVersionInfo(filename); // get version from VS_FIXEDFILEINFO struct CString s; s.Format("Version: %d.%d.%d.%d\n", HIWORD(ver.dwFileVersionMS), LOWORD(ver.dwFileVersionMS), HIWORD(ver.dwFileVersionLS), LOWORD(ver.dwFileVersionLS)); CString version = ver.GetValue(_T("ProductVersion")); version.Remove(' '); version.Replace(",", "."); title = "MyApp - Version " + version; SetWindowText(title); return TRUE; // return TRUE unless you set the focus to a control } And here is the relevant header file: class HTMLAboutDlg : public CDHtmlDialog { DECLARE_DYNCREATE(HTMLAboutDlg) public: HTMLAboutDlg(CWnd* pParent = NULL); // standard constructor virtual ~HTMLAboutDlg(); // Overrides HRESULT OnButtonOK(IHTMLElement *pElement); HRESULT OnButtonCancel(IHTMLElement *pElement); // Dialog Data enum { IDD = IDD_DIALOG_ABOUT, IDH = IDR_HTML_HTMLABOUTDLG }; protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support virtual BOOL OnInitDialog(); DECLARE_MESSAGE_MAP() DECLARE_DHTML_EVENT_MAP() }; I can't figure out what is going on - why it works on some machins and crashes on others. Both have VS2008 installed EDIT: VS versions VISTA - no crashes 9.0.30729.1 SP 2003 server: (crashes) 9.0.21022.8 RTM


  • Flipping PDF pages in QuartzDemo application

    - by BittenApple
    I am working on modifying the quartzdemo application so that it can show another page from a multipage pdf on tap screen event. So far I have stripped it down to the point where it displays the PDF on any page as initial page, but I can't seem to get it figured out how to make the quartzview display anything else then the initial page. I'm guessing that the pushViewController has something to do with this. Anyway, here's my method that draws the initial page and in sits in QuartzImages.m file: -(void)drawNewPage:(CGContextRef)myContext { CGContextTranslateCTM(myContext, 0.0, self.bounds.size.height); CGContextScaleCTM(myContext, 1.0, -1.0); CGContextSetRGBFillColor(myContext, 255, 255, 255, 1); CGContextFillRect(myContext, CGRectMake(0, 0, 320, 412)); CGPDFPageRef page = CGPDFDocumentGetPage(pdfFajl, pageNumber); CGContextSaveGState(myContext); CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true); CGContextConcatCTM(myContext, pdfTransform); CGContextDrawPDFPage(myContext, page); CGContextRestoreGState(myContext); } This gets fired up in MainViewController.m so: - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // ### DISPLAY DESIRED PAGE QuartzViewController *targetViewController = [self controllerAtIndexPath:indexPath]; if ([targetViewController.quartzView isKindOfClass:[QuartzPDFView1 class]]) { ((QuartzPDFView1 *)targetViewController.quartzView).pageNumber = 1; CFURLRef pdfURL = CFBundleCopyResourceURL(CFBundleGetMainBundle(), CFSTR("mypdftest.pdf"), NULL, NULL); ((QuartzPDFView1 *)targetViewController.quartzView).pdfFajl = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL); CFRelease(pdfURL); } [[self navigationController] pushViewController:targetViewController animated:YES]; } Since I'm fairly new to the iPhone development (I'm currently only on page 123 in "beginning iphone 3 development" book) these controllers confuse me as hell. I have setup an event which gets fired up fine on screen tap. The event sits in QuartzView.m and I get a simple alert on screen tap showing that the tap was registered. I would like to use this method to draw a new page instead of showing the alert, page number doesn't matter, I'll work on incrementing (page flipping) later, I need an example of how to call these methods (if its even possible). These are defined in QuartzView.h and I planned on using them in the tap event method: CGPDFDocumentRef pdfFajl; int pageNumber; Thankful for any useful input from "the internets" :)
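
    Assuming QuartzPDFView1 exposes pageNumber and pdfFajl as shown and its drawing path ends up in drawNewPage:, a tap handler mostly needs to advance the page and invalidate the view. A hedged Objective-C sketch (method placement and names are guesses, not taken from the QuartzDemo sources):

        // Could live in the view class that owns pageNumber and pdfFajl.
        - (void)showNextPage
        {
            size_t pageCount = CGPDFDocumentGetNumberOfPages(pdfFajl);
            if (pageNumber < (int)pageCount) {
                pageNumber++;            // advance to the next page
            } else {
                pageNumber = 1;          // wrap back to the first page
            }
            [self setNeedsDisplay];      // triggers a redraw, which runs the drawing code again
        }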


  • ObjectDisposedException from core .NET code

    - by John
    I'm having this issue with a live app. (Unfortunately this is post-mortem debugging - I only have this stack trace. I've never seen this personally, nor am I able to reproduce). I get this Exception: message=Cannot access a disposed object. Object name: 'Button'. exceptionMessage=Cannot access a disposed object. Object name: 'Button'. exceptionDetails=System.ObjectDisposedException: Cannot access a disposed object. Object name: 'Button'. at System.Windows.Forms.Control.CreateHandle() at System.Windows.Forms.Control.get_Handle() at System.Windows.Forms.Control.PointToScreen(Point p) at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ButtonBase.WndProc(Message& m) at System.Windows.Forms.Button.WndProc(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) exceptionSource=System.Windows.Forms exceptionTargetSite=Void CreateHandle() It looks like a mouse event is arriving at a form after the form has been disposed. Note there is none of my code in this stack trace. The only weird (?) thing I'm doing, is that I do tend to Dispose() Forms quite aggressively when I use them with ShowModal() (see "Aside" below). But I only do this after ShowModal() has returned (that should be safe right)? I think I read that events might be queued up in the event queue, but I can't believe this would be the problem. I mean surely the framework must be tolerant to old messages? I can well imagine that under stress messages might back-log and surely the window might go away at any time? Any ideas? If you could even suggest ways of reproducing, that might be useful. John Aside: TBH I've never quite understood whether calling Dispose() after Form.ShowDialog() is strictly necessary - the MSDN docs for ShowDialog() are to my mind a bit ambiguous.


  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws: I manually annotated the paws (RF=right front, RH=right hind, LF=left front, LH=left hind). As you can see, there's a clearly repeating pattern and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated. My initial thought was to use heuristics to do the sorting, like: There's a ~60-40% ratio in weight bearing between the front and hind paws; The hind paws are generally smaller in surface; The paws are (often) spatially divided in left and right. However, I’m a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn’t thought of. They also won’t be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like. Based on the answers I received on my question about peak detection within the paw, I’m hoping there are more advanced solutions to sort the paws. Especially because the pressure distribution and the progression thereof are different for each separate paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results with their corresponding paw. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (location on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is partially measured (in space or time), can be ignored. They are only useful because they can help in recognizing a pattern, but won't be analyzed. And for anyone interested, I’m keeping a blog with all the updates regarding the project!


  • PNGException "crc corruption" when attempting to create ImageIcon objects from ZIP archive

    - by Nathan Strong
    I've got a ZIP file containing a number of PNG images that I am trying to load into my Java application as ImageIcon resources directly from the archive. Here's my code: import java.io.*; import java.util.Enumeration; import java.util.zip.*; import javax.swing.ImageIcon; public class Test { public static void main( String[] args ) { if( args.length == 0 ) { System.out.println("usage: java Test.java file.zip"); return; } File archive = new File( args[0] ); if( !archive.exists() || !archive.canRead() ) { System.err.printf("Unable to find/access %s.\n", archive); return; } try { ZipFile zip = new ZipFile(archive); Enumeration <? extends ZipEntry>e = zip.entries(); while( e.hasMoreElements() ) { ZipEntry entry = (ZipEntry) e.nextElement(); int size = (int) entry.getSize(); int count = (size % 1024 == 0) ? size / 1024 : (size / 1024)+1; int offset = 0; int nread, toRead; byte[] buffer = new byte[size]; for( int i = 0; i < count; i++ ) { offset = 1024*i; toRead = (size-offset > 1024) ? 1024 : size-offset; nread = zip.getInputStream(entry).read(buffer, offset, toRead); } ImageIcon icon = new ImageIcon(buffer); // boom -- why? } zip.close(); } catch( IOException ex ) { System.err.println(ex.getMessage()); } } } The sizes reported by entry.getSize() match the uncompressed size of the PNG files, and I am able to read the data out of the archive without any exceptions, but the creation of the ImageIcon blows up. The stacktrace: sun.awt.image.PNGImageDecoder$PNGException: crc corruption at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699) at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707) at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234) at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) sun.awt.image.PNGImageDecoder$PNGException: crc corruption at sun.awt.image.PNGImageDecoder.getChunk(PNGImageDecoder.java:699) at sun.awt.image.PNGImageDecoder.getData(PNGImageDecoder.java:707) at sun.awt.image.PNGImageDecoder.produceImage(PNGImageDecoder.java:234) at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:246) at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:172) at sun.awt.image.ImageFetcher.run(ImageFetcher.java:136) Can anyone shed some light on it? Google hasn't turned up any useful information.
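
    One detail worth checking in the loop above: each call to zip.getInputStream(entry) returns a fresh stream that starts at the beginning of the entry, so every chunk read actually comes from offset 0. A sketch that fills the buffer from a single stream instead (same imports as the code above, meant to replace the chunked loop inside the existing try block):

        // Read the whole entry through one InputStream; read() may return fewer
        // bytes than requested, so loop until the buffer is full.
        byte[] buffer = new byte[size];
        InputStream in = zip.getInputStream(entry);
        try {
            int offset = 0;
            while (offset < size) {
                int nread = in.read(buffer, offset, size - offset);
                if (nread < 0) break;   // entry ended earlier than expected
                offset += nread;
            }
        } finally {
            in.close();
        }
        ImageIcon icon = new ImageIcon(buffer);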


  • How can I use functools.partial on multiple methods on an object, and freeze parameters out of order

    - by Joseph Garvin
    I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameter being frozen (think of it as generalizing partial to apply to classes). I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works: >>> def foo(x, y): ... print x, y ... >>> bar = bind(foo, y=3) >>> bar(2) 2 3 But my proxy class does not work, and I'm not sure why: >>> class Foo(object): ... def bar(self, x, y): ... print x, y ... >>> a = Foo() >>> b = PureProxy(a, bar=bind(Foo.bar, y=3)) >>> b.bar(2) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: bar() takes exactly 3 arguments (2 given) I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows. from types import MethodType class PureProxy(object): def __init__(self, underlying, **substitutions): self.underlying = underlying for name in substitutions: subst_attr = substitutions[name] if hasattr(subst_attr, "underlying"): setattr(self, name, MethodType(subst_attr, self, PureProxy)) def __getattribute__(self, name): return getattr(object.__getattribute__(self, "underlying"), name) def bind(f, *args, **kwargs): """ Lets you freeze arguments of a function be certain values. Unlike functools.partial, you can freeze arguments by name, which has the bonus of letting you freeze them out of order. args will be treated just like partial, but kwargs will properly take into account if you are specifying a regular argument by name. """ argspec = inspect.getargspec(f) argdict = copy(kwargs) if hasattr(f, "im_func"): f = f.im_func args_idx = 0 for arg in argspec.args: if args_idx >= len(args): break argdict[arg] = args[args_idx] args_idx += 1 num_plugged = args_idx def new_func(*inner_args, **inner_kwargs): args_idx = 0 for arg in argspec.args[num_plugged:]: if arg in argdict: continue if args_idx >= len(inner_args): # We can't raise an error here because some remaining arguments # may have been passed in by keyword. break argdict[arg] = inner_args[args_idx] args_idx += 1 f(**dict(argdict, **inner_kwargs)) new_func.underlying = f return new_func
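
    For the out-of-order freezing alone, it may help to note that functools.partial already accepts keyword arguments, which binds parameters by name regardless of position (standard library behaviour, shown here only for comparison with the custom bind):

        # Python 2, matching the snippets above.
        from functools import partial

        def foo(x, y):
            print x, y

        bar = partial(foo, y=3)   # freeze the second parameter by name
        bar(2)                    # prints: 2 3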


  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code—which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary. So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy. So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.


  • What's the correct way to use Cakephp urls?

    - by Pichan
    Hello all, it's my first post here :) I'm having some difficulties with dealing with urls and parameters. I've gone through the router class api documentation over and over again and found nothing useful. First of all, I'd like to know if there is any 'universal' format in CakePHP(1.3) for handling urls. I'm currently handling all my urls as simple arrays(in the format that Router::url and $html-link accepts) and it's easy as long as I only need to pass them as arguments to cake's own methods. It usually gets tricky if I need something else. Mainly I'm having problems with converting string urls to the basic array format. Let's say I want to convert $arrayUrl to string and than again into url: $arrayUrl=array('controller'=>'SomeController','action'=>'someAction','someValue'); $url=Router::url($arrayUrl); //$url is now '/path/to/site/someController/someAction/someValue' $url=Router::normalize($url); //remove '/path/to/site' $parsed=Router::parse($url); /*$parsed is now Array( [controller] => someController [action] => someAction [named] => Array() [pass] => Array([0] => someValue) [plugin] => ) */ That seems an awful lot of code to do something as simple as to convert between 2 core formats. Also, note that $parsed is still not in the same as $arrayUrl. Of course I could tweak $parsed manually and actually I've done that a few times as a quick patch but I'd like to get to the bottom of this. I also noticed that when using prefix routing, $this-params in controller has the prefix embedded in the action(i.e. [action] = 'admin_edit') and the result of Router::parse() does not. Both of course have the prefix in it's own key. To summarize, how do I convert an url between any of these 3(or 4, if you include the prefix thing) mentioned formats the right way? Of course it would be easy to hack my way through this, but I'd still like to believe that cake is being developed by a bunch of people who have a lot more experience and insight than me, so I'm guessing there's a good reason for this "perceived misbehavior". I've tried to present my problem as good as I can, but due to my rusty english skills, I had to take a few detours :) I'll explain more if needed.


  • Trying to retrieve the contents of UITableView cell on selection by user.

    - by chubsta
    Hi, this is my first post here and as i am very new (2 weeks in...) to iPhone (or any!) development i am a little unsure as to how much detail is needed or relevant, so please forgive me if I dont provide enough to be useful at this stage... Anyway, on to my problem - I have set up a UITableView-based app, which has a .plist for holding the data. The .plist has a dictionary as its root, followed by a number of Arrays, each containing the data-strings that display in the table. Everything is working fine until the point where i select a row. I have set it up so i get an alert with the results of the button press and it is here that i want to see the contents of the cell being produced. Instead of the expected string eg. "data line 1", all i get is the number of the row within the section. I have gone backwards and forwards but dont seem to be getting anywhere although i am sure it is something simple. The code compiles fine, with no warnings or errors, and if i can just get the string of the selected cell i will be well on my way, so any help appreciated. I know that I need to do more following the 'NSUInteger row = [indexPath row]; part but thats where my mind dries up... here is the relevant 'selection' section, if i need to post more code please let me know, and i really appreciate any help with this...(please forgive my 'alert' message, i was just happy at that point that i get SOMETHING back from the row selection!) (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { [tableView deselectRowAtIndexPath:indexPath animated:YES]; NSUInteger row = [indexPath row]; NSString *message = [[NSString alloc] initWithFormat: @"You selected Cell %d from this Section, "@"which is a very good choice indeed!" @"Unfortunately I can't work out how to get the info out of the cell so it's not much use at the moment!" @"Still, this is a good chance to see how much space I have in an alert box!", row]; UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"My God! It works..." message:message delegate:nil cancelButtonTitle:@"You are awesome Karl!!" otherButtonTitles:nil]; [alert show]; [message release]; [alert release]; }
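
    The selected row's text is usually pulled from the table's data source (the same arrays that back cellForRowAtIndexPath:) rather than from the cell itself. A hedged sketch for the inside of didSelectRowAtIndexPath: (the property names are placeholders for whatever the .plist was loaded into):

        NSUInteger section = [indexPath section];
        NSUInteger row = [indexPath row];
        // Assumed structure: an array of section arrays built from the .plist.
        NSArray *sectionArray = [self.tableData objectAtIndex:section];
        NSString *selectedText = [sectionArray objectAtIndex:row];
        // Or, if reading it off the visible cell is preferred:
        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
        NSString *cellText = cell.textLabel.text;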


  • Help with infrequent segmentation fault in accessing struct

    - by Sarah
    I'm having trouble debugging a segmentation fault. I'd appreciate tips on how to go about narrowing in on the problem. The error appears when an iterator tries to access an element of a struct Infection, defined as: struct Infection { public: explicit Infection( double it, double rt ) : infT( it ), recT( rt ) {} double infT; // infection start time double recT; // scheduled recovery time }; These structs are kept in a special structure, InfectionMap: typedef boost::unordered_multimap< int, Infection > InfectionMap; Every member of class Host has an InfectionMap carriage. Recovery times and associated host identifiers are kept in a priority queue. When a scheduled recovery event arises in the simulation for a particular strain s in a particular host, the program searches through carriage of that host to find the Infection whose recT matches the recovery time (double recoverTime). (For reasons that aren't worth going into, it's not as expedient for me to use recT as the key to InfectionMap; the strain s is more useful, and coinfections with the same strain are possible.) assert( carriage.size() > 0 ); pair<InfectionMap::iterator,InfectionMap::iterator> ret = carriage.equal_range( s ); InfectionMap::iterator it; for ( it = ret.first; it != ret.second; it++ ) { if ( ((*it).second).recT == recoverTime ) { // produces seg fault carriage.erase( it ); } } I get a "Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address..." on the line specified above. The recoverTime is fine, and the assert(...) in the code is not tripped. As I said, this seg fault appears 'randomly' after thousands of successful recovery events. How would you go about figuring out what's going on? I'd love ideas about what could be wrong and how I can further investigate the problem.
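
    Separately from the root cause, the loop shown erases the element the iterator points to and then increments that same iterator, which is undefined behaviour. The usual pattern advances the iterator before the erase; a sketch of the loop rewritten that way:

        pair<InfectionMap::iterator, InfectionMap::iterator> ret = carriage.equal_range( s );
        InfectionMap::iterator it = ret.first;
        while ( it != ret.second ) {
            if ( it->second.recT == recoverTime ) {
                carriage.erase( it++ );   // post-increment keeps a valid iterator past the erased element
            } else {
                ++it;
            }
        }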


  • right usage of std::uncaught_exception in a destructor

    - by Vokuhila-Oliba
    There are some articles concluding "never throw an exception from a destructor", and "std::uncaught_exception() is not useful", for example: http://www.gotw.ca/gotw/047.htm (written by Herb Sutter) But it seems that I am not getting the point. So I wrote a small testing example (see below). Since everything is fine with the testing example I would very appreciate some comments regarding what might be wrong with it. testing results: ./main Foo::~Foo(): caught exception - but have pending exception - ignoring int main(int, char**): caught exception: from int Foo::bar(int) ./main 1 Foo::~Foo(): caught exception - but *no* exception is pending - rethrowing int main(int, char**): caught exception: from Foo::~Foo() // file main.cpp // build with e.g. "make main" // tested successfully on Ubuntu-Karmic with g++ v4.4.1 #include <iostream> class Foo { public: int bar(int i) { if (0 == i) throw(std::string("from ") + __PRETTY_FUNCTION__); else return i+1; } ~Foo() { bool exc_pending=std::uncaught_exception(); try { bar(0); } catch (const std::string &e) { // ensure that no new exception has been created in the meantime if (std::uncaught_exception()) exc_pending = true; if (exc_pending) { std::cerr << __PRETTY_FUNCTION__ << ": caught exception - but have pending exception - ignoring" << std::endl; } else { std::cerr << __PRETTY_FUNCTION__ << ": caught exception - but *no* exception is pending - rethrowing" << std::endl; throw(std::string("from ") + __PRETTY_FUNCTION__); } } } }; int main(int argc, char** argv) { try { Foo f; // will throw an exception in Foo::bar() if no arguments given. Otherwise // an exception from Foo::~Foo() is thrown. f.bar(argc-1); } catch (const std::string &e) { std::cerr << __PRETTY_FUNCTION__ << ": caught exception: " << e << std::endl; } return 0; }


  • Restoring multiple database backups in a transaction

    - by Raghu Dodda
    I wrote a stored procedure that restores as set of the database backups. It takes two parameters - a source directory and a restore directory. The procedure looks for all .bak files in the source directory (recursively) and restores all the databases. The stored procedure works as expected, but it has one issue - if I uncomment the try-catch statements, the procedure terminates with the following error: error_number = 3013 error_severity = 16 error_state = 1 error_message = DATABASE is terminating abnormally. The weird part is sometimes (it is not consistent) the restore is done even if the error occurs. The procedure: create proc usp_restore_databases ( @source_directory varchar(1000), @restore_directory varchar(1000) ) as begin declare @number_of_backup_files int -- begin transaction -- begin try -- step 0: Initial validation if(right(@source_directory, 1) <> '\') set @source_directory = @source_directory + '\' if(right(@restore_directory, 1) <> '\') set @restore_directory = @restore_directory + '\' -- step 1: Put all the backup files in the specified directory in a table -- declare @backup_files table ( file_path varchar(1000)) declare @dos_command varchar(1000) set @dos_command = 'dir ' + '"' + @source_directory + '*.bak" /s/b' /* DEBUG */ print @dos_command insert into @backup_files(file_path) exec xp_cmdshell @dos_command delete from @backup_files where file_path IS NULL select @number_of_backup_files = count(1) from @backup_files /* DEBUG */ select * from @backup_files /* DEBUG */ print @number_of_backup_files -- step 2: restore each backup file -- declare backup_file_cursor cursor for select file_path from @backup_files open backup_file_cursor declare @index int; set @index = 0 while(@index < @number_of_backup_files) begin declare @backup_file_path varchar(1000) fetch next from backup_file_cursor into @backup_file_path /* DEBUG */ print @backup_file_path -- step 2a: parse the full backup file name to get the DB file name. 
declare @db_name varchar(100) set @db_name = right(@backup_file_path, charindex('\', reverse(@backup_file_path)) -1) -- still has the .bak extension /* DEBUG */ print @db_name set @db_name = left(@db_name, charindex('.', @db_name) -1) /* DEBUG */ print @db_name set @db_name = lower(@db_name) /* DEBUG */ print @db_name -- step 2b: find out the logical names of the mdf and ldf files declare @mdf_logical_name varchar(100), @ldf_logical_name varchar(100) declare @backup_file_contents table ( LogicalName nvarchar(128), PhysicalName nvarchar(260), [Type] char(1), FileGroupName nvarchar(128), [Size] numeric(20,0), [MaxSize] numeric(20,0), FileID bigint, CreateLSN numeric(25,0), DropLSN numeric(25,0) NULL, UniqueID uniqueidentifier, ReadOnlyLSN numeric(25,0) NULL, ReadWriteLSN numeric(25,0) NULL, BackupSizeInBytes bigint, SourceBlockSize int, FileGroupID int, LogGroupGUID uniqueidentifier NULL, DifferentialBaseLSN numeric(25,0) NULL, DifferentialBaseGUID uniqueidentifier, IsReadOnly bit, IsPresent bit ) insert into @backup_file_contents exec ('restore filelistonly from disk=' + '''' + @backup_file_path + '''') select @mdf_logical_name = LogicalName from @backup_file_contents where [Type] = 'D' select @ldf_logical_name = LogicalName from @backup_file_contents where [Type] = 'L' /* DEBUG */ print @mdf_logical_name + ', ' + @ldf_logical_name -- step 2c: restore declare @mdf_file_name varchar(1000), @ldf_file_name varchar(1000) set @mdf_file_name = @restore_directory + @db_name + '.mdf' set @ldf_file_name = @restore_directory + @db_name + '.ldf' /* DEBUG */ print 'mdf_logical_name = ' + @mdf_logical_name + '|' + 'ldf_logical_name = ' + @ldf_logical_name + '|' + 'db_name = ' + @db_name + '|' + 'backup_file_path = ' + @backup_file_path + '|' + 'restore_directory = ' + @restore_directory + '|' + 'mdf_file_name = ' + @mdf_file_name + '|' + 'ldf_file_name = ' + @ldf_file_name restore database @db_name from disk = @backup_file_path with move @mdf_logical_name to @mdf_file_name, move @ldf_logical_name to @ldf_file_name -- step 2d: iterate set @index = @index + 1 end close backup_file_cursor deallocate backup_file_cursor -- end try -- begin catch -- print error_message() -- rollback transaction -- return -- end catch -- -- commit transaction end Does anybody have any ideas why this might be happening? Another question: is the transaction code useful ? i.e., if there are 2 databases to be restored, will SQL Server undo the restore of one database if the second restore fails?


  • Is there any reasonable use of a function returning an anonymous struct?

    - by Akanksh
    Here is an (artificial) example of using a function that returns an anonymous struct and does "something" useful: #include <iostream> template<typename T> T* func( T* t, float a, float b ) { if(!t) { t = new T; t->a = a; t->b = b; } else { t->a += a; t->b += b; } return t; } struct { float a, b; }* foo(float a, float b) { if(a==0) return 0; return func(foo(a-1,b), a, b); } int main() { std::cout << foo(5,6)->a << std::endl; std::cout << foo(5,6)->b << std::endl; void* v = (void*)(foo(5,6)); float* f = (float*)(v); //[1] delete f now because I know struct is floats only. std::cout << f[0] << std::endl; std::cout << f[1] << std::endl; delete[] f; return 0; } There are a few points I would like to discuss: As is apparent, this code leaks, is there anyway I can NOT leak without knowing what the underlying struct definition is? see Comment [1]. I have to return a pointer to an anonymous struct so I can create an instance of the object within the templatized function func, can I do something similar without returning a pointer? I guess the most important, is there ANY (real-world) use for this at all? As the example given above leaks and is admittedly contrived. By the way, what the function foo(a,b) does is, to return a struct containing two numbers, the sum of all numbers from 1 to a and the product of a and b. EDIT: Maybe the line new T could use a boost::shared_ptr somehow to avoid leaks, but I haven't tried that. Would that work?


  • Lazy loading the addthis script? (or lazy loading external js content dependent on already fired events)

    - by Keith Bentrup
    I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for firefox). Since the DOMContentLoaded event has already fired, the widget doesn't render properly. What to do? I could just use a script tag (slower)... or could I fire (in a cross browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible b/c I believe that (like jQuery) there are multiple tests of the content ready event, and so multiple simulated events would have to occur. Nonetheless, this is an interesting problem b/c I have seen a couple widgets now assume that you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a work around?? And/or are any of my assumptions wrong? Edit: Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation. This is about a specific problem. I'm not looking for yet another lazy load script or check if some dependencies are loaded script. Specifically this problem deals with external widgets that you do not have control over and may or may not be obfuscated delaying the load of the external widgets until they are needed or at least, til substantially after everything else has been loaded including other deferred elements b/c of the how the widget was written, precludes existing, typical lazy loading paradigms While it's esoteric, I have seen it happen with a couple widgets - where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms** though as numerous studies by yahoo, google, and amazon show it to be important to your user's experience. **My testing with hammerhead and personal experience indicates that this will be my savings in this case.


  • Dependency property does not work within a geometry in a controltemplate

    - by Erik Bongers
    I have a DepencencyProperty (a boolean) that works fine on an Ellipse, but not on an ArcSegment. Am I doing something that is not possible? Here's part of the xaml. Both the TemplateBindings of Origin and LargeArc do not work in the geometry. But the LargeArc DependencyProperty does work in the Ellipse, so my DependencyProperty seems to be set up correctly. <ControlTemplate TargetType="{x:Type nodes:TestCircle}"> <Canvas Background="AliceBlue"> <Ellipse Height="10" Width="10" Fill="Yellow" Visibility="{TemplateBinding LargeArc, Converter={StaticResource BoolToVisConverter}}"/> <Path Canvas.Left="0" Canvas.Top="0" Stroke="Black" StrokeThickness="3"> <Path.Data> <GeometryGroup> <PathGeometry> <PathFigure IsClosed="True" StartPoint="{TemplateBinding Origin}"> <LineSegment Point="150,100" /> <ArcSegment Point="140,150" IsLargeArc="{TemplateBinding LargeArc}" Size="50,50" SweepDirection="Clockwise"/> </PathFigure> </PathGeometry> </GeometryGroup> </Path.Data> </Path> </Canvas> </ControlTemplate> What I'm trying to build is a (sort of) pie-shaped usercontrol where the shape of the Pie is defined by DependencyProperties and the actual graphics used are in a template, so they can be replaced or customized. In other words: I would like the code-behind to be visual-free (which, I assume, is good separation). SOLUTION--------------------------(I'm not allowed to answer my own questions yet) I found the answer myself, and this can be useful for others encountering the same issue. This is why the TemplateBinding on the Geometry failed: A TemplateBinding will only work when binding a DependencyProperty to another DependencyProperty. Following article set me on the right track: http://blogs.msdn.com/b/liviuc/archive/2009/12/14/wpf-templatebinding-vs-relativesource-templatedparent.aspx The ArcSegment properties are no DependencyProperties. Thus, the solution to the above problem is to replace <ArcSegment Point="140,150" IsLargeArc="{TemplateBinding LargeArc}" with <ArcSegment Point="140,150" IsLargeArc="{Binding RelativeSource={RelativeSource TemplatedParent}, Path=LargeArc}" Colin, your working example where an 'ordinary' binding was used in the geometry set me on the right track. BTW, love the infographics and the construction of your UserControl in your blogpost. And, hey, that quick tip on code snippets, and especially on that DP attribute and the separation of those DPs into a partial class file is pure gold!


  • What are the types and inner workings of a query optimizer?

    - by Frank Developer
    As I understand it, most query optimizers are cost-based. Some can be influenced by hints like FIRST_ROWS(). Others are tailored for OLAP. Is it possible to know more detailed logic about how Informix IDS and SE's optimizers decide what's the best route for processing a query, other than SET EXPLAIN? Is there any documentation which illustrates the ranking of SELECT statements? I would imagine that "SELECT col FROM table WHERE ROWID = n" ranks 1st. What are the rest of them?.. If I'm not mistaking, Informix's ROWID is a SERIAL(INT) which allows for a max. of 2GB nrows, or maybe it uses INT9 for TB's nrows?.. However, I think Oracle uses HEX values for ROWID. Too bad ROWID can't be oftenly used, since a rows ROWID can change. So maybe ROWID is used by the optimizer as a counter? Perhaps, it could be used for implementing the query progress idea I mentioned in my "Begin viewing query results before query completes" question? For some reason, I feel it wouldn't be that difficult to report a query's progress while being processed, perhaps at the expense of some slight overhead, but it would be nice to know ahead of time: A "Google-like" estimate of how many rows meet a query's criteria, display it's progress every 100, 200, 500 or 1,000 rows, give users the ability to cancel it at anytime and start displaying the qualifying rows as they are being put into the current list, while it continues searching?.. This is just one example, perhaps we could think other neat/useful features, the ingridients are more or less there. Perhaps we could fine-tune each query with more granularity than currently available? OLTP queries tend to be mostly static and pre-defined. The "what-if's" are more OLAP, so let's try to add more control and intelligence to it? So, therefore, being able to more precisely control, not "hint-influence" a query is what's needed and therefore it would be necessary to know how the optimizers logic is programmed. We can then have Dynamic SELECT and other statements for specific situations! Maybe even tell IDS to read blocks of indexes nodes at-a-time instead of one-by-one, etc. etc.


  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analaysis This question is pertaining the usage of the HMAC routines in OpenSSL. Since Openssl documentation is a tad on the weak side in certain areas, profiling has revealed that using the: unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len, const unsigned char *d, int n, unsigned char *md, unsigned int *md_len); From here, shows 40% of my library runtime is devoted to creating and taking down **HMAC_CTX's behind the scenes. There are also two additional function to create and destroy a HMAC_CTX explicetly: HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called. HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required. These two function calls are prefixed with: The following functions may be used if the message is not completely stored in memory My data fits entirely in memory, so I choose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is made use of by using the following two functions: HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data). HMAC_Final() places the message authentication code in md, which must have space for the hash function output. The Scope of the Application My application generates a authentic (HMAC, which is also used a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web-servers and frameworks Windows / Linux as OS, nginx, Apache and IIS as webservers and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe, and potentially have resumeable processing state -- i.e., lightweight threads sharing a OS thread (which might leave thread local memory out of the picture). The Question How do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resume-able state way ? (2) is optional since I have all of the source-data present in one go, and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread local memory -- but how do I resuse the CTX's ? does the HMAC_final() call make the CTX reusable ?. (2) optional: in this case I would have to create a pool of CTX's. (3) how does the HMAC function do this ? does it create a CTX in the scope of the function call and destroy it ? Psuedocode and commentary will be useful.


  • AD - DirectoryServices: VBNET2.0 - Speaking architecture...

    - by Will Marcouiller
    I've been mandated to write an application to migrate the Active Directory access models to another environment. Here's the context: I'm stuck with VB.NET 2005 and .NET Framework 2.0; The application must use the Windows authenticated user to manage AD; The objects I have to handle are Groups, Users and OrganizationalUnits; I intend to use the Façade design pattern to provider ease of use and a fully reusable code; I plan to write a factory for each of the objects managed (group, ou, user); The use of Attributes should be useful here, I guess; As everything is about the DirectoryEntry class when accessing the AD, it seems a good candidate for generic types. Obligatory features: User creates new OUs manually; User creates new group manually; User creates new user (these users are services accounts) manually; Application reads an XML file which contains the OUs, groups and users to create; Application informs the user about the OUs, groups and users that shall be created; User specifies the domain environment where to migrate the XML input file designated objects; User makes changes if needed, and launches the task operations; Application performs required by the XML input file operations against the underlying AD as specified by the user; Application informs the user upon completion. Linear features: User fetches OUs, groups, users; User changes OUs, groups, users; User deletes OUs, groups, users; The application logs AD entries and operations performed, plus errors and exceptions; Nice-to-have features: Application rollbacks operations on error or exception. I've been working for weeks now to get acquainted with the AD and the System.DirectoryServices assembly. But I don't seem to find a way to be fully satisfied with what I'm doing and always looking for better. I have studied Bret de Smet's Linq to AD on CodePlex, but then again, I can't use it as I'm stuck with .NET 2.0, so no Linq! But I've learned about Attributes, and seen that he's working with generic types as he codes a DirectorySource class to perform the operations for OUs, groups and users. Any suggestions? Thanks for any help, code sample, ideas, architural solution, everything!

