Search Results

Search found 13867 results on 555 pages for 'avoid learning'.

Page 472/555 | < Previous Page | 468 469 470 471 472 473 474 475 476 477 478 479  | Next Page >

  • Codeigniter common templates

    - by Darthg8r
    Let's say I have a website with 100 different pages. Each page uses a common header and footer, and inside the header is some dynamic content that comes from a database. I'd like to avoid having code in every single controller and action that passes this common data into the view.

        function index() {
            // It sucks to have to include this on every controller action.
            $data['title'] = "This is the index page";
            $data['currentUserName'] = "John Smith";
            $this->load->view("main_view", $data);
        }

        function comments() {
            // It sucks to have to include this on every controller action.
            $data['title'] = "Comment list";
            $data['currentUserName'] = "John Smith";
            $this->load->view("comment_view", $data);
        }

    I realize that I could refactor the code so that the common parts are in a single function that each action calls. Doing so would reduce SOME of the pain, but it still doesn't feel right, since I'd still have to make that call every time. What's the correct way to handle this?

    Read the article

  • Really simple JSON serialization in .NET

    - by Evgeny
    I have some simple .NET objects I'd like to serialize to JSON and back again. The set of objects to be serialized is quite small and I control the implementation, so I don't need a generic solution that will work for everything. Since my assembly will be distributed as a library I'd really like to avoid a dependency on some third-party DLL: I just want to give users one assembly that they can reference. I've read the other questions I could find on converting to and from JSON in .NET. The recommended solution of JSON.NET does work, of course, but it requires distributing an extra DLL. I don't need any of the fancy features of JSON.NET. I just need to handle a simple object (or even dictionary) that contains strings, integers, DateTimes and arrays of strings and bytes. On deserializing I'm happy to get back a dictionary - it doesn't need to create the object again. Is there some really simple code out there that I could compile into my assembly to do this simple job? I've also tried System.Web.Script.Serialization.JavaScriptSerializer, but where it falls down is the byte array: I want to base64-encode it and even registering a converter doesn't let me easily accomplish that due to the way that API works (it doesn't pass in the name of the field).
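    One dependency-free option to consider (a sketch, not necessarily the poster's eventual solution): DataContractJsonSerializer from System.Runtime.Serialization.Json ships with .NET 3.5+, and the byte array issue can be sidestepped by exposing it through a base64 string property so the serializer never sees the raw byte[]. The Payload type and member names below are made up for illustration.

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Json;
        using System.Text;

        [DataContract]
        public class Payload                 // hypothetical example type
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public int Count { get; set; }
            [DataMember] public DateTime Created { get; set; }

            // The real byte[] is not serialized directly...
            [IgnoreDataMember] public byte[] Blob { get; set; }

            // ...instead it travels as base64 text under the same JSON name.
            [DataMember(Name = "Blob")]
            private string BlobBase64
            {
                get { return Blob == null ? null : Convert.ToBase64String(Blob); }
                set { Blob = value == null ? null : Convert.FromBase64String(value); }
            }
        }

        public static class Json
        {
            public static string Serialize<T>(T value)
            {
                var serializer = new DataContractJsonSerializer(typeof(T));
                using (var stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, value);
                    return Encoding.UTF8.GetString(stream.ToArray());
                }
            }

            public static T Deserialize<T>(string json)
            {
                var serializer = new DataContractJsonSerializer(typeof(T));
                using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
                {
                    return (T)serializer.ReadObject(stream);
                }
            }
        }

    Deserializing straight into a dictionary, as the question would also accept, still needs extra work with this serializer; the base64 trick above only addresses the byte[] concern.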

    Read the article

  • Set fields with introspection - Problem with String.valueOf(String)

    - by fabb
    Hey there! I'm setting public fields of the object 'this' via reflection. Both the field name and the value are given as Strings. I use several different field types: Boolean, Integer, Float, Double, an enum of my own, and String. It works with all of them except String: the exception that gets thrown says that no method with the signature String.valueOf(String) exists. For now I use a dirty instanceof workaround to detect whether the field is a String and, in that case, just copy the value to the field.

        private void setField(String field, String value) throws Exception {
            Field wField = this.getClass().getField(field);
            if (wField.get(this) instanceof String) {
                // TODO dirty hack
                // stupid workaround as java.lang.String.valueOf(java.lang.String) fails...
                wField.set(this, value);
            } else {
                Method parseMethod = wField.getType().getMethod("valueOf", new Class[]{String.class});
                wField.set(this, parseMethod.invoke(wField, value));
            }
        }

    Any ideas how to avoid that workaround? Do you think java.lang.String should support the method valueOf(String)? thanks, fabb

    Read the article

  • jquery timeout function not working properly

    - by 3gwebtrain
    Hi, I'm using setTimeout to display a block and append it to an 'li' on mouseover, and I want to remove the block (set it back to display: none) when the mouse leaves. The function basically works, but the problem is that even when my mouse just crosses the li, the block becomes visible. How do I avoid this? My code is:

        var thisLi;
        var storedTimeoutID;

        $("ul.redwood-user li,ul.user-list li").live("mouseover", function(){
            thisLi = $(this);
            var needShow = thisLi.children('a.copier-link');
            if($(needShow).is(':hidden')){
                storedTimeoutID = setTimeout(function(){
                    $(thisLi).children('a.copier-link').appendTo(thisLi).show();
                },3000);
            } else {
                storedTimeoutID = setTimeout(function(){
                    $(thisLi).siblings().children('a.copier-link').appendTo(thisLi).show();
                },3000);
            }
        });

        $("ul.redwood-user li,ul.user-list li").live("mouseleave", function(){
            clearTimeout(storedTimeoutID);
            //$('ul.redwood-user li').children('a.copier-link').hide();
            $('ul.user-list li').children('a.copier-link').hide();
        });

    Read the article

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call Python's distutils' or setuptools' setup() function in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file which looks like this (lifted verbatim from the distutils docs - the setuptools usage is almost identical):

        from distutils.core import setup
        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run python setup.py bdist_rpm --spec-only, which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'. How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line, and pass that as a parameter, instead?

    NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid using any kind of external command invocations. I'm looking specifically for a solution that runs setup() in the current interpreter.

    For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.

    Read the article

  • php: replacing double <br /> with </p><p>

    - by andufo
    I use nicEdit to write rich-text data in my CMS. The problem is that it generates strings like this:

        hello first line<br><br />this is a second line<br />this is a 3rd line

    Since this is for a news site, I'd much prefer the final HTML to be like this:

        <p>hello first line</p><p>this is a second line<br />this is a 3rd line</p>

    So my current plan is:

    1. Trim the $data for <br /> at the start/end of the string.
    2. Replace <br /><br /> with </p><p> (one single <br /> is allowed).
    3. Finally, add <p> at the start and </p> at the end.

    I only have the 3rd step so far. Can someone give me a hand with steps 1 and 2?

        function replace_br($data) {
            # step 3
            $data = '<p>'.$data.'</p>';
            return $data;
        }

    Thanks! PS: it would be even better to handle odd situations too. Example: in "hello<br /><br /><br /><br /><br />too much space" those 5 breaklines should also be converted to just one "</p><p>".

    Read the article

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that my API is stable in terms of specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test:

        WSArray a = WSArrayCreate();
        int foo = 5;
        WSArrayAppendValue(a, &foo);
        int *bar = WSArrayGetValueAtIndex(a, 0);
        if(&foo != bar)
            printf("Erroneous value returned\n");
        else
            printf("Good value returned\n");
        WSRelease(a);

    Of course, this tests some facts, like the array actually behaving as wanted with 1 value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know if some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking my unit test statically to my library. For example:

        #include <WS/WS.h>
        #include <WS/Collection/Array.c>

        static void TestArray(void)
        {
            WSArray a = WSArrayCreate();
            /* Structure members are available because we included Array.c */
            printf("%d\n", a->count);
        }

    Is that a good idea? Of course, the unit tests won't benefit from encapsulation, but they are there to ensure it's actually working.

    Read the article

  • Reasonable expectation to support new Operating Systems?

    - by Neil N
    My company has a desktop app originally developed for Windows XP. The original programmer has since been fired (fired with extreme prejudice I might add). I have fixed the app various times but overall try to avoid it, it is a mess and the only real way to fix it is to completely rewrite it, which could take a year. We have been trying to "forget" about this app, and instead steer clients towards our web version, which is more up to date, easier to maintain, easier to extend, and WAY easier to support. Most clients agree, the web version is just better all around. However we have one client that insists on using the desktop app. The app required a little duct tape to get working on Vista, but now completely breaks on Windows 7. I'm not even sure WHAT all the fixes are to get it working on Win7 (the current time estimate stands at "miracle") but after both installing the RELEASE build, and running the DEBUG build from Visual Studio, the app has errors on nearly every user action, and from what I can see from a high level test run, none of them are related. Since Windows 7 did not exist when this app was developed, is my company really expected to make all the required changes to make it function as "smoothly" as it did on XP?

    Read the article

  • EF Forced Concurrency Checks

    - by Imran
    Hi, I have an issue with EF 4.0 that I hope someone can help with. I currently have an entity that I want to update in a last-in-wins fashion (i.e. ignore concurrency checks and just overwrite what's in the db with what is submitted). It seems Entity Framework not only includes the primary key of the entity in the where clause of the generated sql, but also any foreign key fields. This is annoying as it means that I don't get true last-in-wins semantics, and I need to know what value the fk field had before the update or I get a concurrency exception. I am aware that this can be short-circuited by including a foreign key field as well as the navigation property on the entity. I would like to avoid this if possible as it's not a very clean solution. I was just wondering if there is any other way to override this behaviour? It seems like more of a bug than a feature. I have no problem with EF doing concurrency checks if I instruct it to do so, but not being able to bypass concurrency completely is a bit of a hindrance, as there are many valid scenarios where it is not needed.
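    One workaround commonly suggested for EF4 (a sketch, not the poster's code; it assumes the entity is tracked by an ObjectContext) is to catch the concurrency exception, refresh the entity's original values from the store with RefreshMode.ClientWins so the current values are kept, and save again, which effectively gives last-in-wins behaviour:

        using System.Data;            // OptimisticConcurrencyException
        using System.Data.Objects;    // ObjectContext, RefreshMode

        public static class LastInWins
        {
            // 'context' and 'entity' stand in for the poster's ObjectContext and tracked entity.
            public static void SaveOverwriting(ObjectContext context, object entity)
            {
                try
                {
                    context.SaveChanges();
                }
                catch (OptimisticConcurrencyException)
                {
                    // Reload the store values as the "original" snapshot, keep our current
                    // values, then retry so the UPDATE's WHERE clause matches the database.
                    context.Refresh(RefreshMode.ClientWins, entity);
                    context.SaveChanges();
                }
            }
        }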

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

    1. Kill the Cassandra process on one server of the cluster.
    2. Start it again, wait for the commit log to be written to disk, and kill it again.
    3. Make the modifications in the storage.xml file.
    4. Rename or delete the files in the data directories according to the changes we made.
    5. Start Cassandra.
    6. Go to 1 with the next server on the list.

    My questions would be:

    - Did I understand the process well? Is there any risk of data corruption?
    - During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is it a problem?
    - Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name?

    Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • Qt/MFC Migration Framework tool: properly exiting DLL?

    - by User
    I'm using the Qt/MFC Migration Framework tool following this example: http://doc.qt.nokia.com/solutions/4/qtwinmigrate/winmigrate-qt-dll-example.html

    The DLL I build is loaded by a 3rd-party MFC-based application. The 3rd-party app basically calls one of my exported DLL functions to start up my plugin and another function to shut it down. Currently I'm doing nothing in my shutdown function. When I load my DLL in the 3rd-party app, the startup function is called, my DLL starts successfully and I can see my message box. However, if I shut down my plugin and then try to start it again, I get the following error:

        Debug Error!
        Program: <my 3rd party app>
        Module: 4.7.1
        File: global\qglobal.cpp
        Line: 2262

        ASSERT failure in QWidget: "Widgets must be created in the GUI thread.",
        file kernel\qwidget.cpp line 1233

        (Press Retry to debug the application)
        Abort   Retry   Ignore

    This makes me think I'm not doing something needed to properly shut down my plugin. What do I need to do to shut it down properly?

    UPDATE: http://doc.qt.nokia.com/solutions/4/qtwinmigrate/winmigrate-walkthrough.html says:

        The DLL also has to make sure that it can be loaded together with other Qt based DLLs in the same process (in which case a QApplication object will probably exist already), and that the DLL that creates the QApplication object remains loaded in memory to avoid other DLLs using memory that is no longer available to the process.

    So I wonder if there is some problem where I need to somehow keep the original DLL loaded no matter what?

    Read the article

  • How should I handle this Optimistic Concurrency error in this Entity Framework code I have?

    - by Pure.Krome
    Hi folks, I have the following pseudo code in some Repository Pattern project that uses EF4.

        public void Delete(int someId)
        {
            // 1. Load the entity for that Id. If there is none, then null.
            // 2. If entity != null, then DeleteObject(..);
        }

    Pretty simple, but I'm getting a run-time error:

        ConcurrencyException: Store, Update, Insert or Delete statement affected an unexpected number of rows (0).

    Now, this is what is happening:

    1. Two instances of EF4 are running in the app at the same time.
    2. Instance A calls delete. Instance B calls delete a nanosecond later.
    3. Instance A loads the entity. Instance B also loads the entity.
    4. Instance A now deletes that entity - cool bananas.
    5. Instance B tries to delete the entity, but it's already gone. As such, the affected row count is 0 when it expected 1... or something like that.

    Basically, it figured out that the item it is supposed to delete didn't delete (because it was deleted a split second ago). I'm not sure if this is like a race condition or something. Anyway, are there any tricks I can do here so the 2nd call doesn't crash? I could make it into a stored procedure, but I'm hoping to avoid that right now. Any ideas? I'm wondering if it's possible to lock that row (and that row only) when the select is called, forcing Instance B to wait until the row lock has been released. By that time, the row is deleted, so when Instance B does its select, the data is not there, so it will never delete.
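    One way to make the second delete a harmless no-op (a sketch, not code from the post: the repository, entity set and Widget type below are made up, and it assumes an ObjectContext-based model): since the end state the caller wants - the row being gone - is already true, the concurrency exception raised by the losing instance can simply be swallowed and the stale entity detached.

        using System.Data;             // OptimisticConcurrencyException
        using System.Data.Objects;     // ObjectContext
        using System.Linq;

        public class WidgetRepository                  // hypothetical repository
        {
            private readonly ObjectContext _context;
            public WidgetRepository(ObjectContext context) { _context = context; }

            public void Delete(int someId)
            {
                // 1. Load the entity for that Id (null if it is already gone).
                var entity = _context.CreateObjectSet<Widget>()
                                     .SingleOrDefault(w => w.Id == someId);
                if (entity == null)
                    return;                            // already deleted: nothing to do

                // 2. Delete it, and treat "someone beat us to it" as success.
                _context.DeleteObject(entity);
                try
                {
                    _context.SaveChanges();
                }
                catch (OptimisticConcurrencyException)
                {
                    // The row vanished between our load and our save. The state we
                    // wanted already holds, so stop tracking the entity and move on.
                    _context.Detach(entity);
                }
            }
        }

        public class Widget                            // hypothetical entity
        {
            public int Id { get; set; }
        }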

    Read the article

  • Dynamically styling an ASP.NET drop down and remembering the styles on an aborted submit

    - by peacedog
    So, I've got an ASP.NET drop down list (this is .NET 2.0). I'm binding it with data. Basically, when the page loads and it's not a postback, we fetch record data, bind all the drop downs, and set them to their appropriate values (strictly speaking we: initialize the page with a basic set of data from the DB, bind the drop downs from the DB, fetch the actual record data from the DB, and set the drop downs to the appropriate settings at that time). What I want to do is selectively style the list options. The database returns 3 items: ID, Text, and a flag indicating whether the record is "active" (and I'll style appropriately). It's easy enough to do and I've done it.

    My problem is what happens when a form submission is halted. We have slightly extended the Page class and created an AddError() method, which builds a list of errors from failed business rule checks and then displays them in a ValidationSummary. It works something like this, in the submit button's click event:

        CheckBizRules();
        if (Page.IsValid)
        {
            SaveData();
        }

    If any business rule check fails, the Page will not be valid. The problem is, when the page re-renders (viewstate is enabled, but no data is rebound) my beautiful conditional styling is now sadly gone, off to live in the land of the missing socks. I need to preserve it. I was hoping to avoid another DB call here (e.g. getting the list data back from the DB again if the page isn't valid, just for the purpose of re-styling the list), but it's not the end of the world if that's my course of action. I was hoping someone might have an alternative suggestion. I couldn't think of how to phrase this question better; if anyone has any suggestions or needs clarification, don't hesitate to get it, by force if need be. ;)
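    One viewstate-based approach (a sketch, not the poster's code: the page class, the ddlRecords control and the ViewState key are invented) is to remember which item values were flagged inactive when the list is first bound, then re-apply the styling on every render. ListItem attributes added in code are not round-tripped in viewstate, so they have to be re-applied, but the flags themselves are small enough to keep in viewstate and spare the extra DB call.

        using System;
        using System.Collections.Generic;
        using System.Web.UI.WebControls;

        public partial class RecordEditPage : System.Web.UI.Page
        {
            protected DropDownList ddlRecords;     // hypothetical control declared in the .aspx

            private List<string> InactiveValues
            {
                get { return (List<string>)(ViewState["InactiveValues"] ?? new List<string>()); }
                set { ViewState["InactiveValues"] = value; }
            }

            // Call this right after binding, with the IDs the DB flagged as not active.
            protected void RememberInactive(IEnumerable<string> inactiveIds)
            {
                InactiveValues = new List<string>(inactiveIds);
            }

            protected override void OnPreRender(EventArgs e)
            {
                base.OnPreRender(e);

                // Re-apply the conditional styling on every request, postback or not,
                // since ListItem attributes set in code are lost between requests.
                foreach (ListItem item in ddlRecords.Items)
                {
                    if (InactiveValues.Contains(item.Value))
                        item.Attributes.Add("style", "color: #999; font-style: italic;");
                }
            }
        }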

    Read the article

  • vector::erase with pointer member

    - by matt
    I am manipulating vectors of objects defined as follows:

        class Hyp {
        public:
            int x;
            int y;
            double wFactor;
            double hFactor;
            char shapeNum;
            double* visibleShape;
            int xmin, xmax, ymin, ymax;

            Hyp(int xx, int yy, double ww, double hh, char s):
                x(xx), y(yy), wFactor(ww), hFactor(hh), shapeNum(s)
            { visibleShape = 0; shapeNum = -1; };

            // Copy constructor necessary for support of vector::push_back() with visibleShape
            Hyp(const Hyp &other)
            {
                x = other.x;
                y = other.y;
                wFactor = other.wFactor;
                hFactor = other.hFactor;
                shapeNum = other.shapeNum;
                xmin = other.xmin;
                xmax = other.xmax;
                ymin = other.ymin;
                ymax = other.ymax;
                int visShapeSize = (xmax-xmin+1)*(ymax-ymin+1);
                visibleShape = new double[visShapeSize];
                for (int ind = 0; ind < visShapeSize; ind++) {
                    visibleShape[ind] = other.visibleShape[ind];
                }
            };

            ~Hyp() { delete[] visibleShape; };
        };

    When I create a Hyp object, allocate/write memory to visibleShape and add the object to a vector with vector::push_back, everything works as expected: the data pointed to by visibleShape is copied using the copy constructor. But when I use vector::erase to remove a Hyp from the vector, the other elements are moved correctly EXCEPT the pointer members visibleShape, which are now pointing to wrong addresses! How can I avoid this problem? Am I missing something?

    Read the article

  • Modify request querystring parameters to build a new link without resorting to string manipulation

    - by Andrew M
    Hi All, I want to dynamically populate a link with the URI of the current request, but set one specific query string parameter. All other querystring parameters (if there are any) should be left untouched, and I don't know in advance what they might be. E.g., imagine I want to build a link back to the current page, but with the querystring parameter "valueOfInterest" always set to "wibble" (I'm doing this from the code-behind of an aspx page, .NET 3.5 in C# FWIW). E.g., a request for either of these two:

        /somepage.aspx
        /somepage.aspx?valueOfInterest=sausages

    would become:

        /somepage.aspx?valueOfInterest=wibble

    And most importantly (perhaps) a request for:

        /somepage.aspx?boring=something
        /somepage.aspx?boring=something&valueOfInterest=sausages

    would preserve the boring params to become:

        /somepage.aspx?boring=something&valueOfInterest=wibble

    Caveats: I'd like to avoid string manipulation if there's something more elegant and more robust in ASP.NET. However, if there isn't something more elegant, so be it. I've done (a little) homework: I found a blog post which suggested copying the request into a local HttpRequest object, but that still has a read-only collection for the querystring params. I've also had a look at using a Uri object, but that doesn't seem to offer a modifiable querystring collection either.
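    A common no-string-manipulation approach (a sketch, not from the post: the page class name is invented) is HttpUtility.ParseQueryString, which returns a writable NameValueCollection whose ToString() re-encodes the query, so the link can be rebuilt from Request.Url.AbsolutePath plus the adjusted collection:

        using System.Collections.Specialized;
        using System.Web;
        using System.Web.UI;

        public partial class SomePage : Page          // hypothetical code-behind
        {
            // Builds a link to the current page with valueOfInterest forced to "wibble",
            // leaving every other querystring parameter untouched.
            protected string BuildLink()
            {
                // Request.QueryString itself is read-only; ParseQueryString gives a writable copy.
                NameValueCollection query = HttpUtility.ParseQueryString(Request.QueryString.ToString());
                query["valueOfInterest"] = "wibble";      // adds the key, or overwrites it if present

                string path = Request.Url.AbsolutePath;   // e.g. /somepage.aspx
                return path + "?" + query;                // query.ToString() re-urlencodes the pairs
            }
        }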

    Read the article

  • SQL Server backup and move backup file: How to cope with file permissions?

    - by Stefan Steinegger
    With our product we have a simple backup tool for the SQL Server database. This tool should just make a full backup and restore, to and from any folder. Of course, the user (usually an administrator) needs permission to write to the target folder. To avoid the problem of not being able to perform a backup to a network drive, I write the backup to a temp file in the SQL Server backup directory, then move it to the target folder. This requires permission to delete the temporary file from the SQL Server's backup folder. Restore is the same in the other direction. This seemed to work fine until someone tested it on Vista, where the user does not have write access to the backup folder by default. There are many ways to solve this, but none of them seem really nice. One solution would be to find another folder for the temporary file; both the SQL Server user and the administrator performing the backup would need read and write permissions there. Is there such a directory? Any other ideas? Thanks a lot.
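    For reference, the backup-then-move flow itself can stay very small; the open question above is purely which staging folder both accounts can reach. A sketch (not the product's actual code; names and the staging-folder choice are assumptions) of the two-step approach:

        using System.Data.SqlClient;
        using System.IO;

        public static class BackupHelper
        {
            // Backs up to a staging folder the SQL Server service account can write to,
            // then moves the file with the *user's* permissions to the chosen target.
            public static void BackupAndMove(string connectionString, string database,
                                             string stagingFolder, string targetFile)
            {
                string stagingFile = Path.Combine(stagingFolder, database + ".bak");

                using (var connection = new SqlConnection(connectionString))
                using (var command = connection.CreateCommand())
                {
                    // BACKUP DATABASE accepts variables for both the database name and the path.
                    command.CommandText = "BACKUP DATABASE @db TO DISK = @file WITH INIT";
                    command.Parameters.AddWithValue("@db", database);
                    command.Parameters.AddWithValue("@file", stagingFile);
                    command.CommandTimeout = 0;          // large databases can take a while
                    connection.Open();
                    command.ExecuteNonQuery();
                }

                // Copy + delete (rather than Move) so a failure to delete the staging file
                // does not lose the backup that already reached the target folder.
                File.Copy(stagingFile, targetFile, true);
                File.Delete(stagingFile);
            }
        }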

    Read the article

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I have created over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects will basically be in one place - my collection. A nice side effect is that I can avoid database calls to get data I already have (even if I updated it after retrieval it's still up to date, provided of course no other process updated it, but that's a different concern). I don't want to be calling new Customer("James Thomas") again if I already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the appdomain - some out of sync, others in sync - and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better, if possible). I can't use regular collections like List or ArrayList, because I can't pass parameters by their real local reference into their existing Add() methods - I'd be creating them using ref - so that's no good, I think. So how can this be implemented, and can it be implemented at all? A 'linked list' type of class whose methods all work with ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, something like RefList<T>.Add(ref T obj)? Bottom line: I don't want to re-create an object if I've already created it before during the application's lifetime, unless I decide to re-create it explicitly (maybe it's out of date or something, so I have to fetch it again from the db). Are there alternatives, maybe?
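    One pattern often suggested for this is an identity map: a central registry keyed by ID hands out the single shared instance per key, so nothing else ever calls new Customer(...) for an object that already exists. Reference types make the ref/out concern moot, since every caller simply holds a reference to the same instance. A minimal sketch (not from the post; type and member names are illustrative):

        using System;
        using System.Collections.Generic;

        // One shared instance per key, created on first request and reused afterwards.
        public class ObjectRegistry<TKey, TValue> where TValue : class
        {
            private readonly Dictionary<TKey, TValue> _items = new Dictionary<TKey, TValue>();
            private readonly object _gate = new object();

            public TValue GetOrCreate(TKey key, Func<TKey, TValue> factory)
            {
                lock (_gate)
                {
                    TValue value;
                    if (!_items.TryGetValue(key, out value))
                    {
                        value = factory(key);          // e.g. load from the database exactly once
                        _items[key] = value;
                    }
                    return value;
                }
            }

            // Force a re-fetch the next time this key is requested (stale/out-of-date object).
            public void Invalidate(TKey key)
            {
                lock (_gate) { _items.Remove(key); }
            }
        }

        // Usage sketch:
        //   var customers = new ObjectRegistry<int, Customer>();
        //   Customer c = customers.GetOrCreate(42, id => LoadCustomerFromDb(id));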

    Read the article

  • RichEdit VCL and URLs. Workarounds for OnPaint Issues.

    - by HX_unbanned
    So, the issue is with the thing Delphi progies are scared to death of - the Rich Edit control in Windows (XP and pre-XP versions). Situation: I have added EM_AUTOURLDETECTION in the form's OnCreate; the target is RichEdit1. Then I have a form that is "collapsed" after it is shown. The RichEdit control is static, visible and enabled, but it is "hidden" because the form window is collapsed. I can expand and collapse the form using Button1, by changing the form's Constraints and Size properties. The first time I expand the form, the URL inside the RichEdit1 control is highlighted. BUT after the second, third, fourth, etc. time I collapse and expand the form, the RichEdit1 control does not highlight the URL anymore. I have tried EM_SETTEXTMODE messages, also WM_UPDATEUISTATE, and also a basic WM_TEXT message - no luck. It seems like this message really works (enables detection) while sending keyboard strokes (virtual keycodes), but not when the text has been modified. Also, I am thinking of rewriting the code to create the RichEdit control dynamically. Would this fix the problem? Maybe the solution is to override the OnPaint / OnDraw method to avoid losing the highlight (formatting) when collapsing or expanding the form? What is weird is that my Embarcadero documentation says this function must work any time the text has been modified. Why does it not work? Any help appreciated. I am making this Community Wiki because this is a common problem and together we can find a solution, right? :)

    Follow-ups and related questions:
    http://stackoverflow.com/questions/738694/override-onpaint
    http://stackoverflow.com/questions/478071/how-to-autodetect-urls-in-richedit-2-0
    http://www.vbforums.com/archive/index.php/t-59959.html

    Read the article

  • How to set up single array or dictionary for use in multiple datasources?

    - by Roman
    I have multiple TableView datasources that need to display lists of objects from the same pool, depending on a certain property. E.g.:

    - if object.flag1 is set, it will show up in TableView1
    - if object.flag2 is set, it will show up in TableView2

    The obvious way would be to have a separate array for each TableView, but the same object may appear in different arrays. Also, I need to update objects very often, or access all objects through the same array. How do I set up a single dictionary or array that holds all objects in one structure?

    To put it another way: when the table view or selection changes, the application needs to redraw the TableViews with the new data. The application has to access the pool of objects and search through them with an iterator, accessing each object and its properties. I think this is an expensive operation and I want to avoid it. Perhaps by making the global pool of objects a dictionary and exposing object properties as dictionary fields; then, instead of iterating the global pool of objects, I could query the global pool dictionary like a database, selecting objects whose fields match particular criteria. Does anyone know an example of doing that?

    Read the article

  • Code Organization Conundrum: Web Project With Multiple Supporting DLLs?

    - by Code Sherpa
    Hi. I am trying to get a handle on the best practice for code organization within my project. I have looked around on the internet for good examples and, so far, I have seen examples of a web project with one or multiple supporting class libraries that it references, or a web project with sub-folders that follow its namespace conventions. Assuming there is no single right answer, this is what I currently have for code organization:

    - MyProjectWeb: This is my web site. I am referencing my class libraries here.
    - MyProject.DLL: As the base namespace, I am using this DLL for files that need to be generally consumable. For example, my class "Enums" that has all the enumerations in my project lives there, as does the class MyProjectException for all exception handling.
    - MyProject.IO.DLL: This is a grouping of maybe 20 files that handle file upload and download (so far).
    - MyProject.Utilities.DLL: All my common classes and methods bunched together in one generally consumable DLL. Each class follows an "XHelper" convention, such as SqlHelper, AuthHelper, SerializationHelper, and so on.
    - MyProject.Web.DLL: I am using this DLL as the main client interface. Right now, the majority of class files here are: 1) properties (such as School, Location, Account, Posts) and 2) authorization stuff (such as custom membership, custom role, and custom profile providers).

    My question is simply: does this seem logical? Also, how do I avoid having to cross-reference DLLs from one project library to the next? For example, MyProject.Web.DLL uses code from MyProject.Utilities.DLL, and MyProject.Utilities.DLL uses code from MyProject.DLL. Is this solved by clicking on Properties and selecting "Dependencies"? I tried that but still don't seem to be able to access the namespaces of the assembly I selected. Do I have to reference every assembly I need in each class library? Responses appreciated and thanks for your patience.

    Read the article

  • Algorithm(s) for rearranging simple symbolic algebraic expressions

    - by Gabe Johnson
    Hi, I would like to know if there is a straightforward algorithm for rearranging simple symbolic algebraic expressions. Ideally I would like to be able to rewrite any such expression with one variable alone on the left-hand side. For example, given the input:

        m = (x + y) / 2

    ... I would like to be able to ask about x in terms of m and y, or y in terms of x and m, and get these:

        x = 2*m - y
        y = 2*m - x

    Of course we've all done this algorithm on paper for years. But I was wondering if there was a name for it. It seems simple enough, but if somebody has already cataloged the various "gotchas" it would make life easier. For my purposes I won't need it to handle quadratics.

    (And yes, CAS systems do this, and yes I know I could just use them as a library. I would like to avoid such a dependency in my application. I really would just like to know if there are named algorithms for approaching this problem.)

    Read the article

  • protecting COM interfaces from exceptions

    - by rmeador
    I have several dozen objects exposed through COM interfaces, each of which has many methods, totaling a few hundred methods. These interfaces expose business objects from my app to a scripting engine. I have been given the task of protecting every single one of these methods from exceptions being thrown (to catch them and return an error using COM's Error() function, which incidentally I can find no documentation on because it's impossible to google). To my understanding, this requires that I add a try/catch around the guts of each one of these methods. The catch blocks are going to be similar or identical for each and every one of these hundreds of methods, which strongly smells of a problem (massively violates the DRY principle), but I can't think of any way to avoid changing every method. As far as I can tell, these methods are invoked directly by COM, with no intervening code that I can hook into to catch the exceptions. My current best idea is to make a macro for the catch block, but that has its own sort of code smell. Can anyone come up with a better approach? BTW, my app's exceptions do not derive from std::exception, so if there is some way of COM automatically handling standard exceptions, it won't help. And I sadly cannot change the existing exceptions to derive from std::exception.

    Read the article

  • What is a good programming language/environment for Linux database applications?

    - by Dkellygb
    I could use some advice on my move from the Windows world to Linux. For my business, I have used VB6 and Microsoft Access with both Access databases and SQL server in the past. The easy to use forms, report writers and programming language were perfect for CRUD apps and analysis for our small hotel/restaurant business. After using Linux at home for some time I would like to convert our small business. Our server is already a Linux box using Samba. I am happy with the OpenOffice.org applications instead of Microsoft Office. The only thing which is holding me back is a desktop database application where I can develop the forms and reports we require. Base does not seem to be up to the job yet from my experience. I would like something like VB.Net with visual studio (express) but I would like to avoid Mono – I just don’t see the point of it. (You can correct me if I’m wrong.) But a good collection of forms, controls and a good report writer would be ideal. I have looked at web based stuff like Ruby on Rails, but I think a webserver for our 5 pc network is overkill. I don’t mind running a proper database on our Ubuntu 9.10 server. I may have exposed a few prejudices above but my mind is open. Any thoughts?

    Read the article

  • How to delete a QProcess instance correctly?

    - by Kopfschmerzen
    Hi everyone! I have a class looking like this:

        class FakeRunner : public QObject
        {
            Q_OBJECT
        private:
            QProcess* proc;
        public:
            FakeRunner();

            int run()
            {
                if (proc)
                    return -1;
                proc = new QProcess();
                QStringList args;
                QString programName = "fake.exe";
                connect(proc, SIGNAL(started()), this, SLOT(procStarted()));
                connect(proc, SIGNAL(error(QProcess::ProcessError)), this, SLOT(procError(QProcess::ProcessError)));
                connect(proc, SIGNAL(finished(int, QProcess::ExitStatus)), this, SLOT(procFinished(int, QProcess::ExitStatus)));
                proc->start(programName, args);
                return 0;
            };

        private slots:
            void procStarted() {};
            void procFinished(int, QProcess::ExitStatus) {};
            void procError(QProcess::ProcessError);
        };

    Since "fake.exe" does not exist on my system, proc emits the error() signal. If I handle it like the following, my program crashes:

        void FakeRunner::procError(QProcess::ProcessError rc)
        {
            delete proc;
            proc = 0;
        }

    It works well, though, if I don't delete the pointer. So, the question is: how (and when) should I delete the pointer to QProcess? I believe I have to delete it to avoid a memory leak. FakeRunner::run() can be invoked many times, so the leak, if there is one, will grow. Thanks!

    Read the article

  • ProtoInclude for fields?

    - by Big
    I have a simple object:

        [ProtoContract]
        public class DataChangedEventArgs<T> : EventArgs
        {
            private readonly object key;
            private readonly T data;
            private readonly DataChangeType changeType;

            ///<summary>
            /// Key to identify the data item
            ///</summary>
            public object Key
            {
                get { return key; }
            }

            [ProtoMember(2, IsRequired = true)]
            public T Data
            {
                get { return data; }
            }

            [ProtoMember(3, IsRequired = true)]
            public DataChangeType ChangeType
            {
                get { return changeType; }
            }
        }

    and I have a problem with the key. Its type is object, but it can be either int, long or string. I would intuitively use a ProtoInclude attribute to say "expect these types", but unfortunately that is a class-only attribute. Does anybody have any idea how I could work around this? For background, the public object Key is here for historical reasons (and is used all over the place), so I would very much like to avoid the mother of all refactorings ;-) Any chance I could get this to serialize, even force it to serialize as a string?
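    One workaround sometimes used with protobuf-net (a sketch, not from the post; worth verifying against the protobuf-net version in use) is to leave Key typed as object but give the serializer a string-typed pass-through property, so the key always travels as text. Note that the readonly modifier on the backing field has to be dropped so the setter can restore the value, and the original runtime type (int/long/string) is lost on deserialization unless a type tag is stored alongside it.

        using System;
        using System.Globalization;
        using ProtoBuf;

        [ProtoContract]
        public class DataChangedEventArgs<T> : EventArgs
        {
            // readonly dropped here so deserialization can write the field back.
            private object key;

            // ... Data and ChangeType members as in the post, keeping tags 2 and 3 ...

            public object Key
            {
                get { return key; }
            }

            // Serialized stand-in for Key; tag 1 is unused by the other members.
            // protobuf-net can pick up private members decorated with ProtoMember.
            [ProtoMember(1)]
            private string KeyAsString
            {
                get { return key == null ? null : Convert.ToString(key, CultureInfo.InvariantCulture); }
                set { key = value; }   // comes back as a string; convert to int/long where needed
            }
        }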

    Read the article
