Search Results

Search found 8056 results on 323 pages for 'external'.


  • C++: Mixing printf with wprintf (or cout with wcout)

    - by Bo Jensen
    I know you should not mix printf/cout with wprintf/wcout, but I have a hard time finding a good answer as to why, and whether it is possible to get around it. The problem is that I use an external library that prints with printf, while my own code uses wcout. In a simple example it works fine, but from my full application it simply does not print the printf statements. If this is really a limitation, then there would be many libraries out there which cannot work together with wide-printing applications. Any insight on this is more than welcome.

    Update: I boiled it down to:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>   /* added: needed for strcmp */
        #include <iostream>
        #include <readline/readline.h>
        #include <readline/history.h>

        int main()
        {
            char *buf;
            std::wcout << std::endl; /* ADDING THIS LINE MAKES PRINTF VANISH!!! */
            rl_bind_key('\t', rl_abort); // disable auto-complete
            while ((buf = readline("my-command : ")) != NULL)
            {
                if (strcmp(buf, "quit") == 0)
                    break;
                std::wcout << buf << std::endl;
                if (buf[0] != 0)
                    add_history(buf);
            }
            free(buf);
            return 0;
        }

    So I guess it might be a flushing problem, but it still looks strange to me; I have to check up on it.

    Read the article

  • How to make active services highly available?

    - by Jader Dias
    I know that with Network Load Balancing and Failover Clustering we can make passive services highly available. But what about active apps? Example: one of my apps retrieves some content from an external resource at a fixed interval. I have imagined the following scenarios:

    1. Run it on a single machine. Problem: if this instance fails, the content won't be retrieved.
    2. Run it on each machine of the cluster. Problem: the content will be retrieved multiple times.
    3. Have it on each machine of the cluster, but run it on only one of them. Each instance will have to check some sort of common resource to decide whether it is its turn to do the task or not.

    When I was thinking about solution #3 I wondered what the common resource should be. I have thought of creating a table in the database that we could use to get a global lock. Is this the best solution? How do people usually do this? By the way, it's a C# .NET WCF app running on Windows Server 2008.
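
    A common-resource check along the lines of scenario #3 can lean on the database itself. Below is a minimal sketch, assuming SQL Server as the shared database and using its built-in sp_getapplock; the connection string, lock name, and surrounding scheduling loop are illustrative assumptions, not part of the original question:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class ContentFetchCoordinator
        {
            // Assumed connection string; replace with the real shared database.
            const string ConnStr = "Server=dbserver;Database=Coordination;Integrated Security=true";

            // Returns true if this node won the cluster-wide lock and should do the work.
            // The connection is assumed open; holding the session holds the lock.
            static bool TryBecomeActiveNode(SqlConnection conn)
            {
                using (var cmd = new SqlCommand("sp_getapplock", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Resource", "ContentFetchJob"); // illustrative lock name
                    cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
                    cmd.Parameters.AddWithValue("@LockOwner", "Session");
                    cmd.Parameters.AddWithValue("@LockTimeout", 0); // don't queue; lose fast
                    var ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
                    ret.Direction = ParameterDirection.ReturnValue;
                    cmd.ExecuteNonQuery();
                    return (int)ret.Value >= 0; // 0 or 1 means the lock was granted
                }
            }
        }

    On each interval, every node calls TryBecomeActiveNode; only the node that acquires the lock performs the fetch, and the others simply skip that interval and retry on the next one.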

    Read the article

  • What information should a SVN/Versioned file commit comment contain?

    - by RenderIn
    I'm curious what kind of content should be in a versioned file commit comment. Should it describe generally what changed (e.g. "The widget screen was changed to display only active widgets"), or should it be more specific (e.g. "A new condition was added to the where clause of the fetchWidget query to retrieve only active widgets by default")?

    How atomic should a single commit be? Just the file containing the updated query in a single commit (e.g. "Updated the widget screen to display only active widgets by default"), or should that and several other changes plus interface changes to a screen share the same commit, with a more general description like "Updated the widget screen: A) display only active widgets by default B) added button to toggle showing inactive widgets"?

    I see Subversion commit comments being used very differently and was wondering what others have had success with. Some comments are as brief as "updated files", while others are many paragraphs long, and others are formatted so that they can be queried and associated with some external system such as JIRA. I used to be extremely descriptive about the reason for the change as well as the specific technical changes. Lately I've been scaling back and just giving a general "This is what I changed on this page" kind of comment.

    Read the article

  • Extension methods for encapsulation and reusability

    - by tzaman
    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" over instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobb's article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this complicates the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I started programming in C#, I've had a feeling that here is the ultimate expression of the concept they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix: the first being interfaces, and the second extension methods.

    - Interfaces allow a class to formally specify its public contract, the methods and properties that it exposes to the world. Any other class can choose to implement the same interface and fulfill that same contract.
    - Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states: "In general, we recommend that you implement extension methods sparingly and only when you have to." So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?
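
    To make the pattern concrete, here is a minimal sketch of an extension method defined against an interface; the IShape contract and the Area helper are invented for illustration:

        using System;

        public interface IShape
        {
            double Width { get; }
            double Height { get; }
        }

        public static class ShapeExtensions
        {
            // Relies solely on IShape's public contract, yet reads like a member.
            public static double Area(this IShape shape) => shape.Width * shape.Height;
        }

    Any implementer of IShape gets shape.Area() for free, even though Area never touches non-public state; that is exactly the "non-member, non-friend" shape of the C++ advice.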

    Read the article

  • How do I populate a MEF plugin with data that is not hard coded into the assembly?

    - by hjoelr
    I am working on a program that will communicate with various pieces of hardware. Because of the varied nature of the items it communicates with and controls, I need a different "driver" for each piece of hardware. This made me think that MEF would be a great way to implement those drivers as plugins that can be added even after the product has been released. I've looked at a lot of examples of how to use MEF, but the question I haven't been able to find an answer to is how to populate a MEF plugin with external data (e.g. from a database). All the examples I can find have the "data" hard-coded into the assembly, like the following example:

        [Export(typeof(IRule))]
        public class RuleInstance : IRule
        {
            public void DoIt() { }
            public string Name { get { return "Rule Instance 3"; } }
            public string Version { get { return "1.1.0.0"; } }
            public string Description { get { return "Some Rule Instance"; } }
        }

    What if I want Name, Version and Description to come from a database? How would I tell MEF where to get that information? I may be missing something very obvious, but I don't know what it is. Thanks! Joel
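
    One direction this could take (a sketch, not the canonical MEF answer): keep the [Export] attribute for discovery, but have the part load its descriptive values at construction time. IRuleRepository, RuleMetadata and LoadMetadata below are hypothetical names invented for this illustration:

        using System.ComponentModel.Composition;

        // Hypothetical support types, invented for this sketch.
        public class RuleMetadata
        {
            public string Name { get; set; }
            public string Version { get; set; }
            public string Description { get; set; }
        }

        public interface IRuleRepository
        {
            RuleMetadata LoadMetadata(string ruleKey); // e.g. a SELECT by key
        }

        [Export(typeof(IRule))]
        public class DatabaseBackedRule : IRule
        {
            private readonly RuleMetadata _meta;

            // MEF injects the repository, which is itself an exported part.
            [ImportingConstructor]
            public DatabaseBackedRule(IRuleRepository repository)
            {
                _meta = repository.LoadMetadata("DatabaseBackedRule");
            }

            public void DoIt() { /* drive the hardware */ }
            public string Name { get { return _meta.Name; } }
            public string Version { get { return _meta.Version; } }
            public string Description { get { return _meta.Description; } }
        }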

    Read the article

  • TortoiseSVN lists files as modified, but they are identical

    - by BJ Safdie
    I am merging a hot fix from our QA branch back into our Dev branch. Five files have changed. I do a fresh checkout of the Dev branch. I then do a merge (range of revisions) from QA into the Dev working copy. It brings in five files, and there is a conflict on an externals and an ignore property, which I resolve by "using local" (Dev). When I check modifications or commit, I expect to see the five files I merged as the only changes. However, I get close to 700 "modified" files showing up in the commit dialog. If I select one of these files and "Compare with base," WinMerge comes up and says the files are identical. I have tried this with the file dates set to "last committed" and not. Why are all of these files showing up as modified when they are identical? What in the merge is causing this? How do I prevent SVN/TortoiseSVN from getting confused this way in the future?

    Read the article

  • iPod library song path access

    - by Narendra Kumar
    I have studied a lot but did not find a good answer. My problem is that I am calculating the beats per minute of a song. I used the BASS API for that. I am able to get the BPM of a file in my resource folder, but I need the BPM of every song in the iPod library. I am getting the song's path from the MPMediaItemPropertyAssetURL property of MPMediaItem, but when I pass it to the API, it says the stream can't load (BASS_StreamCreateFile() fails). As I see it, I am not getting the right path to the song. How can I get a valid path? Has anyone accessed an iPod library song with an external API? Please help me. Thanks. The code is this:

        NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
        NSString *respath = [NSString stringWithFormat:@"%@", [assetURL absoluteString]];

        BASS_SetConfig(BASS_CONFIG_IOS_MIXAUDIO, 0); // Disable mixing. To be called before BASS_Init.
        if (HIWORD(BASS_GetVersion()) != BASSVERSION) {
            NSLog(@"An incorrect version of BASS was loaded");
        }
        // Initialize default device.
        if (!BASS_Init(-1, 44100, 0, NULL, NULL)) {
            // textView.text = [NSString stringWithFormat:@"%@ CAN'T Load Stream", textView.text];
        }
        DWORD chan1;
        if (!(chan1 = BASS_StreamCreateFile(FALSE, [respath UTF8String], 0, 0, BASS_SAMPLE_LOOP))) {
            NSLog(@"Can't load stream!");
            textView.text = [NSString stringWithFormat:@"%@ not loading...", textView.text];
        }
        mainStream = BASS_StreamCreateFile(FALSE, [respath cStringUsingEncoding:NSUTF8StringEncoding], 0, 0,
                                           BASS_SAMPLE_FLOAT | BASS_STREAM_PRESCAN | BASS_STREAM_DECODE);
        float playBackDuration = BASS_ChannelBytes2Seconds(mainStream,
                                                           BASS_ChannelGetLength(mainStream, BASS_POS_BYTE));
        NSLog(@"Play back duration is %f", playBackDuration);
        HSTREAM bpmStream = BASS_StreamCreateFile(FALSE, [respath UTF8String], 0, 0,
                                                  BASS_STREAM_PRESCAN | BASS_SAMPLE_FLOAT | BASS_STREAM_DECODE);
        // BASS_ChannelPlay(bpmStream, FALSE);
        BpmValue = BASS_FX_BPM_DecodeGet(bpmStream, 0.0, playBackDuration, MAKELONG(45, 256),
                                         BASS_FX_BPM_MULT2 | BASS_FX_BPM_MULT2 | BASS_FX_FREESOURCE,
                                         (BPMPROCESSPROC *)proc);
        textView.text = [NSString stringWithFormat:@"%@ %f", textView.text, BpmValue];

    Read the article

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse:

    - certain small regions of relatively high density;
    - relatively few isolated non-empty cells;
    - the amount of the grid in use is too large to implement naively, but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that);
    - this needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood)
    - Set individual cells or blit small regions (the player is making a move)
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview)
    - Find some regions with approximately a given density (player spawning location)
    - Approximate shortest path through gaps of at most some small constant number of empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction while searching)
    - Approximate convex hull for a region

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively few external dependencies (preferably avoiding the need for a persistent process). What advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place, and how would you modify that approach if they were? Thanks a lot, everyone!
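
    For a feel of the usual starting point, here is a minimal sketch of a chunked sparse grid: only non-empty cells are stored, grouped into fixed-size chunks so that each chunk can later persist as one database row. The chunk size and all names are assumptions for illustration:

        using System.Collections.Generic;

        public class SparseGrid<T>
        {
            private const int ChunkSize = 64; // cells per chunk edge (an assumption)
            private readonly Dictionary<(long cx, long cy), Dictionary<(long x, long y), T>> _chunks
                = new Dictionary<(long cx, long cy), Dictionary<(long x, long y), T>>();

            public void Set(long x, long y, T value)
            {
                var key = (FloorDiv(x, ChunkSize), FloorDiv(y, ChunkSize));
                if (!_chunks.TryGetValue(key, out var chunk))
                    _chunks[key] = chunk = new Dictionary<(long x, long y), T>();
                chunk[(x, y)] = value; // one chunk = one DB row when persisted
            }

            public bool TryGet(long x, long y, out T value)
            {
                value = default(T);
                return _chunks.TryGetValue((FloorDiv(x, ChunkSize), FloorDiv(y, ChunkSize)), out var chunk)
                    && chunk.TryGetValue((x, y), out value);
            }

            // Floor division so negative coordinates land in the correct chunk.
            private static long FloorDiv(long a, long b) => a >= 0 ? a / b : ~(~a / b);
        }

    The neighborhood query then touches only the handful of chunks overlapping the requested rectangle, and the density/outline queries can work from per-chunk cell counts without loading cell contents.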

    Read the article

  • Why is the Apache StringUtils class not recognized in Android?

    - by Maxood
    Why can't org.apache.commons.lang.StringUtils be imported in Android by default? Do I have to include an external library? If so, where can I find that library on the web?

        package com.myapps.urlencoding;

        import android.app.Activity;
        import org.apache.commons.lang.StringUtils;

        public class EncodeIdUtil extends Activity {
            /** Called when the activity is first created. */
            private static Long multiplier = Long.parseLong("1zzzz", 36);

            /**
             * Encodes the id.
             * @param id the id to encode
             * @return encoded string
             */
            public static String encode(Long id) {
                return StringUtils.reverse(Long.toString((id * multiplier), 35));
            }

            /**
             * Decodes the encoded id.
             * @param encodedId the encodedId to decode
             * @return the Id
             * @throws IllegalArgumentException if encodedId is not a validly encoded id.
             */
            public static Long decode(String encodedId) throws IllegalArgumentException {
                long product;
                try {
                    product = Long.parseLong(StringUtils.reverse(encodedId), 35);
                } catch (Exception e) {
                    throw new IllegalArgumentException();
                }
                if (0 != product % multiplier || product < 0) {
                    throw new IllegalArgumentException();
                }
                return product / multiplier;
            }
        }

    Read the article

  • Unity Configuration and Same Assembly

    - by tyndall
    I'm currently getting an error trying to resolve my IDataAccess class:

        The value of the property 'type' cannot be parsed. The error is: Could not load file or assembly
        'TestProject' or one of its dependencies. The system cannot find the file specified.
        (C:\Source\TestIoC\src\TestIoC\TestProject\bin\Debug\TestProject.vshost.exe.config line 14)

    This is inside a WPF application project. What is the correct syntax to refer to the assembly you are currently in? Is there a way to do this? I know that in a larger solution I would be pulling types from separate assemblies, so this might not be an issue there, but what is the right way to do it for a small, self-contained test project? Note: I'm only interested in doing the XML config at this time, not the C# (in-code) config.

    UPDATE: see all comments. My XML config:

        <configuration>
          <configSections>
            <section name="unity" type="Microsoft.Practices.Unity.Configuration.UnityConfigurationSection, Microsoft.Practices.Unity.Configuration" />
          </configSections>
          <unity>
            <typeAliases>
              <!-- Lifetime manager types -->
              <typeAlias alias="singleton" type="Microsoft.Practices.Unity.ContainerControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="external" type="Microsoft.Practices.Unity.ExternallyControlledLifetimeManager, Microsoft.Practices.Unity" />
              <typeAlias alias="IDataAccess" type="TestProject.IDataAccess, TestProject" />
              <typeAlias alias="DataAccess" type="TestProject.DataAccess, TestProject" />
            </typeAliases>
            <containers>
              <container name="Services">
                <types>
                  <type type="IDataAccess" mapTo="DataAccess" />
                </types>
              </container>
            </containers>
          </unity>
        </configuration>

    Read the article

  • Looking for a web based, embeddable file manager

    - by Kristi H.
    I'm looking for a web based, embeddable file manager. I haven't found anything suitable through Google or the Stack Overflow archives. Does anyone know of a file manager that meets the following criteria?

    The file manager must…

    - …be embeddable (e.g. Flash or a Java applet)
    - …run from my server (no storing uploads in a remote location)
    - …be licensed for business use (fees are okay)
    - …let me disable uploads. Users shouldn't be able to upload files, only download them
    - …allow me to tag files and/or place them in folders
    - …let me disable authentication or handle it from the backend. My app already has a login system and I don't want to make users log in twice
    - …allow users to select and download multiple files
    - …have an attractive interface

    The file manager should…

    - …have an external config file
    - …display thumbnail previews for files
    - …be skinnable
    - …be actively developed or at least actively maintained

    Thanks for your help. I need a sophisticated file manager and I'd like to avoid writing it from scratch.

    Read the article

  • How to parse text as JavaScript?

    - by Danjah
    This question of mine (currently unanswered) drove me toward finding a better solution to what I'm attempting. My requirements:

    - Chunks of code can be arbitrarily added into a document, without an identifier:

          [div class="thing"]
            [elements... /]
          [/div]

    - The objects are scanned for and found by an external script:

          var things = yd.getElementsBy(function(el){
              return yd.hasClass('thing');
          }, null, document);

    - The objects must be individually configurable; what I have currently is identifier-based:

          [div class="thing" id="thing0"]
            [elements... /]
            [script type="text/javascript"]
              new Thing().init({
                  id: 'thing0'
              });
            [/script]
          [/div]

    So I need to ditch the identifier (id="thing0") so there are no duplicates when more than one chunk of the same code is added to a page, but I still need to be able to configure these objects individually, without an identifier.

    SO! All of that said, I wondered about creating a dynamic global variable within the script block of each added chunk of code, within its own script tag. As each 'thing' is found, I figure it would be legitimate to grab the innerHTML of the script tag and somehow convert that text into a usable JS object. Discuss. Ok, don't discuss if you like, but if you get the drift then feel free to correct my wayward thinking or provide a better solution. Please! d

    Read the article

  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, and so on.

    My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single-repository, locked access (MS SourceSafe)
    2. Single-repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI.

    The last bit of information I should convey is that this application is intended for deployment in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.
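
    Whichever paradigm wins, one way to keep the "single UI" problem manageable is to hide the handful of operations the editor actually needs behind one adapter interface, with one implementation per backend. A minimal sketch, with every name invented for illustration rather than taken from any real library:

        public interface IVersionControlAdapter
        {
            string GetHeadRevision(string objectPath);              // check for updates
            string ReadObject(string objectPath, string revision);  // serialized representation
            void CommitObject(string objectPath, string serialized, string message);
            MergeResult Merge(string objectPath, string baseRevision, string theirRevision, string ours);
        }

        public class MergeResult
        {
            public bool HasConflicts { get; set; }
            public string MergedText { get; set; }
        }

    If the centralized and distributed backends can both be squeezed through this shape (with the DVCS adapter doing a commit-plus-push behind CommitObject), the UI never has to expose the paradigm difference.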

    Read the article

  • Creating Emulated iSCSI Target in a Lab/Testing Environment using Windows Server 2008 R2

    - by Brian McCleary
    We have a single server running Windows Server 2008 with Hyper-V installed, running 5 virtual machines. I have purchased a second Dell R805 server so that we can create a failover cluster with our current R805, which is in production. Right now, our R805 connects via iSCSI to an MD3000i iSCSI SAN. Before we try to roll out the second server and clustering to our production environment, I want to be able to test and "play with" the clustering features in our lab. The problem is that I don't want to spend a couple thousand dollars on another iSCSI SAN server just for testing. I already have two servers in my lab installed with Windows Server 2008 R2 64-bit (one is the R805 and the other is a spare desktop that was lying around), with the Hyper-V role enabled, that should be ready to test with, but I don't have an iSCSI target to use as the Cluster Shared Volume. Is there any way to install, either on a Hyper-V image or on an external spare computer, some sort of emulated iSCSI target? In our lab we obviously don't need a real SAN, just something we can use to test out how to set up the clustering properly outside of our production environment. Any advice is appreciated. FYI - I have read Jose Barret's blog on WUDSS at http://blogs.technet.com/josebda/archive/2008/01/07/installing-the-evaluation-version-of-wudss-2003-refresh-and-the-microsoft-iscsi-software-target-version-3-1-on-a-vm.aspx, but it seems awfully complex. I'm hoping for an easier solution.

    Read the article

  • New Rails deployment half working...not sure why?

    - by Meltemi
    I'm in the final stages of going round trip through the entire Rails cycle: development - test - production (on an external server). I'm very close... but I'm seeing some errors with the production version and don't know enough about Rails' "magic" to troubleshoot them yet.

    All seems well and I can load my app by going to www.mydomain.com/rails, and it seems to work... I can interact with my app and create new objects in my database. However, when I go to www.mydomain.com/rails/ (the difference is the trailing slash) I get a web page that just says "Index from public" on it?!? I don't know where that's coming from; index.html is long removed from public.

    This may or may not be related to why I can't access, say, the first record in one of my classes by calling its controller like so: www.mydomain.com/rails/mycontroller/1 . And yes, there are records in the database. Anyway, something's askew. It works fine on the development box but is only half working on the production server. Anyone know what might be causing this? On the server box (not development) I have:

    - installed Passenger to serve the app via Apache
    - modified my httpd.conf with Passenger's stuff (a headache, but I finally got it to respond)
    - loaded the schema into the production database: rake db:schema:load RAILS_ENV=production

    Read the article

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation: I have a set of utility functions, say 5 to 10 files. Technically they are a static library, cross-platform: SConscript/SConstruct plus a Visual Studio project (not solution).

    Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link to one central place. Sometimes a project uses one file, sometimes two, and some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler).

    The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of each file, this introduces a new version, which leads to a mess when you later (a week later, for example) try to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork).

    How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but it isn't perfect: if you delete the one central storage, it all goes to hell. And hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where the code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do that or whether it is possible with git. So, what do you think?

    Read the article

  • What options exist to get text/list items to appear in columns?

    - by alex
    I have a bunch of HTML markup coming from an external source, and it is mainly h3 elements and ul elements. I want to have the content flow in 3 columns. I know there is something coming in CSS3, but what options do I have to get this content to flow nicely into the 3 columns for the time being? I'm not that concerned about IE6 (as long as it degrades gracefully). Am I stuck using jQuery to parse the markup and chop it up into 3 divs which float? Thank you.

    Update: as per request, here is some of the HTML I am working with:

        <h3>Tourism Industry</h3>
        <ul>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
        </ul>
        <h3>Small Business</h3>
        <ul>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
          <li><a href="">Something</a></li>
        </ul>

    And a whole lot more following the same format.

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.)

    I've worked in environments where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management, and so on.

    I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets.

    The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:

    - 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?'
    - 'What if one of our systems changes and the connector breaks?'
    - 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?'

    With one developer behind these little projects, the answers are invariably:

    - 'Nobody, but...'
    - 'It will break, just like any other application would...'
    - 'I, uh...'

    How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of using those 6 months to consult about what tender process we might use?

    Read the article

  • How can I have a VBA macro perform a Search/Replace in formulas across an entire workbook?

    - by richardtallent
    I distribute an Excel workbook to a number of users, and they are supposed to have a particular macro file pre-installed in their XLSTART folder. If they don't have the macro installed correctly, and they send the workbook back to me, any formulas depending on it include the full path of the macro, e.g.:

        'C:\Documents and Settings\richard.tallent\Application Data\Microsoft\Excel\XLSTART\pcs.xls'!MyMacroFunction()

    I want to create a quick macro I can use to remove the improper path from every formula in the workbook. I've successfully retrieved the special folder using GetSpecialFolder (an external function that is working just fine), but the Replace call shown below throws an "Application-defined or object-defined error":

        Public Sub FixBrokenMacroFormulas()
            Dim badpath As String
            badpath = "'" & GetSpecialFolder(AppDataFolder) & "\Microsoft\Excel\XLSTART\mymacro.xls'!"
            Dim Sheet As Worksheet
            For Each Sheet In ActiveWorkbook.Sheets
                Call Sheet.Activate
                Call Sheet.Select(True)
                Call Selection.Replace(What:=badpath, Replacement:="", LookIn:=xlFormulas)
                ' LookAt:=xlPart, MatchCase:=False, SearchFormat:=False, ReplaceFormat:=False
            Next
        End Sub

    Automating search and replace isn't exactly my forte; what am I doing wrong? I've already commented out some of the parameters that don't seem essential.

    Read the article

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code.

    A simple test case that renders to a viewport matching the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff), but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.

    Question: for a DX9 ortho projection (and identity world space), with a camera perpendicular to and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush-filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting? For this (magnification) case, the DX9 code is using this (MinFilter and MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path is using:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought that "bilinear interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference, which makes sense given the description of "added prefiltering for shrinking").

    Follow-up question: is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.), and this question generally applies to comparing the output of "equivalent" bilinear-interpolated images across any two of these. Thanks!
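
    For reference, here is a minimal sketch of textbook bilinear interpolation over a 2x2 texel neighborhood. Real rasterizers differ in rounding, fixed-point precision, and sample placement (e.g. half-texel offsets), any of which is a plausible source of the off-by-1/256 results described above; this sketch only pins down the math both APIs are nominally computing:

        static class Interp
        {
            // v00..v11 are the four neighboring texel values; fx and fy in [0,1]
            // are the sample's fractional position within that 2x2 cell.
            public static double Bilerp(double v00, double v10, double v01, double v11,
                                        double fx, double fy)
            {
                double top = v00 + (v10 - v00) * fx;     // interpolate along x, top row
                double bottom = v01 + (v11 - v01) * fx;  // interpolate along x, bottom row
                return top + (bottom - top) * fy;        // then interpolate along y
            }
        }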

    Read the article

  • What is the preferred way to update database schemas in multiple production environments

    - by rmarimon
    I am about to install some 20 servers with the same web application in multiple locations, each connected to its own local database. I will be updating the web applications remotely (perhaps using Debian's package manager), and I'm sure I will eventually need to update the database schemas. Since each server could eventually be running a different release of the web application, I need a way to apply incremental changes to the servers.

    I'm thinking of something like this. Let's start with database.schema.1 as the original release of the database, and assume this number increases with each new version of the schema. I could eventually end up with database.schema.17 as the current release; for a new installation, this is the schema to install. It seems to me that I would then need consecutive translations like database.translation.1.2, which converts database.schema.1 into database.schema.2, database.translation.2.3 to convert from 2 to 3, and so on up to 17. Whenever I change a schema I need to alter the database, but perhaps I also need to run some script to update the data, which might be done in SQL or might require an external non-SQL script.

    What is the appropriate way to organize all these files? What is the automatic way to apply those upgrades to the schema? Where do I store the current version number of the schema?
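
    A minimal sketch of the consecutive-translation idea above: a one-row version table plus an ordered series of migration scripts applied until the target version is reached. The schema_version table, script naming, and connection handling are assumptions, and the sketch assumes each script is a plain T-SQL batch (no GO separators):

        using System;
        using System.Data.SqlClient;
        using System.IO;

        class SchemaUpgrader
        {
            // conn is assumed open; scriptDir holds database.translation.N.N+1.sql files.
            public static void UpgradeTo(SqlConnection conn, int targetVersion, string scriptDir)
            {
                int current = GetCurrentVersion(conn);
                for (int v = current; v < targetVersion; v++)
                {
                    string script = File.ReadAllText(
                        Path.Combine(scriptDir, $"database.translation.{v}.{v + 1}.sql"));
                    using (var tx = conn.BeginTransaction())
                    using (var cmd = new SqlCommand(script, conn, tx))
                    {
                        cmd.ExecuteNonQuery();
                        new SqlCommand($"UPDATE schema_version SET version = {v + 1}", conn, tx)
                            .ExecuteNonQuery();
                        tx.Commit(); // each step is atomic, so a failure leaves a known version
                    }
                }
            }

            static int GetCurrentVersion(SqlConnection conn) =>
                (int)new SqlCommand("SELECT version FROM schema_version", conn).ExecuteScalar();
        }

    The version number then lives in the database itself, so a server that is several releases behind simply replays every translation between its stored version and the target.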

    Read the article

  • Return if remote stored procedure fails

    - by njk
    I am in the process of creating a stored procedure. This stored procedure runs local as well as external stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE].

        USE [LOCAL]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[monthlyRollUp]
        AS
        SET NOCOUNT, XACT_ABORT ON

        BEGIN TRY
            EXEC [REMOTE].[DB].[table].[sp]

            -- This transaction should only begin if the remote procedure does not fail
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp1]
            COMMIT
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp2]
            COMMIT
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp3]
            COMMIT
            BEGIN TRAN
                EXEC [LOCAL].[DB].[table].[sp4]
            COMMIT
        END TRY
        BEGIN CATCH
            -- Insert error into log table
            INSERT INTO [dbo].[log_table]
                (stamp, errorNumber, errorSeverity, errorState,
                 errorProcedure, errorLine, errorMessage)
            SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(),
                   ERROR_PROCEDURE(), ERROR_LINE(), ERROR_MESSAGE()
        END CATCH
        GO

    When wrapping the remote procedure call in a transaction, it throws this error:

        OLE DB provider ... returned message "The partner transaction manager has disabled its support
        for remote/network transactions.".

    I get that I'm unable to run a transaction locally for a remote procedure. How can I ensure that this procedure will exit and roll back if any part of it fails?

    Read the article

  • Unobtrusive, self-hosted comments function to put onto existing web pages

    - by Pekka
    I am building a new site which will consist of a mix of dynamic and static pages. I would like to add commenting functionality to those pages with as little work as possible, and I'm curious whether such a solution exists in PHP. The ideal set of features:

    - Completely independent from the surrounding page/site: PHP code gets dropped into the page, a page ID is added, done
    - Simple "write a comment" form
    - Comments for each page are displayed using a PHP function
    - Nice, clean output of <ul><li>... that can be styled by the surrounding site
    - Optional Captcha
    - Optional Gravatar sensitivity
    - Minimalistic administration area to moderate/delete comments; no ACL; can be protected using .htaccess

    The ideal integration would be like this:

        <?php show_comments("my_page_name"); ?>

    This would (1) display a form to add a new comment that gets automatically associated with my_page_name, and (2) display all comments that were made through this form using this ID. Does anybody know a solution like this?

    Bounty: I am setting up a bounty because while there were some good suggestions, they all point to external services. I'm really curious to see whether there isn't anything self-hosted around. If this doesn't exist yet, it sure would be great to see as an Open Source project.

    Read the article

  • PySide Qt script doesn't launch from Spyder but works from shell

    - by Maxim Zaslavsky
    I have a weird bug in my project that uses PySide for its Qt GUI, and in response I'm trying to test with simpler code that sets up the environment. Here is the code I am testing with: http://stackoverflow.com/a/6906552/130164

    When I launch that from my shell (python test.py), it works perfectly. However, when I run that script in Spyder, I get the following error:

        Traceback (most recent call last):
          File "/home/test/Desktop/test/test.py", line 31, in <module>
            app = QtGui.QApplication(sys.argv)
        RuntimeError: A QApplication instance already exists.

    If it helps, I also get the following warning:

        /usr/lib/pymodules/python2.6/matplotlib/__init__.py:835: UserWarning: This call to matplotlib.use()
        has no effect because the the backend has already been chosen; matplotlib.use() must be called
        *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time.

    Why does that code work when launched from my shell but not from Spyder?

    Update: Mata answered that the problem happens because Spyder uses Qt, which makes sense. For now, I've set up execution in Spyder using the "Execute in an external system terminal" option, which doesn't cause errors but doesn't allow debugging, either. Does Spyder have any built-in workarounds to this?

    Read the article

  • Is private member hacking defined behaviour?

    - by ereOn
    Hi. Let's say I have the following class:

        class BritneySpears
        {
        public:
            int getValue() { return m_value; }

        private:
            int m_value;
        };

    It comes from an external library that I can't change. I obviously can't change the value of m_value, only read it. Even subclassing BritneySpears won't work. What if I define the following class:

        class AshtonKutcher
        {
        public:
            int getValue() { return m_value; }

        public:
            int m_value;
        };

    And then do:

        BritneySpears b;

        // Here comes the ugly hack
        AshtonKutcher* a = reinterpret_cast<AshtonKutcher*>(&b);
        a->m_value = 17;

        // Print out the value
        std::cout << b.getValue() << std::endl;

    I know this is bad practice. But just out of curiosity: is this guaranteed to work? Is it defined behaviour? Bonus question: have you ever had to use such an ugly hack? Thanks!

    Read the article
