Search Results

Search found 3551 results on 143 pages for 'canonical sources'.


  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view in grid results mode is 65535. (It is even less in text results mode.) Sometimes I need to see something beyond that range. Using SQL Server 2005 databases, I often used the trick of converting the column to XML, because SSMS lets you view much larger amounts of text that way:

        SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ...

    But now I am using SQL CE, and there is no xml data type. There is still a "Maximum Characters Retrieved XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value?

    [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best to worst:

    1. A solution that works just as easily as the XML trick in SQL CE. That is, a single function (convert, cast, etc.) that does the job.
    2. A not-too-invasive way to hack SSMS into displaying more text in the results.
    3. An equivalent SQL query (perhaps something that creatively uses SUBSTRING to generate multiple ad-hoc columns?) to see the results.

    The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?
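    As a concrete starting point for the SUBSTRING idea in point 3, a sketch (untested against SQL CE; the column and table names are the question's placeholders, and SQL CE may refuse SUBSTRING on ntext, in which case this only helps for long nvarchar columns):

        SELECT SUBSTRING(MyCol,     1, 4000) AS Part1,
               SUBSTRING(MyCol,  4001, 4000) AS Part2,
               SUBSTRING(MyCol,  8001, 4000) AS Part3,
               SUBSTRING(MyCol, 12001, 4000) AS Part4
        FROM MyTable
        WHERE ...

    Each slice stays safely under the SSMS display cap, at the cost of having to guess roughly how long the value can get.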


  • Programmatically convert *.odt file to MS Word *.doc file using an OpenOffice.org basic macro

    - by Chen Levy
    I am trying to build a reStructuredText to MS Word document tool-chain, so I will be able to keep only the rst sources in version control. So far I have rst2odt.py to convert reStructuredText to OpenOffice.org Writer format. Next I want to use the most recent OpenOffice.org (currently 3.1), which does a pretty decent job of generating a Word 97/2000/XP document, so I wrote the macro:

        sub ConvertToWord(file as string)
            rem get access to the document
            dim document as object
            dim dispatcher as object
            document = ThisComponent.CurrentController.Frame
            dispatcher = createUnoService("com.sun.star.frame.DispatchHelper")

            rem open the .odt file
            dim odf(1) as new com.sun.star.beans.PropertyValue
            odf(0).Name = "URL"
            odf(0).Value = "file://" + file + ".odt"
            odf(1).Name = "FilterName"
            odf(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:Open", "", 0, odf())

            rem save it under the same name with the Word filter
            dim doc(1) as new com.sun.star.beans.PropertyValue
            doc(0).Name = "URL"
            doc(0).Value = "file://" + file + ".doc"
            doc(1).Name = "FilterName"
            doc(1).Value = "MS Word 97"
            dispatcher.executeDispatch(document, ".uno:SaveAs", "", 0, doc())
        end sub

    But when I execute it:

        soffice "macro:///Standard.Module1.ConvertToWord(/path/to/odt_file_wo_ext)"

    I get a "BASIC runtime error. Property or method not found." message on the line:

        document = ThisComponent.CurrentController.Frame

    And when I comment that line out, the above invocation completes without error, but does nothing. I guess I need to somehow set the value of document to a newly created instance, but I don't know how to do that. Or am I going about it in a completely backward way?

    P.S. I will consider JODConverter as a fallback, but I am trying to minimize my dependencies.
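    A sketch of the likely fix (untested): ThisComponent is only set when a document has focus, and when the macro is started from the command line no document is open yet, so the dispatcher approach has nothing to work on. Loading the file explicitly through the Desktop service avoids ThisComponent entirely:

        sub ConvertToWord(file as string)
            dim oDesktop as object
            dim oDoc as object
            dim noArgs()
            dim saveArgs(0) as new com.sun.star.beans.PropertyValue

            rem load the .odt directly instead of relying on ThisComponent
            oDesktop = createUnoService("com.sun.star.frame.Desktop")
            oDoc = oDesktop.loadComponentFromURL("file://" & file & ".odt", "_blank", 0, noArgs())

            rem store it with the Word filter, then close the document
            saveArgs(0).Name = "FilterName"
            saveArgs(0).Value = "MS Word 97"
            oDoc.storeToURL("file://" & file & ".doc", saveArgs())
            oDoc.close(false)
        end sub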


  • Using OpenCV in QTCreator (linking problem)

    - by Jane
    Greetings! I have a problem linking the simplest test program in Qt Creator.

    Code:

        #include <QtCore/QCoreApplication>
        #include <cv.h>
        #include <highgui.h>
        #include <cxcore.hpp>

        using namespace cv;

        int _tmain(int argc, _TCHAR* argv[])
        {
            cv::Mat M(7, 7, CV_32FC2, Scalar(1, 3));
            return 0;
        }

    .pro file:

        QT -= gui
        TARGET = testopencv
        CONFIG += console
        CONFIG -= app_bundle
        TEMPLATE = app
        INCLUDEPATH += C:/OpenCV2_1/include/opencv
        LIBS += C:/OpenCV2_1/lib/cxcore210d.lib \
                C:/OpenCV2_1/lib/cv210d.lib \
                C:/OpenCV2_1/lib/highgui210d.lib \
                C:/OpenCV2_1/lib/cvaux210d.lib
        SOURCES += main.cpp

    I've tried using -L and -l, like:

        LIBS += -LC:/OpenCV2_1/lib -lcxcored

    and a .pri file:

        QMAKE_LIBDIR += C:/OpenCV2_1/lib/Debug
        LIBS += -lcxcore210d \
                -lcv210d \
                -lhighgui210d

    The errors are like:

        debug/main.o:C:\griskin\test\app\testopencv/../../../../OpenCV2_1/include/opencv/cxcore.hpp:97: undefined reference to cv::format(char const*, ...)'

    Could anyone help me? Thanks! It works in Visual Studio, but I need it to work in Qt Creator.
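    One likely culprit worth ruling out: on Windows, Qt Creator builds with MinGW by default, and MinGW's g++ cannot link against C++ .lib files produced by Visual Studio (which is what the OpenCV 2.1 installer ships), hence undefined references even though the same libraries work in Visual Studio. A sketch of the .pro fragment, assuming OpenCV has been rebuilt with MinGW (paths as in the question):

        INCLUDEPATH += C:/OpenCV2_1/include/opencv
        LIBS += -LC:/OpenCV2_1/lib -lcxcore210d -lcv210d -lhighgui210d

    Alternatively, switching the Qt Creator toolchain to the Visual Studio compiler lets the existing .lib files link unchanged.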


  • Schliemann's method of programming language learning

    - by DVK
    Background: the 19th-century German archeologist Heinrich Schliemann was of course famous for his successful quest to find and excavate the city of Troy (an actual archeological site for the Troy of Homer's Iliad). However, he is just as famous for being an astonishing learner of languages - within the space of two years, he taught himself fluent Dutch, English, French, Spanish, Italian and Portuguese, and later went on to learn seven more, including both modern and ancient Greek. One of the methods he famously used was comparison of a known text: take a book in a language you are fluent in, take a good translation of the same book in a language you wish to learn, and go over them in parallel. (Various sources cite the book Schliemann used as the Bible or, as the link above states, a novel.)

    Now, for the actual question. Has anyone used (or heard of) an equivalent of Schliemann's method for learning a new programming language? E.g. instead of basing the learning on references and tutorials, take a somewhat comprehensive set of programs known to have high-quality code in both languages, implementing similar/identical algorithms, and learn by comparing them? I'm curious about personal experiences of applying such an approach, references to something published, or the existence of codebases which could be used for such an approach. What got me thinking about the idea was Project Euler and some code snippets I saw on SO, in C++, Perl and Lisp.


  • TWAIN scanning components for Delphi.

    - by Larry Lustig
    I need to add TWAIN scanning to a Delphi application and am having trouble locating an off-the-shelf component to do so. I downloaded TDelphiTwain but, when used in D2010 on Windows Vista, it does not appear to recognize any TWAIN sources on my system. I also tried the trial version of Twain Toolkit for Delphi from MCM, which has the advantage of being currently updated (DelphiTwain's last update was in 2004), but the Twain Toolkit will not even compile on my system (I think some of the DCUs are out of date). Can anyone suggest a simple method of getting TWAIN scanning into my Delphi application?

    UPDATE: Using vcldeveloper's update to DelphiTwain (see below) I was able to get this working. I also discovered that Envision Image Library supports TWAIN scanning as well as assisting in handling multi-page TIFFs, and has been updated for compatibility with D2010.

    Late-breaking UPDATE: VCLDeveloper's code, below, works fine. However, I settled on the Envision library, which includes the ability to easily create multi-page TIFF files. I got the Envision scanning, including multi-page file handling, implemented in a few hours.


  • How To Create Your Own x86 Operating System for Modern PC Computers

    - by mudge
    I'd like to create a new operating system for x86 PCs. I'd like it to be 64-bit, but possibly run as 32-bit as well. I have these kinds of questions:

    1. What kinds of things do you start working on first? Knowing where to start in writing your own operating system seems to me to be a tricky subject, so I am interested in your input (a minimal starting-point sketch follows below).
    2. Generally, how do you go about making your own 32-bit/64-bit operating system, and what are good resources with useful information about writing your own operating system for x86 computers? I don't care how old sources are, as long as they are still relevant and useful to what I am doing.
    3. I know that I will want kernel drivers that access peripheral hardware directly. Where should I look for advice and documentation on programming and understanding the interface to the peripheral hardware the operating system will communicate with? I will need to understand how the operating system receives input from and interacts with keyboards, mice, monitors, hard drives, USB, etc. This is probably the area I know least about. I have the Intel instruction set manuals and have been getting more familiar with assembly programming, so the CPU side of things is what I know the most about.
    4. At this point I'm thinking that I'd like to implement the Linux system calls within my operating system, so that programs that run on Linux can run on it, and I want my operating system to use the ELF binary format. I wonder what obstacles I have to overcome to achieve this Linux compatibility. Are the main things implementing the system calls that Linux provides and using the ELF format? What else?
    5. I am also interested in people's thoughts about why it might, or might not, be a good idea to make your own operating system.

    Thank you for any input.
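    For a sense of scale on the "where to start" question, a minimal freestanding kernel entry point in C (a sketch only: it assumes a Multiboot-compliant loader such as GRUB has already switched the CPU into protected mode, and it omits the Multiboot header, boot stub and linker script that a real build needs):

        /* Write a string straight into the VGA text buffer at 0xB8000. */
        void kernel_main(void)
        {
            volatile unsigned short *vga = (unsigned short *)0xB8000;
            const char *msg = "Hello from my kernel";
            for (int i = 0; msg[i] != '\0'; ++i)
                vga[i] = (unsigned short)(0x0F00 | msg[i]); /* white on black */
            for (;;) { } /* hang instead of returning to nowhere */
        }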


  • Bluetooth development on Windows Mobile 6 in C#

    - by cheesebunz
    Hi everyone, I recently started on a project which is a puzzle slider game. The application will use Bluetooth, and I'm working on the mobile, a Samsung Omnia i900. This is how my application will work: any user with this Samsung device plays the game and starts sliding the tiles. There is an option to search for and connect to other users with the same device and application, so that they can solve the puzzle together. Right now I'm working on the Bluetooth part, but I am still new to the API. I'm using the 32feet.NET (inthehandpersonal.net) class library, while encountering many difficulties. I am able to search for devices by using:

        private void btnSearch_Click(object sender, EventArgs e)
        {
            BluetoothRadio.PrimaryRadio.Mode = RadioMode.Discoverable;
            BluetoothRadio myRadio = BluetoothRadio.PrimaryRadio;
            lblSearch.Text = "" + myRadio.LocalAddress.ToString();
            bluetoothClient = new BluetoothClient();
            Cursor.Current = Cursors.WaitCursor;
            BluetoothDeviceInfo[] bluetoothDeviceInfo = { };
            bluetoothDeviceInfo = bluetoothClient.DiscoverDevices(10);
            comboBox1.DataSource = bluetoothDeviceInfo;
            comboBox1.DisplayMember = "DeviceName";
            comboBox1.ValueMember = "DeviceAddress";
            comboBox1.Focus();
            Cursor.Current = Cursors.Default;
        }

    Well, to be honest this is a rip-off from some sources I found on the internet, but I do understand this part. Next I went on to trying to send a simple "testing.txt" file, and I'm stuck at it. I think I will be using something like OBEX and ObexWebRequest, ObexWebResponse, Uri, etc. Could anyone explain these in simple terms for me, so that I can continue with pairing and the rest of the Bluetooth development? Sorry for making it this long; I really appreciate anyone taking the time to read it :). Hope alanM sees this :) - I'm using their Bluetooth library.
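    For the file-push part, a hedged sketch using 32feet.NET's OBEX support (written from the library's documentation rather than tested on the Omnia; the address and paths are placeholders):

        using System;
        using System.IO;
        using InTheHand.Net;

        static class ObexPush
        {
            // Push a local file to a peer; deviceAddress is 12 hex digits.
            public static void Send(string deviceAddress, string localPath, string remoteName)
            {
                var request = new ObexWebRequest(
                    new Uri("obex://" + deviceAddress + "/" + remoteName));
                using (Stream dest = request.GetRequestStream())
                using (FileStream src = File.OpenRead(localPath))
                {
                    byte[] buffer = new byte[4096];
                    int read;
                    while ((read = src.Read(buffer, 0, buffer.Length)) > 0)
                        dest.Write(buffer, 0, read);
                }
                // StatusCode on the response says whether the peer accepted the file.
                var response = (ObexWebResponse)request.GetResponse();
                response.Close();
            }
        }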


  • IIS: No Session being handed out, but only in production

    - by Wayne
    I've reproduced this in a simple project - details below. It's a WCF service in ASP.NET compatibility mode. What I'm seeing is that when run on the dev machine (Win7), an HTTP session id is available inside the service operation (HttpContext.Current.Session is non-null). But when deployed to the server (Win2k8R2), I get "No Session". On both machines the app is configured to use the classic app pool, and the app pools themselves are configured identically as far as I can tell. The only differences I can discern between the two applications are that on the dev box, under "Handler Mappings", ISAPI-dll is disabled (not on the server), and on the server there's a spurious handler called "AboMapperCustom-7105160" (which does not exist on the dev box). What should I be looking at next? Am I missing something head-slappingly simple?

    The service is this:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class Service2
        {
            [OperationContract]
            public string DoWork()
            {
                if (HttpContext.Current != null)
                {
                    if (HttpContext.Current.Session != null)
                    {
                        return "SessionId: " + HttpContext.Current.Session.SessionID;
                    }
                    else
                    {
                        return "No Session";
                    }
                }
                else
                {
                    return "No Context";
                }
            }
        }

    The config is:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <configSections>
            <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net, Version=1.2.9.0, Culture=neutral, PublicKeyToken=b32731d11ce58905" />
            <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
                  <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" />
                  <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                  <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                  <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" />
                </sectionGroup>
              </sectionGroup>
            </sectionGroup>
          </configSections>
          <log4net>
            <appender name="LogFile" type="log4net.Appender.RollingFileAppender">
              <file value="C:\Temp\Test.log4net.log" />
              <rollingStyle value="Once" />
              <maxSizeRollBackups value="10" />
              <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%d{ISO8601} [%5t] %-5p %c{1} %m%n" />
              </layout>
            </appender>
            <root>
              <level value="DEBUG" />
              <appender-ref ref="LogFile" />
            </root>
          </log4net>
          <appSettings />
          <connectionStrings />
          <system.web>
            <compilation debug="true">
              <assemblies>
                <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
                <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
                <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
                <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
              </assemblies>
            </compilation>
            <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. -->
            <authentication mode="Windows" />
            <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. -->
            <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm">
              <error statusCode="403" redirect="NoAccess.htm" />
              <error statusCode="404" redirect="FileNotFound.htm" />
            </customErrors>
            <pages>
              <controls>
                <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
                <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              </controls>
            </pages>
            <httpHandlers>
              <remove verb="*" path="*.asmx" />
              <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" />
            </httpHandlers>
            <httpModules>
              <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </httpModules>
          </system.web>
          <system.codedom>
            <compilers>
              <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
                <providerOption name="CompilerVersion" value="v3.5" />
                <providerOption name="WarnAsError" value="false" />
              </compiler>
            </compilers>
          </system.codedom>
          <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. -->
          <system.webServer>
            <validation validateIntegratedModeConfiguration="false" />
            <modules>
              <remove name="ScriptModule" />
              <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </modules>
            <handlers>
              <remove name="WebServiceHandlerFactory-Integrated" />
              <remove name="ScriptHandlerFactory" />
              <remove name="ScriptHandlerFactoryAppServices" />
              <remove name="ScriptResource" />
              <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
              <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
            </handlers>
          </system.webServer>
          <runtime>
            <assemblyBinding appliesTo="v2.0.50727" xmlns="urn:schemas-microsoft-com:asm.v1">
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" />
                <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0" />
              </dependentAssembly>
              <dependentAssembly>
                <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" />
                <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0" />
              </dependentAssembly>
            </assemblyBinding>
          </runtime>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="BasicHttpBinding_Service2" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Windows" />
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
            <behaviors>
              <serviceBehaviors>
                <behavior name="WebApplication3.Service2Behavior">
                  <serviceMetadata httpGetEnabled="true" />
                  <serviceDebug includeExceptionDetailInFaults="false" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
            <services>
              <service behaviorConfiguration="WebApplication3.Service2Behavior" name="WebApplication3.Service2">
                <endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_Service2" contract="WebApplication3.Service2" />
              </service>
            </services>
          </system.serviceModel>
          <system.diagnostics>
            <sources>
              <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true">
                <listeners>
                  <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\Temp\Test2.svclog" />
                </listeners>
              </source>
            </sources>
            <trace autoflush="true" indentsize="4">
              <listeners>
                <add name="traceListener2" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\Temp\Test.log" traceOutputOptions="DateTime" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>

    Testing with a simple console app:

        class Program
        {
            static void Main(string[] args)
            {
                ServiceReference1.Service2Client client = new ServiceReference1.Service2Client();
                Console.WriteLine(client.DoWork());
                Console.ReadKey();
            }
        }


  • WPF Repeater (like) control for collection source??

    - by Sonic Soul
    I have a WPF DataGrid bound to an ObservableCollection. Each item in my collection has a property which is a List. In my row details pane, I would like to write out formatted TextBlocks for each item in this collection. The end result would be something equivalent to:

        <TextBlock Style="{StaticResource NBBOTextBlockStyle}" HorizontalAlignment="Right">
            <TextBlock.Inlines>
                <Run FontWeight="Bold" Text="{Binding Path=Exchanges[0].Name}" />
                <Run FontWeight="Bold" Text="{Binding Path=Exchanges[0].Price}" />
                <LineBreak />
                <Run Foreground="LightGray" Text="{Binding Path=Exchanges[0].Quantity}" />
            </TextBlock.Inlines>
        </TextBlock>
        <TextBlock Style="{StaticResource NBBOTextBlockStyle}">
            <TextBlock.Inlines>
                <Run FontWeight="Bold" Text="{Binding Path=Exchanges[1].Name}" />
                <Run FontWeight="Bold" Text="{Binding Path=Exchanges[1].Price}" />
                <LineBreak />
                <Run Foreground="LightGray" Text="{Binding Path=Exchanges[1].Quantity}" />
            </TextBlock.Inlines>
        </TextBlock>

    and so on, 0-n times. I've tried using ItemsControl for this:

        <ItemsControl ItemsSource="{Binding Path=Exchanges}">
            <DataTemplate>
                <Label>test</Label>
            </DataTemplate>
        </ItemsControl>

    however, this appears to be meant only for more static sources, as it throws the following exception (the collection is not altered after creation):

        ItemsControl: Operation is not valid while ItemsSource is in use. Access and modify elements with ItemsControl.ItemsSource instead.

    Is there another way to achieve this?
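    The exception itself points at the likely fix: a bare DataTemplate placed inside an ItemsControl is treated as an item, so it must be assigned to the ItemTemplate property instead. A sketch using the bindings from the question (note that binding Run.Text only works from WPF 4 on; on 3.5, separate TextBlocks in a StackPanel are the usual substitute):

        <ItemsControl ItemsSource="{Binding Path=Exchanges}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <TextBlock Style="{StaticResource NBBOTextBlockStyle}">
                        <TextBlock.Inlines>
                            <Run FontWeight="Bold" Text="{Binding Path=Name}" />
                            <Run FontWeight="Bold" Text="{Binding Path=Price}" />
                            <LineBreak />
                            <Run Foreground="LightGray" Text="{Binding Path=Quantity}" />
                        </TextBlock.Inlines>
                    </TextBlock>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    Because the template is instantiated once per element of Exchanges, the explicit Exchanges[0], Exchanges[1] indexing disappears.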


  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to avoid using one exception type to cover the same problem in different contexts (e.g. a parameter that is null in system code, which may be affected by an external factor, versus an empty shopping basket). However, the partition between system and business-domain code and using different exception types for each seems very obvious, and hardly making the most of custom exceptions. Related to this: if custom exceptions cover the business exceptions, I could also find all the places which are sources of exceptions at the business-domain level using "Find all references". Is it worth adding exception handling if you check a method's arguments for null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could make the argument null after it has been checked anyway?

    2) What does it mean when people say exceptions should not be used to control the flow of programs, and why not? I assume this is like:

        if (exceptionVariable != null)
        {
        }

    Is it generally good practice to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled by another coder?
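    To make question 2 concrete, a small illustration (the names are invented): the first version abuses a catch block as an if; the second tests the condition directly, which is both cheaper and clearer about intent:

        // Exception used for flow control - the "don't" case.
        int ParsePortBad(string s)
        {
            try { return int.Parse(s); }
            catch (FormatException) { return 8080; }
        }

        // Normal control flow - the condition is tested, not trapped.
        int ParsePortGood(string s)
        {
            int port;
            return int.TryParse(s, out port) ? port : 8080;
        }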


  • Guidelines for using Merge task in SSIS

    - by thursdaysgeek
    I have a table with three fields, one an identity field, and I need to add some new records from a source that has the other two fields. I'm using SSIS, and I think I should use the Merge tool, because one of the sources is not in the local database. But I'm confused by the Merge tool and the proper process.

    I have my one source (an Oracle table), from which I get two fields, well_id and well_name, with a sort task after it, sorting by well_id. I have the destination table (SQL Server), and I'm also using that as a source. It has three fields: well_key (identity field), well_id, and well_name; it is followed by a sort task, sorting on well_id. Both of those are inputs to my Merge task. I was going to output to a temporary table, and then somehow get the new records back into the SQL Server table:

        Oracle Well        SQL Well
             |                 |
             V                 V
        Sort Source        Sort Well
             |                 |
             ----> Merge* <-----
                     |
                     V
              Temp well table

    I suspect this isn't the best way to use this tool, however. What are the proper steps for a merge like this? One of my reasons for questioning this method is that my Merge has an error telling me that "Merge Input 2" must be sorted - but its source is a sort task, so it IS sorted.

    Example data:

        SQL Well (before merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d

        Oracle Well
        well_id  well_name
        123      well k
        292      well c
        311      well y
        344      well t
        439      well d
        532      well j

        SQL Well (after merge)
        well_key  well_id  well_name
        1         123      well k
        2         292      well c
        3         344      well t
        5         439      well d
        6         311      well y
        7         532      well j

    Would it be better to load my Oracle Well to a temporary local file, and then just use a SQL insert statement on it?
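    For the alternative raised at the end, a sketch of the staging-table route (the names are illustrative; the staging table holds the rows landed from Oracle):

        INSERT INTO dbo.Well (well_id, well_name)
        SELECT s.well_id, s.well_name
        FROM dbo.WellStaging AS s
        WHERE NOT EXISTS
            (SELECT 1 FROM dbo.Well AS w WHERE w.well_id = s.well_id);

    This sidesteps the Merge transformation's sort requirements entirely, at the cost of an extra data-flow step to populate the staging table.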


  • JOGL Double Buffering

    - by Bar
    What is the proper way to implement double buffering in JOGL (Java OpenGL)? I am trying to do it with the following code:

        ...
        /* Creating the canvas. */
        GLCapabilities capabilities = new GLCapabilities();
        capabilities.setDoubleBuffered(true);
        GLCanvas canvas = new GLCanvas(capabilities);
        ...

        /* Function display(...), which draws a white rectangle on a black background. */
        public void display(GLAutoDrawable drawable) {
            drawable.swapBuffers();
            gl = drawable.getGL();
            gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            gl.glColor3f(1.0f, 1.0f, 1.0f);
            gl.glBegin(GL.GL_POLYGON);
            gl.glVertex2f(-0.5f, -0.5f);
            gl.glVertex2f(-0.5f, 0.5f);
            gl.glVertex2f(0.5f, 0.5f);
            gl.glVertex2f(0.5f, -0.5f);
            gl.glEnd();
        }
        ...
        /* Other functions are empty. */

    Questions:

    - When I'm resizing the window, I usually get flickering. As I see it, I have a mistake in my double-buffering implementation.
    - I am in doubt where I must place the swapBuffers call - before the drawing, or after it (as many sources say)? As you can see, I call drawable.swapBuffers() before drawing the rectangle; otherwise, I get noise after a resize.

    So what is the appropriate way to do this? Including or omitting the line capabilities.setDoubleBuffered(true) has no effect.
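    A sketch of display() with the manual swap removed: when the GLCapabilities request double buffering, JOGL's auto-swap mode (on by default) exchanges the buffers after display() returns, so calling swapBuffers() by hand, and especially before drawing, is a plausible source of the flicker and noise. Note also that glClearColor is a state setting and belongs before the glClear that uses it:

        public void display(GLAutoDrawable drawable) {
            GL gl = drawable.getGL();
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
            gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
            gl.glColor3f(1.0f, 1.0f, 1.0f);
            gl.glBegin(GL.GL_POLYGON);
            gl.glVertex2f(-0.5f, -0.5f);
            gl.glVertex2f(-0.5f, 0.5f);
            gl.glVertex2f(0.5f, 0.5f);
            gl.glVertex2f(0.5f, -0.5f);
            gl.glEnd();
        }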


  • Examples of localization in Perl using gettext and Locale::TextDomain, with fallback if Locale::TextDomain is not installed

    - by Jakub Narebski
    The "On the state of i18n in Perl" blog post from 26 April 2009 recommends using the Locale::TextDomain module from the libintl-perl distribution for l10n/i18n in Perl. Besides, I have to use gettext anyway, and gettext support in Locale::Messages / Locale::TextDomain is more natural than the gettext emulation in Locale::Maketext.

    The subsection "15.5.18 Perl" in chapter "15 Other Programming Languages" of the GNU gettext manual says:

        Portability: The libintl-perl package is platform independent but is
        not part of the Perl core. The programmer is responsible for
        providing a dummy implementation of the required functions if the
        package is not installed on the target system.

    However, neither of the two examples in examples/hello-perl in the gettext sources (one using the lower-level Locale::Messages, one using the higher-level Locale::TextDomain) includes detecting whether the package is installed on the target system and providing a dummy implementation if it is not.

    What complicates matters (with respect to detecting whether the package is installed) is the following fragment of the Locale::TextDomain manpage:

        SYNOPSIS
            use Locale::TextDomain ('my-package', @locale_dirs);
            use Locale::TextDomain qw (my-package);

        USAGE
            It is crucial to remember that you use Locale::TextDomain(3) as
            specified in the section "SYNOPSIS", that means you have to use
            it, not require it. The module behaves quite differently
            compared to other modules.

    Could you please tell me how one should detect if libintl-perl is present on the target system, and how to provide a dummy fallthrough implementation if it is not installed? Or give examples of programs/modules which do this?
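    A hedged sketch of the usual fallback idiom (untested; it assumes the script runs in package main and only uses __ and __x, and it mimics "use" by pairing require with import inside a BEGIN block, which is exactly the point the manpage's warning turns on, so it needs verifying):

        BEGIN {
            eval {
                require Locale::TextDomain;
                Locale::TextDomain->import('my-package');
            };
            if ($@) {
                # libintl-perl is missing: install no-op stand-ins.
                *__  = sub { $_[0] };
                *__x = sub {
                    my ($msg, %vars) = @_;
                    $msg =~ s/\{(\w+)\}/defined $vars{$1} ? $vars{$1} : "{$1}"/ge;
                    return $msg;
                };
            }
        }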


  • What GUI tool can I use for building applications that interact with multiple APIs?

    - by tarasm
    My company uses a lot of different web services on a daily basis. I find that I repeat the same steps over and over again. For example, when I start a new project, I perform the following actions:

    1. Create a new client & project in Liquid Planner.
    2. Create a new client in Freshbooks.
    3. Create a project in Github or Codebasehq.
    4. Add the developers who are going to be working on this project to Codebasehq or Github.
    5. Create tasks in the ticketing system on Codebasehq and tasks in Liquid Planner.

    This is just when starting new projects. When I have to track tasks, it gets even trickier, because I have to monitor tasks in two different systems. So my question is: is there a tool that I can use to create a web service that will automate some of these interactions? Ideally, it would be something that would allow me to work graphically with the web service APIs and produce an executable that I can run on a server. I don't want to build it from scratch. I know I can do it with Python or RoR, but I don't want to get that low-level. I would like to add my sources and pass data around from one service to another. What could I use? Any suggestions?


  • What do I do about recurring billing?

    - by phidah
    This might be a subjective question, but I'll give it a go. There are already a number of questions on SO that revolve around subscription billing management. I am currently working on a SaaS solution that will require a fully automated billing system. What I am not looking for when asking this question is advice on implementing against a specific payment gateway or stuff like that. Instead, I'd like advice on what kind of approach to take.

    The functionality that I need is a system that can handle upgrades, downgrades, recurring billing, cancellations, etc. Initially for one product only, but it might over time be a requirement that the system can handle multiple products (by products I mean fundamentally different products, not different variations of the same product). As I see it, there are a number of possible approaches when you need a solution like this:

    - Code a billing server yourself that supports this and is decoupled from each product, so that it can handle multiple independent products.
    - Use a hosted solution like Recurly, Chargify, Spreedly or CheddarGetter.

    The advantage of using a hosted solution is obviously that you don't need PCI certification, the concern is outsourced, and it is a lot faster to get up and running. These advantages come at a cost, however: the most important support function for your product, i.e. the billing, is not in your control, and additionally you have less control and flexibility.

    What would you do? If we look beyond the PCI requirements, I would definitely prefer to have a system coded in-house that could do this kind of job. On the other hand, I've heard from numerous sources that coding a system like this is a pain. Any advice is highly appreciated. Also, if you advise coding it yourself, any experiences on how to do it, or any open-source projects (no matter the language; what I'm after is not the code but the structure) that I could benefit from, would really mean a lot. Thanks in advance for your inputs! :-)


  • What is the difference between the Lego Mindstorms 1.0 and 2.0

    - by Gavimoss
    I am thinking about buying a Mindstorms kit (I don't currently own one, but I have used 1.0 at university) and I am a bit unsure about the benefits of 2.0 over 1.0. I have seen other posts on the subject, all generally saying 2.0 is better, but I have some more specific questions about this that I can't seem to find any answers to.

    Apart from the different Lego pieces and sensors you get with the 2.0 kit, is there any difference between a 1.0 NXT brick and a 2.0 NXT brick? From what I can determine from other sources, they are the same except for the firmware installed. Am I right in saying I could buy a 1.0 kit and install the same firmware that comes with the 2.0 kit, and the bricks would be the same? Or is the 1.0 brick not compatible with the 2.0 firmware?

    Also, I plan to use a different programming language like C or Java, so I need to install specific firmware for that anyway, like librcx or leJOS, right? So if I am using C or Java, as opposed to the provided Lego coding methods, it doesn't matter whether I am using 1.0 or 2.0 (except for the Lego pieces in the kit). Am I right?

    In a nutshell: assuming I am using librcx or leJOS and I don't care about the sensors and Lego pieces included, is there any benefit to buying a 2.0 kit over the 1.0 kit? Thanks in advance.


  • RapidXML, reading and saving values

    - by Layne
    Hello, I've worked my way through the RapidXML sources and managed to read some values. Now I want to change them and save them to my XML file.

    Parsing the file and setting a pointer:

        void SettingsHandler::getConfigFile() {
            pcSourceConfig = parsing->readFileInChar(CONF);
            cfg.parse<0>(pcSourceConfig);
        }

    Reading values from the XML:

        void SettingsHandler::getDefinitions() {
            SettingsHandler::getConfigFile();
            stGeneral = cfg.first_node("settings")->value(); /* stGeneral = 60 */
        }

    Changing values and saving to the file:

        void SettingsHandler::setDefinitions() {
            SettingsHandler::getConfigFile();
            stGeneral = "10";
            cfg.first_node("settings")->value(stGeneral.c_str());

            std::stringstream sStream;
            sStream << *cfg.first_node();

            std::ofstream ofFileToWrite;
            ofFileToWrite.open(CONF, std::ios::trunc);
            ofFileToWrite << "<?xml version=\"1.0\"?>\n" << sStream.str() << '\0';
            ofFileToWrite.close();
        }

    Reading the file into a buffer:

        char* Parser::readFileInChar(const char* p_pccFile) {
            char* cpBuffer;
            size_t sSize;
            std::ifstream ifFileToRead;
            ifFileToRead.open(p_pccFile, std::ios::binary);
            sSize = Parser::getFileLength(&ifFileToRead);
            cpBuffer = new char[sSize];
            ifFileToRead.read(cpBuffer, sSize);
            ifFileToRead.close();
            return cpBuffer;
        }

    However, it's not possible to save the new value. My code just saves the original file, with a value of "60" where it should be "10".

    Rgds, Layne
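    A hedged sketch of what usually fixes this: RapidXML stores element text both in data-node children and in value(), and when printing it prefers the data nodes, so a value set on an element parsed with flags 0 can be silently ignored. Parsing with parse_no_data_nodes avoids that, and allocate_string keeps the replacement text alive inside the document's own pool:

        // In getConfigFile(): keep element text out of data nodes.
        cfg.parse<rapidxml::parse_no_data_nodes>(pcSourceConfig);

        // In setDefinitions(): allocate the replacement inside the document.
        rapidxml::xml_node<>* settings = cfg.first_node("settings");
        settings->value(cfg.allocate_string("10"));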


  • Pasting formatted Excel range into Outlook message

    - by Steph
    Hi everyone, I am using Office 2007, and I would like to use VBA to paste a range of formatted Excel cells into an Outlook message and then mail the message. The following code (which I lifted from various sources) runs without error and then sends an empty message; the paste does not work. Can anyone see the problem and, better yet, help with a solution? Thanks, -Steph

        Sub SendMessage(SubjectText As String, Importance As OlImportance)
            Dim objOutlook As Outlook.Application
            Dim objOutlookMsg As Outlook.MailItem
            Dim objOutlookRecip As Outlook.Recipient
            Dim objOutlookAttach As Outlook.Attachment
            Dim iAddr As Integer, Col As Integer, SendLink As Boolean
            'Dim Doc As Word.Document, wdRn As Word.Range
            Dim Doc As Object, wdRn As Object

            ' Create the Outlook session.
            Set objOutlook = CreateObject("Outlook.Application")
            ' Create the message.
            Set objOutlookMsg = objOutlook.CreateItem(olMailItem)

            Set Doc = objOutlookMsg.GetInspector.WordEditor
            'Set Doc = objOutlookMsg.ActiveInspector.WordEditor
            Set wdRn = Doc.Range
            wdRn.Paste

            Set objOutlookRecip = objOutlookMsg.Recipients.Add("[email protected]")
            objOutlookRecip.Type = 1
            objOutlookMsg.Subject = SubjectText
            objOutlookMsg.Importance = Importance

            With objOutlookMsg
                For Each objOutlookRecip In .Recipients
                    objOutlookRecip.Resolve
                    ' Set the Subject, Body, and Importance of the message.
                    '.Subject = "Coverage Requests"
                    'objDrafts.GetFromClipboard
                Next
                .Send
            End With

            Set objOutlookMsg = Nothing
            Set objOutlook = Nothing
        End Sub
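    A hedged sketch of the probable fix: nothing is ever copied to the clipboard before wdRn.Paste runs, so the paste has nothing to insert. Copying the range first (the sheet and range names are placeholders) should populate the body:

        Worksheets("Sheet1").Range("A1:D10").Copy
        Set objOutlookMsg = objOutlook.CreateItem(olMailItem)
        Set Doc = objOutlookMsg.GetInspector.WordEditor
        Set wdRn = Doc.Range
        wdRn.Paste
        objOutlookMsg.Display   ' inspect the result before switching to .Send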


  • Bamboo to Build Specific SVN Revision

    - by Anton Gogolev
    Hi! Imagine there's a project in Bamboo with two build plans: Staging Deployment (SD) and Production Deployment (PD). Building SD checks out the latest sources, builds them, and deploys the web site to a staging server. Currently, PD does exactly the same, namely deploys the latest version of the web site to a production server. Clearly, this is not very good: I want to be able to deploy the exact same version of the web site that was previously deployed on the staging server, not the latest one.

    To illustrate: suppose we're at r101 in the SVN repo. Clicking "Build SD" will deploy a web site version, say 2.1.0.101, to the staging server. Now we commit a breaking change and end up at r102. Now I want to deploy to the production server. If I hit "Build PD", Bamboo will happily check out r102 and build it, resulting in version 2.1.0.102 being deployed to the production server. What I want it to do, however, is build and deploy the version which was previously built by the SD plan (that is, 2.1.0.101). Of course I can make the SD plan tag the latest successful build as tags/builds/latest, but I would rather have Bamboo itself handle that.


  • Command /Developer/usr/bin/dsymutil failed with exit code 10

    - by Evan Robinson
    I am getting the above error message randomly (so far as I can tell) on iPhone projects. Occasionally it will go away upon either:

    - Clean
    - Restart Xcode
    - Reboot
    - Reinstall Xcode

    But sometimes it won't. When it won't, the only solution I have found is to take all the source material, import it into a new project, and then redo all the connections in IB. Then I'm good until it strikes again. Anybody have any suggestions?

    [update 20091030] I have tried building both debug and release versions, both full and lite versions. I've also tried switching the debug symbols from DWARF with external dSYM file to DWARF and to stabs. Clean builds in all formats make no difference. Permission repairs change nothing. Setting up a new user has no effect: same error on the builds. Thanks for the suggestions!

    [Update 20091031] Here's an easier and (apparently) reliable workaround. It hinges upon the discovery that the problem is linked to a target, not a project:

    1. In the same project file, create a new target.
    2. Option-drag (copy) all the files from the BAD target's 'Copy Bundle Resources' folder to the NEW target's 'Copy Bundle Resources' folder.
    3. Repeat (2) with 'Compile Sources' and 'Link Binary With Libraries'.
    4. Duplicate the Info.plist file for the BAD target and name it correctly for the NEW target.
    5. Build the NEW target!

    [Update 20100222] Apparently an IDE bug, now apparently fixed, although Apple does not allow direct access to the original bug, of which mine was marked a duplicate. I can no longer reproduce this behaviour, so hopefully it is dead, dead, dead.


  • How to configure hbm2java and hbm2dao to add packagename to generated classes

    - by mmm
    Hi, I'm trying to configure hbm2java with Maven to generate POJO classes and DAO objects. One of the issues I'm dealing with is that package names aren't generated. I'm using the following POM for that:

        <execution>
            <id>hbm2java</id>
            <phase>generate-sources</phase>
            <goals>
                <goal>hbm2java</goal>
            </goals>
            <inherited>false</inherited>
            <configuration>
                <components>
                    <component>
                        <name>hbm2java</name>
                        <implementation>configuration</implementation>
                    </component>
                </components>
                <componentProperties>
                    <packagename>package.name</packagename>
                    <configurationfile>target/hibernate3/generated-mappings/hibernane.cfg.xml</configurationfile>
                </componentProperties>
            </configuration>
        </execution>

    Yet the generated code begins with the following:

        // default package
        // Generated 2010-05-17 13:11:51 by Hibernate Tools 3.2.2.GA
        /**
         * Messages generated by hbm2java
         */
        public class Messages implements java.io.Serializable {

    Is there a way to force Maven to generate the package part as defined in packagename?


  • What are good NoSQL and non-relational database solutions for audit/logging database

    - by Juha Syrjälä
    What would be a suitable database for the following? I am especially interested in your experiences with non-relational NoSQL systems. Are they any good for this kind of usage? Which system have you used and would recommend, or should I go with a normal relational database (DB2)?

    I need to gather audit-trail/logging-type information from a bunch of sources to a centralized server, where I could generate reports efficiently and examine what is happening in the system. Typically an audit/logging event would always consist of some mandatory fields, for example:

    - a globally unique id (somehow generated by the program that generated the event)
    - a timestamp
    - an event type (i.e. user logged in, error happened, etc.)
    - some information about the source (server1, server2)

    Additionally, the event could contain 0-N key-value pairs, where a value might be up to a few kilobytes of text.

    Requirements:

    - It must run on a Linux server.
    - It should work with a high amount of data (100GB, for example).
    - It should support some kind of efficient full-text search.
    - It should allow concurrent reading and writing.
    - It should be flexible enough to add new event types and add/remove key-value pairs in new events. Flexible = no changes should be required to the database schema; the application generating the events can just add new event types/new fields as needed.
    - It should be efficient to make queries against the database, for reporting and exploring what happened. For example: how many events with type=X occurred in some time period? Get all events where field A has value Y. Get all events with type X, where field A has value 1, field B is not 2, and the event occurred in the last 24h.


  • ADODB Connection String: Workgroup Information file is Missing?

    - by Mohgeroth
    I have a few data sources in Access that I need to connect to programmatically, to do things with behind the scenes while keeping visibility away from users. Said data source has a password, which I'm going to call 'pass' here. Using this connection method, I get an error when attempting to use the Open method:

        Dim conn As ADODB.Connection
        Set ROBBERS.conn = New ADODB.Connection
        conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" _
            & "Data Source=\\pep-home\projects\billing\autobilling\DPBilling2.mdb;" _
            & "Jet OLEDB:Database Password=pass;", "admin", "pass"

        "Cannot start your application. The workgroup information file is missing or opened exclusively by another user."

    Because we are planning to move to 2007, we are not using, and have never used, a workgroup information file through Access. The database password on the data source was set through the Set Database Password command, which had to be done on an exclusive open.

    I've spent a good while changing around my connection options and where to put the passwords, and either I cannot find the right format, or (why I'm asking here) there may be some other unknown that I must set up to do this. Anyone out there got some useful information?
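    A hedged sketch of the usual fix: a database password is not user-level security, but passing "admin"/"pass" as the user id and password arguments to Open makes Jet look for a workgroup file, which matches the error seen. Supplying only the Jet OLEDB:Database Password property avoids that:

        Dim conn As ADODB.Connection
        Set conn = New ADODB.Connection
        conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" _
            & "Data Source=\\pep-home\projects\billing\autobilling\DPBilling2.mdb;" _
            & "Jet OLEDB:Database Password=pass;"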


  • mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use Gmail to send an e-mail after each push and/or commit.

    .hgrc:

        [paths]
        default = ssh://www.domain.com/repo/hg

        [ui]
        username = intern <[email protected]>
        ssh="C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"

        [extensions]
        hgext.notify =

        [hooks]
        changegroup.notify = python:hgext.notify.hook
        incoming.notify = python:hgext.notify.hook

        [email]
        from = [email protected]

        [smtp]
        host = smtp.gmail.com
        username = [email protected]
        password = sure
        port = 587
        tls = true

        [web]
        baseurl = http://dev/...

        [notify]
        sources = serve push pull bundle
        test = False
        config = /path/to/subscription/file
        template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
        maxdiff = 300

    Error: the incoming command failed for P/project.

        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!
        , error code: -1
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!


  • Database structure - is mySQL the right choice?

    - by Industrial
    Hi everyone, we are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our highly beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters).

    Based on the various references and sources available, we have made lists of the pros and cons of all the major and well-known database patterns that address this. After comparing them, we have come up with two final alternatives:

    - EAV (entity-attribute-value model). Pros: the database is used for all sorting. Cons: all related queries will include a number of joins between multiple tables in order to complete the collection of data.
    - SLOB (serialized LOB, also known as Facade?). Pros: very flexible; keeps the number of necessary joins low compared to an EAV design pattern; easy to update/add/remove data from each product. Cons: all sorting will be done by the application instead of the database; will use lots of resources (memory?) when big datasets are processed by a large number of users.

    Our main questions:

    - Which pattern/structure would you use, or maybe even a different solution?
    - Are there better databases besides MySQL available nowadays to accomplish what we want?

    Thanks a lot!

    Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters
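    For readers comparing the two, a minimal sketch of the EAV shape under discussion (the names are illustrative), showing why sorting stays in the database but costs a pair of joins per attribute:

        CREATE TABLE product (
            id  INT PRIMARY KEY,
            sku VARCHAR(64)
        );
        CREATE TABLE attribute (
            id   INT PRIMARY KEY,
            name VARCHAR(64)
        );
        CREATE TABLE product_value (
            product_id   INT NOT NULL,
            attribute_id INT NOT NULL,
            value        VARCHAR(255),
            PRIMARY KEY (product_id, attribute_id)
        );

        -- "Red products, sorted by weight": two joins per attribute used.
        SELECT p.sku
        FROM product p
        JOIN product_value c  ON c.product_id = p.id
        JOIN attribute     ca ON ca.id = c.attribute_id AND = 'colour'
        JOIN product_value w  ON w.product_id = p.id
        JOIN attribute     wa ON wa.id = w.attribute_id AND = 'weight'
        WHERE c.value = 'red'
        ORDER BY w.value;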

