Search Results

Search found 2655 results on 107 pages for 'conversion'.


  • Error in creating template class

    - by Luciano
    I found this vector template class implementation, but it doesn't compile on XCode. Header file: // File: myvector.h #ifndef _myvector_h #define _myvector_h template <typename ElemType> class MyVector { public: MyVector(); ~MyVector(); int size(); void add(ElemType s); ElemType getAt(int index); private: ElemType *arr; int numUsed, numAllocated; void doubleCapacity(); }; #include "myvector.cpp" #endif Implementation file: // File: myvector.cpp #include <iostream> #include "myvector.h" template <typename ElemType> MyVector<ElemType>::MyVector() { arr = new ElemType[2]; numAllocated = 2; numUsed = 0; } template <typename ElemType> MyVector<ElemType>::~MyVector() { delete[] arr; } template <typename ElemType> int MyVector<ElemType>::size() { return numUsed; } template <typename ElemType> ElemType MyVector<ElemType>::getAt(int index) { if (index < 0 || index >= size()) { std::cerr << "Out of Bounds"; abort(); } return arr[index]; } template <typename ElemType> void MyVector<ElemType>::add(ElemType s) { if (numUsed == numAllocated) doubleCapacity(); arr[numUsed++] = s; } template <typename ElemType> void MyVector<ElemType>::doubleCapacity() { ElemType *bigger = new ElemType[numAllocated*2]; for (int i = 0; i < numUsed; i++) bigger[i] = arr[i]; delete[] arr; arr = bigger; numAllocated*= 2; } If I try to compile as is, I get the following error: "Redefinition of 'MyVector::MyVector()'" The same error is displayed for every member function (.cpp file). In order to fix this, I removed the '#include "myvector.h"' on the .cpp file, but now I get a new error: "Expected constructor, destructor, or type conversion before '<' token". A similar error is displayed for every member as well. Interestingly enough, if I move all the .cpp code to the header file, it compiles fine. Does that mean I can't implement template classes in separate files?
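
    A common workaround for this (not part of the original question) is to keep the member definitions in a file that is included from the header but never compiled as its own translation unit; renaming myvector.cpp to something like myvector.tpp and removing it from the target's compile sources avoids both the redefinition error and the "expected constructor" error. A minimal sketch of the layout, assuming that rename:

      // myvector.h
      #ifndef _myvector_h
      #define _myvector_h

      template <typename ElemType>
      class MyVector {
      public:
          MyVector();
          ~MyVector();
          int size();
          void add(ElemType s);
          ElemType getAt(int index);
      private:
          ElemType *arr;
          int numUsed, numAllocated;
          void doubleCapacity();
      };

      #include "myvector.tpp"   // definitions pulled in here, not built separately
      #endif

      // myvector.tpp -- no #include "myvector.h", and not listed among the sources to compile
      template <typename ElemType>
      MyVector<ElemType>::MyVector() { arr = new ElemType[2]; numAllocated = 2; numUsed = 0; }
      // ...remaining member definitions exactly as in the question...

    The alternative is explicit instantiation (e.g. template class MyVector<int>;) in a real .cpp file, at the cost of fixing the usable element types up front.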

  • wrapping boost::ublas with swig

    - by leon
    I am trying to pass data around the numpy and boost::ublas layers. I have written an ultra thin wrapper because swig cannot parse ublas' header correctly. The code is shown below #include <boost/numeric/ublas/vector.hpp> #include <boost/numeric/ublas/matrix.hpp> #include <boost/lexical_cast.hpp> #include <algorithm> #include <sstream> #include <string> using std::copy; using namespace boost; typedef boost::numeric::ublas::matrix<double> dm; typedef boost::numeric::ublas::vector<double> dv; class dvector : public dv{ public: dvector(const int rhs):dv(rhs){;}; dvector(); dvector(const int size, double* ptr):dv(size){ copy(ptr, ptr+sizeof(double)*size, &(dv::data()[0])); } ~dvector(){} }; with the SWIG interface that looks something like %apply(int DIM1, double* INPLACE_ARRAY1) {(const int size, double* ptr)} class dvector{ public: dvector(const int rhs); dvector(); dvector(const int size, double* ptr); %newobject toString; char* toString(); ~dvector(); }; I have compiled them successfully via gcc 4.3 and vc++9.0. However when I simply run a = dvector(array([1.,2.,3.])) it gives me a segfault. This is the first time I have used SWIG with numpy and I do not fully understand the data conversion and memory buffer passing involved. Does anyone see something obvious I have missed? I have tried to trace through with a debugger but it crashed within the assembly code of python.exe. I have no clue if this is a SWIG problem or a problem with my simple wrapper. Anything is appreciated.
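
    One detail worth flagging here (an observation about the code above, not text from the original post): std::copy takes a range measured in elements, not bytes, so ptr+sizeof(double)*size walks size*8 doubles past the source buffer and writes far beyond the uBLAS storage, which alone can explain a segfault. A corrected constructor would look like:

      dvector(const int size, double* ptr) : dv(size) {
          // pointer arithmetic on double* is already in units of double
          std::copy(ptr, ptr + size, &(dv::data()[0]));
      }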

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get a HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor: Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); As I understand (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap, it simply points the managed bitmap to the pixel data from the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the the Form BackgroundImage property. And it works, the bitmap is displayed correctly, using transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose both the managed bitmap and the native HBitmap. The Question: Can you tell me if this reasoning and code seems correct. I hope I will not get some unexpected behaviors or errors. And I hope I'm freeing all the memory and objects correctly. private void Example() { IntPtr nativeHBitmap = IntPtr.Zero; /* Get the native HBitmap object from a Windows function here */ // Create the BITMAP structure and get info from our nativeHBitmap NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP(); NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct); // Create the managed bitmap using the pointer to the pixel data of the native HBitmap Bitmap managedBitmap = new Bitmap( bitmapStruct.bmWidth, bitmapStruct.bmHeight, bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits); // Show the bitmap this.BackgroundImage = managedBitmap; /* Run the program, use the image */ MessageBox.Show("running..."); // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap this.BackgroundImage = null; managedBitmap.Dispose(); NativeMethods.DeleteObject(nativeHBitmap); } internal static class NativeMethods { [StructLayout(LayoutKind.Sequential)] public struct BITMAP { public int bmType; public int bmWidth; public int bmHeight; public int bmWidthBytes; public ushort bmPlanes; public ushort bmBitsPixel; public IntPtr bmBits; } [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")] public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject); [DllImport("gdi32.dll")] internal static extern bool DeleteObject(IntPtr hObject); }
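
    For comparison only (this is a sketch, not part of the question's code): if the lifetime coupling is a concern, the pixels can be copied once through LockBits so the managed Bitmap no longer references the native bits at all, at the cost of one extra copy. It assumes data.Stride equals bitmapStruct.bmWidthBytes (true for 32bpp), and a bottom-up DIB section may additionally need its rows flipped.

      // inside Example(), as an alternative to wrapping bmBits directly
      Bitmap copy = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight, PixelFormat.Format32bppArgb);
      BitmapData data = copy.LockBits(
          new Rectangle(0, 0, copy.Width, copy.Height),
          ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
      byte[] pixels = new byte[bitmapStruct.bmWidthBytes * bitmapStruct.bmHeight];
      Marshal.Copy(bitmapStruct.bmBits, pixels, 0, pixels.Length);   // native bits -> temp buffer
      Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);            // temp buffer -> managed bitmap
      copy.UnlockBits(data);
      NativeMethods.DeleteObject(nativeHBitmap);                     // handle can be released immediately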

  • Connecting client (on VirtualBox) and server (on localhost) using CORBA - org.omg.CORBA.BAD_PARAM:

    - by yak
    I'm working now on a simple GUI application in Java/C++ and CORBA. I want my client on VirtualBox to connect to a server on localhost. When I have a simple app, like the calc I wrote about earlier, it's just fine. But when it comes to running a client which needs some args with Java's -cp option, I'm getting errors. (There's no such problem when I have both client and server on localhost!) My errors: WARNING: "IOP00100007: (BAD_PARAM) string_to_object conversion failed due to bad scheme name" org.omg.CORBA.BAD_PARAM: vmcid: OMG minor code: 7 completed: No at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.resolver.INSURLOperationImpl.operate(Unknown Source) at com.sun.corba.se.impl.resolver.ORBInitRefResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.orb.ORBImpl.resolve_initial_references(Unknown Source) at ClientConnection.connect(ClientConnection.java:57) at Client.main(Client.java:295) Exception in thread "main" org.omg.CORBA.BAD_PARAM: vmcid: OMG minor code: 7 completed: No at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.logging.OMGSystemException.soBadSchemeName(Unknown Source) at com.sun.corba.se.impl.resolver.INSURLOperationImpl.operate(Unknown Source) at com.sun.corba.se.impl.resolver.ORBInitRefResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.resolver.CompositeResolverImpl.resolve(Unknown Source) at com.sun.corba.se.impl.orb.ORBImpl.resolve_initial_references(Unknown Source) at ClientConnection.connect(ClientConnection.java:57) at Client.main(Client.java:295) make[1]: *** [run] Error 1 ClientConnection.java:57 is the line objRef = clientORB.resolve_initial_references("NameService"); Client.java:295 is the line: ClientConnection.connect(args); The connect method is just ordinary client-connection CORBA code. I ran my example: 1) C:\Temp\Client>java -cp .:../Dir1:../Dir2 Client -ORBInitRef NameService=corbaloc::192.168.56.1:2809/NameService Error: Could not find or load main class Client so it didn't even run at all. 2) with the help of a Makefile: HOST = 192.168.56.1 PORT = 2809 NAMESERVICE = NameService run: java -cp .:../Dir1:../Dir2 Client -ORBInitRef NameService=corbaloc::$(HOST):$(PORT)/$(NAMESERVICE) by typing make run, and then I got the errors I posted earlier. What's wrong? I mean, the simple code works fine but the GUI version doesn't want to... is there a problem with the -cp option? I can't change my app's dir tree.
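
    One thing for readers to double-check (an observation, not from the original post): on Windows the classpath separator is ';' rather than ':', and a colon-separated -cp is enough by itself to produce "Error: Could not find or load main class Client". A hypothetical Windows-side invocation would therefore look like:

      java -cp .;..\Dir1;..\Dir2 Client -ORBInitRef NameService=corbaloc::192.168.56.1:2809/NameService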

  • Using array as map value: Can't see the error

    - by Tom
    Hi all, Im trying to create a map, where the key is an int, and the value is an array int red[3] = {1,0,0}; int green[3] = {0,1,0}; int blue[3] = {0,0,1}; std::map<int, int[3]> colours; colours.insert(std::pair<int,int[3]>(GLUT_LEFT_BUTTON,red)); //THIS IS LINE 24 ! colours.insert(std::pair<int,int[3]>(GLUT_MIDDLE_BUTTON,blue)); colours.insert(std::pair<int,int[3]>(GLUT_RIGHT_BUTTON,green)); However, when I try to compile this code, I get the following error. g++ (Ubuntu 4.4.1-4ubuntu8) 4.4.1 In file included from /usr/include/c++/4.4/bits/stl_algobase.h:66, from /usr/include/c++/4.4/bits/stl_tree.h:62, from /usr/include/c++/4.4/map:60, from ../src/utils.cpp:9: /usr/include/c++/4.4/bits/stl_pair.h: In constructor ‘std::pair<_T1, _T2>::pair(const _T1&, const _T2&) [with _T1 = int, _T2 = int [3]]’: ../src/utils.cpp:24: instantiated from here /usr/include/c++/4.4/bits/stl_pair.h:84: error: array used as initializer /usr/include/c++/4.4/bits/stl_pair.h: In constructor ‘std::pair<_T1, _T2>::pair(const std::pair<_U1, _U2>&) [with _U1 = int, _U2 = int [3], _T1 = const int, _T2 = int [3]]’: ../src/utils.cpp:24: instantiated from here /usr/include/c++/4.4/bits/stl_pair.h:101: error: array used as initializer In file included from /usr/include/c++/4.4/map:61, from ../src/utils.cpp:9: /usr/include/c++/4.4/bits/stl_map.h: In member function ‘_Tp& std::map<_Key, _Tp, _Compare, _Alloc>::operator[](const _Key&) [with _Key = int, _Tp = int [3], _Compare = std::less<int>, _Alloc = std::allocator<std::pair<const int, int [3]> >]’: ../src/utils.cpp:30: instantiated from here /usr/include/c++/4.4/bits/stl_map.h:450: error: conversion from ‘int’ to non-scalar type ‘int [3]’ requested make: *** [src/utils.o] Error 1 I really cant see where the error is. Or even if there's an error. Any help (please include an explanation to help me avoid this mistake) will be appreciated. Thanks in advance.
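
    The errors come down to raw arrays not being copyable or assignable, which std::pair and std::map both require of their value type. One common workaround (a sketch, not the poster's code; it assumes the GLUT header is already included for the button constants) is to wrap the three ints in a copyable type such as boost::array, or std::array in C++11:

      #include <map>
      #include <boost/array.hpp>

      typedef boost::array<int, 3> Colour;   // copyable, unlike int[3]

      Colour red   = {{1, 0, 0}};
      Colour green = {{0, 1, 0}};
      Colour blue  = {{0, 0, 1}};

      std::map<int, Colour> colours;
      colours.insert(std::make_pair(GLUT_LEFT_BUTTON,   red));
      colours.insert(std::make_pair(GLUT_MIDDLE_BUTTON, blue));
      colours.insert(std::make_pair(GLUT_RIGHT_BUTTON,  green));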

  • PNG alpha transparency in AS3 - Unknown file-type

    - by WiseDonkey
    Hello there! After whittling down the options we've encountered a problem with PNGs and ActionScript 3 (AS3). When loading a PNG 8 or PNG 32 with alpha transparency we're getting the following error reported in Flash: "Error #2124: Loaded file is an unknown type" Now, we're dealing with some legacy images, and it appears as though this problem isn't universal - some images believed to be 32-bit alpha PNG are loading. BUT, some conclusions: converting one image that was 32-bit alpha (NOT WORKING IN AS3) to PNG 8 index transparency DID work. And converting that same image to PNG 8 alpha DID NOT work. These all worked in AS2. There is no difference between the headers. Headers of a Failing Image [0] => HTTP/1.1 200 OK [1] => Date: Tue, 06 Apr 2010 14:17:28 GMT [2] => Server: Apache/2.2.3 (Red Hat) [3] => Last-Modified: Tue, 06 Apr 2010 13:44:05 GMT [4] => ETag: "3700054-11d6-a3983340" [5] => Accept-Ranges: bytes [6] => Content-Length: 4566 [7] => Connection: close [8] => Content-Type: image/png Headers of a Working Image [0] => HTTP/1.1 200 OK [1] => Date: Tue, 06 Apr 2010 14:19:02 GMT [2] => Server: Apache/2.2.3 (Red Hat) [3] => Last-Modified: Fri, 30 Oct 2009 18:38:08 GMT [4] => ETag: "ba8057-65f2-5445c400" [5] => Accept-Ranges: bytes [6] => Content-Length: 26098 [7] => Connection: close [8] => Content-Type: image/png Any thoughts on a direction for further investigation into this bewildering and barely documented problem are very warmly welcomed. EDIT: It now appears as though something in the PHP conversion of the images is at fault; I use the following PHP to add alpha layers: imagealphablending($image_p, false); ImageSaveAlpha($image_p, true); ImageFill($image_p, 0, 0, IMG_COLOR_TRANSPARENT);

  • Purpose of Explicit Default Constructors

    - by Dennis Zickefoose
    I recently noticed a class in C++0x that calls for an explicit default constructor. However, I'm failing to come up with a scenario in which a default constructor can be called implicitly. It seems like a rather pointless specifier. I thought maybe it would disallow Class c; in favor of Class c = Class(); but that does not appear to be the case. Some relevant quotes from the C++0x FCD, since it is easier for me to navigate [similar text exists in C++03, if not in the same places] 12.3.1.3 [class.conv.ctor] A default constructor may be an explicit constructor; such a constructor will be used to perform default-initialization or value initialization (8.5). It goes on to provide an example of an explicit default constructor, but it simply mimics the example I provided above. 8.5.6 [decl.init] To default-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9), the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); 8.5.7 [decl.init] To value-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9) with a user-provided constructor (12.1), then the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); In both cases, the standard calls for the default constructor to be called. But that is what would happen if the default constructor were non-explicit. For completeness sake: 8.5.11 [decl.init] If no initializer is specified for an object, the object is default-initialized; From what I can tell, this just leaves conversion from no data. Which doesn't make sense. The best I can come up with would be the following: void function(Class c); int main() { function(); //implicitly convert from no parameter to a single parameter } But obviously that isn't the way C++ handles default arguments. What else is there that would make explicit Class(); behave differently from Class();? The specific example that generated this question was std::function [20.8.14.2 func.wrap.func]. It requires several converting constructors, none of which are marked explicit, but the default constructor is.

  • conflicting declaration when filling a static std::map class member variable

    - by Max
    I have a class with a static std::map member variable that maps chars to a custom type Terrain. I'm attempting to fill this map in the class's implementation file, but I get several errors. Here's my header file: #ifndef LEVEL_HPP #define LEVEL_HPP #include <bitset> #include <list> #include <map> #include <string> #include <vector> #include "libtcod.hpp" namespace yarl { namespace level { class Terrain { // Member Variables private: std::bitset<5> flags; // Member Functions public: explicit Terrain(const std::string& flg) : flags(flg) {} (...) }; class Level { private: static std::map<char, Terrain> terrainTypes; (...) }; } } #endif and here's my implementation file: #include <bitset> #include <list> #include <map> #include <string> #include <vector> #include "Level.hpp" #include "libtcod.hpp" using namespace std; namespace yarl { namespace level { /* fill Level::terrainTypes */ map<char,Terrain> Level::terrainTypes['.'] = Terrain("00001"); // clear map<char,Terrain> Level::terrainTypes[','] = Terrain("00001"); // clear map<char,Terrain> Level::terrainTypes['\''] = Terrain("00001"); // clear map<char,Terrain> Level::terrainTypes['`'] = Terrain("00001"); // clear map<char,Terrain> Level::terrainTypes[178] = Terrain("11111"); // wall (...) } } I'm using g++, and the errors I get are src/Level.cpp:15: error: conflicting declaration ‘std::map, std::allocator yarl::level::Level::terrainTypes [46]’ src/Level.hpp:104: error: ‘yarl::level::Level::terrainTypes’ has a previous declaration as ‘std::map, std::allocator yarl::level::Level::terrainTypes’ src/Level.cpp:15: error: declaration of ‘std::map, std::allocator yarl::level::Level::terrainTypes’ outside of class is not definition src/Level.cpp:15: error: conversion from ‘yarl::level::Terrain’ to non-scalar type ‘std::map, std::allocator ’ requested src/Level.cpp:15: error: ‘yarl::level::Level::terrainTypes’ cannot be initialized by a non-constant expression when being declared I get a set of these for each map assignment line in the implementation file. Anyone see what I'm doing wrong? Thanks for your help.
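
    For reference (a sketch with a hypothetical helper name, not the poster's code): in C++03 a static member can only be defined once, so the per-key assignments at namespace scope are parsed as repeated, malformed definitions. The usual pattern is one definition, initialized from a helper that builds the whole map:

      // Level.cpp
      namespace yarl { namespace level {

      namespace {
          std::map<char, Terrain> makeTerrainTypes() {
              std::map<char, Terrain> m;
              m.insert(std::make_pair('.',  Terrain("00001")));   // clear
              m.insert(std::make_pair(',',  Terrain("00001")));   // clear
              m.insert(std::make_pair('\'', Terrain("00001")));   // clear
              m.insert(std::make_pair('`',  Terrain("00001")));   // clear
              m.insert(std::make_pair(static_cast<char>(178), Terrain("11111")));  // wall
              return m;
          }
      }

      // the single definition of the static member
      std::map<char, Terrain> Level::terrainTypes = makeTerrainTypes();

      } }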

  • Constructor or Explicit cast

    - by Felan
    In working with Linq to Sql I create a seperate class to ferry data to a web page. To simplify creating these ferry objects I either use a specialized constructor or an explicit conversion operator. I have two questions. First which approach is better from a readibility perspective? Second while the clr code that is generated appeared to be the same to me, are there situations where one would be treated different than the other by the compiler (in lambda's or such). Example code (DatabaseFoo uses specialized constructor and BusinessFoo uses explicit operator): public class DatabaseFoo { private static int idCounter; // just to help with generating data public int Id { get; set; } public string Name { get; set; } public DatabaseFoo() { Id = idCounter++; Name = string.Format("Test{0}", Id); } public DatabaseFoo(BusinessFoo foo) { this.Id = foo.Id; this.Name = foo.Name; } } public class BusinessFoo { public int Id { get; set; } public string Name { get; set; } public static explicit operator BusinessFoo(DatabaseFoo foo) { return FromDatabaseFoo(foo); } public static BusinessFoo FromDatabaseFoo(DatabaseFoo foo) { return new BusinessFoo {Id = foo.Id, Name = foo.Name}; } } public class Program { static void Main(string[] args) { Console.WriteLine("Creating the initial list of DatabaseFoo"); IEnumerable<DatabaseFoo> dafoos = new List<DatabaseFoo>() { new DatabaseFoo(), new DatabaseFoo(), new DatabaseFoo(), new DatabaseFoo(), new DatabaseFoo(), new DatabaseFoo()}; foreach(DatabaseFoo dafoo in dafoos) Console.WriteLine(string.Format("{0}\t{1}", dafoo.Id, dafoo.Name)); Console.WriteLine("Casting the list of DatabaseFoo to a list of BusinessFoo"); IEnumerable<BusinessFoo> bufoos = from x in dafoos select (BusinessFoo) x; foreach (BusinessFoo bufoo in bufoos) Console.WriteLine(string.Format("{0}\t{1}", bufoo.Id, bufoo.Name)); Console.WriteLine("Creating a new list of DatabaseFoo by calling the constructor taking BusinessFoo"); IEnumerable<DatabaseFoo> fufoos = from x in bufoos select new DatabaseFoo(x); foreach(DatabaseFoo fufoo in fufoos) Console.WriteLine(string.Format("{0}\t{1}", fufoo.Id, fufoo.Name)); } }

  • How to remove music/videos DRM protection and convert to Mobile Devices such as iPod, iPhone, PSP, Z

    - by tonywesley
    The music/video files you purchased from online music stores like iTunes, Yahoo Music or Wal-Mart are under DRM protection. So you can't convert them to the formats supported by your own mobile devices such as Nokia phone, Creative Zen palyer, iPod, PSP, Walkman, Zune… You also can't share your purchased music/videos with your friends. The following step by step tutorial is dedicated to instructing music lovers to how to convert your DRM protected music/videos to mobile devices. Method 1: If you only want to remove DRM protection from your protected music, this method will not spend your money. Step 1: Burn your protected music files to CD-R/RW disc to make an audio CD Step 2: Find a free CD Ripper software to convert the audio CD track back to MP3, WAV, WMA, M4A, AAC, RA… Method 2: This guide will show you how to crack drm from protected wmv, wma, m4p, m4v, m4a, aac files and convert to unprotected WMV, MP4, MP3, WMA or any video and audio formats you like, such as AVI, MP4, Flv, MPEG, MOV, 3GP, m4a, aac, wmv, ogg, wav... I have been using Media Converter software, it is the quickest and easiest solution to remove drm from WMV, M4V, M4P, WMA, M4A, AAC, M4B, AA files by quick recording. It gets audio and video stream at the bottom of operating system, so the output quality is lossless and the conversion speed is fast . The process is as follows. Step 1: Download and install the software Step 2: Run the software and click "Add…" button to load WMA or M4A, M4B, AAC, WMV, M4P, M4V, ASF files Step 3: Choose output formats. If you want to convert protected audio files, please select "Convert audio to" list; If you want to convert protected video files, please select "Convert video to" list. Step 4: You can click "Settings" button to custom preference for output files. Click "Settings" button bellow "Convert audio to" list for protected audio files Click "Settings" button bellow "Convert video to" list for protected video files Step 5: Start remove DRM and convert your DRM protected music and videos by click on "Start" button. What is DRM? DRM, which is most commonly found in movies and music files, doesn't mean just basic copy-protection of video, audio and ebooks, but it basically means full protection for digital content, ranging from delivery to end user's ways to use the content. We can remove the Drm from video and audio files legally by quick recording.

  • VB - Convert Web Site to Web Application

    - by Dave
    Hi This is my first time doing VB :-) I've inherited a web site, which I've converted into a web application in VS2008. The conversion has worked for everything except a Gallery control. The compile error I'm getting is: Type 'Gallery' is not defined in file: gallery_oct07.aspx.designer.vb Option Strict On Option Explicit On Partial Public Class gallery_oct07 '''<summary> '''Gallery1 control. '''</summary> '''<remarks> '''Auto-generated field. '''To modify move field declaration from designer file to code-behind file. '''</remarks> Protected WithEvents Gallery1 As Global.Gallery End Class with squiggly lines under Global.Gallery. The gallery_oct07.aspx.vb is: Partial Class gallery_oct07 Inherits System.Web.UI.Page End Class Gallery.ascx is: <%@ Control Language="C#" AutoEventWireup="true" Codebehind="Gallery.ascx.cs" Inherits="WebApplication1.Gallery"%> <asp:Repeater runat="server" ID="rptGallery"> <HeaderTemplate> <ul class='<%#CssClass%>'> </HeaderTemplate> <ItemTemplate> <li><a href='<%#ImageFolder + Eval("Name") %>' class="thickbox" rel="gallery"><img src='<%#ImageFolder + "thumb/" + Eval("Name") %>' /></a></li> </ItemTemplate> <FooterTemplate> </ul></FooterTemplate> </asp:Repeater> and the code behind is: using System; using System.IO; namespace WebApplication1 { public partial class Gallery : System.Web.UI.UserControl { public string _ImageFolder; public string ImageFolder { get { return _ImageFolder; } set { _ImageFolder = value; } } private string _cssClass = "gallery"; public string CssClass { get { return _cssClass; } set { _cssClass = value; } } protected void Page_Load(object sender, EventArgs e) { DirectoryInfo dir = new DirectoryInfo(MapPath(ImageFolder)); FileInfo[] images = dir.GetFiles("*.jpg"); rptGallery.DataSource = images; rptGallery.DataBind(); } protected void Page_PreRender(object sender, EventArgs e) { } } } The feels like a namespace issue.. My project namespace is WebApplication1. Cheers!
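
    For what it's worth, the designer declaration would need the control's full namespace to resolve at all, along the lines of the sketch below; note this only helps if WebApplication1.Gallery is actually compiled somewhere the VB project can reference (for instance, the control kept in its own C# project), since a single web application project compiles to one DLL in one language.

      '''<remarks>Designer declaration, fully qualified (assumes a reference to the C# assembly).</remarks>
      Protected WithEvents Gallery1 As Global.WebApplication1.Gallery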

  • How do I integrate a new MVC C# Project with an existing Web Forms VB.NET Web Application Project?

    - by Jordan Rieger
    We have a corporate website with a large amount of dynamic business application pages (e.g. Shopping Cart, Helpdesk, Product/Service management, Reporting, etc.) The site was built as an ASP.Net Web Application Project (WAP). Our systems have evolved over the years to use .NET 4.5 and various custom business logic DLLs (written in a mix of C# and VB.NET). However, the site itself is still using VB.NET Web Forms. We now have done a few side projects in MVC 4 using Razor/C#, and we want to use this framework for new pages on the main corporate site going forward. What would be the easiest way to achieve this? I found this nice list of steps to integrate MVC 4 into an existing Web Forms app. The problem is that because our existing app is a VB.NET WAP, it compiles into a single DLL, and .NET allows only one language per DLL. The site is way too big for us to contemplate converting it to C# all at once (yes, I've looked at the conversion tools, and they're good, but even 99% accuracy would leave us a huge amount of cleanup work.) I thought about converting the existing WAP into a Web Site Project (WSP) which does allow mixing languages and then following the steps above, but after a few pages of Google results, I couldn't find any steps for converting a WAP to WSP. (Plenty of sites offer the reverse steps: converting a WSP to a WAP.) Another idea I had was to create a completely separate MVC project, and then somehow squish them together into the same folder structure, where they would share the bin folder but compile to separate DLL's. I have no idea if this is possible, because certain files would collide (e.g. Global.asax, web.config, etc.) Finally, I can imagine a compromise solution where we keep all the MVC stuff in its own separate application under a subfolder of the main solution. We already use our own custom session state solution, so it wouldn't be difficult to pass data between the old site to the new pages. Which of the ideas above do you think makes the most sense for us? Is there another solution that I'm missing?

  • Cannot figure out how to take in generic parameters for an Enterprise Framework library sql statemen

    - by KallDrexx
    I have written a specialized class to wrap up the enterprise library database functionality for easier usage. The reasoning for using the Enterprise Library is because my applications commonly connect to both oracle and sql server database systems. My wrapper handles both creating connection strings on the fly, connecting, and executing queries allowing my main code to only have to write a few lines of code to do database stuff and deal with error handling. As an example my ExecuteNonQuery method has the following declaration: /// <summary> /// Executes a query that returns no results (e.g. insert or update statements) /// </summary> /// <param name="sqlQuery"></param> /// <param name="parameters">Hashtable containing all the parameters for the query</param> /// <returns>The total number of records modified, -1 if an error occurred </returns> public int ExecuteNonQuery(string sqlQuery, Hashtable parameters) { // Make sure we are connected to the database if (!IsConnected) { ErrorHandler("Attempted to run a query without being connected to a database.", ErrorSeverity.Critical); return -1; } // Form the command DbCommand dbCommand = _database.GetSqlStringCommand(sqlQuery); // Add all the paramters foreach (string key in parameters.Keys) { if (parameters[key] == null) _database.AddInParameter(dbCommand, key, DbType.Object, null); else _database.AddInParameter(dbCommand, key, DbType.Object, parameters[key].ToString()); } return _database.ExecuteNonQuery(dbCommand); } _database is defined as private Database _database;. Hashtable parameters are created via code similar to p.Add("@param", value);. the issue I am having is that it seems that with enterprise library database framework you must declare the dbType of each parameter. This isn't an issue when you are calling the database code directly when forming the paramters but doesn't work for creating a generic abstraction class such as I have. In order to try and get around that I thought I could just use DbType.Object and figure the DB will figure it out based on the columns the sql is working with. Unfortunately, this is not the case as I get the following error: Implicit conversion from data type sql_variant to varchar is not allowed. Use the CONVERT function to run this query Is there any way to use generic parameters in a wrapper class or am I just going to have to move all my DB code into my main classes?
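
    One way to keep the wrapper generic without binding everything as DbType.Object (which is what provokes the sql_variant conversion error) is to infer the DbType from each value's runtime type. The mapping below is an illustrative assumption covering common cases, not an exhaustive or official one:

      private static DbType InferDbType(object value)
      {
          if (value == null)     return DbType.String;   // NULL still binds fine as a string parameter
          if (value is int)      return DbType.Int32;
          if (value is long)     return DbType.Int64;
          if (value is decimal)  return DbType.Decimal;
          if (value is double)   return DbType.Double;
          if (value is bool)     return DbType.Boolean;
          if (value is DateTime) return DbType.DateTime;
          if (value is Guid)     return DbType.Guid;
          if (value is byte[])   return DbType.Binary;
          return DbType.String;                          // fall back to text for anything else
      }

      // in ExecuteNonQuery, instead of DbType.Object:
      foreach (string key in parameters.Keys)
          _database.AddInParameter(dbCommand, key, InferDbType(parameters[key]), parameters[key]);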

  • AudioConverterConvertBuffer problem with insz error

    - by Samuel
    Hi Codegurus, I have a problem with the this function AudioConverterConvertBuffer. Basically I want to convert from this format _ streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked |0 ; _streamFormat.mBitsPerChannel = 16; _streamFormat.mChannelsPerFrame = 2; _streamFormat.mBytesPerPacket = 4; _streamFormat.mBytesPerFrame = 4; _streamFormat.mFramesPerPacket = 1; _streamFormat.mSampleRate = 44100; _streamFormat.mReserved = 0; to this format _streamFormatOutput.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked|0 ;//| kAudioFormatFlagIsNonInterleaved |0; _streamFormatOutput.mBitsPerChannel = 16; _streamFormatOutput.mChannelsPerFrame = 1; _streamFormatOutput.mBytesPerPacket = 2; _streamFormatOutput.mBytesPerFrame = 2; _streamFormatOutput.mFramesPerPacket = 1; _streamFormatOutput.mSampleRate = 44100; _streamFormatOutput.mReserved = 0; and what i want to do is to extract an audio channel(Left channel or right channel) from an LPCM buffer based on the input format to make it mono in the output format. Some logic code to convert is as follows This is to set the channel map for PCM output file SInt32 channelMap[1] = {0}; status = AudioConverterSetProperty(converter, kAudioConverterChannelMap, sizeof(channelMap), channelMap); and this is to convert the buffer in a while loop AudioBufferList audioBufferList; CMBlockBufferRef blockBuffer; CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer); for (int y=0; y<audioBufferList.mNumberBuffers; y++) { AudioBuffer audioBuffer = audioBufferList.mBuffers[y]; //frames = audioBuffer.mData; NSLog(@"the number of channel for buffer number %d is %d",y,audioBuffer.mNumberChannels); NSLog(@"The buffer size is %d",audioBuffer.mDataByteSize); numBytesIO = audioBuffer.mDataByteSize; convertedBuf = malloc(sizeof(char)*numBytesIO); status = AudioConverterConvertBuffer(converter, audioBuffer.mDataByteSize, audioBuffer.mData, &numBytesIO, convertedBuf); char errchar[10]; NSLog(@"status audio converter convert %d",status); if (status != 0) { NSLog(@"Fail conversion"); assert(0); } NSLog(@"Bytes converted %d",numBytesIO); status = AudioFileWriteBytes(mRecordFile, YES, countByteBuf, &numBytesIO, convertedBuf); NSLog(@"status for writebyte %d, bytes written %d",status,numBytesIO); free(convertedBuf); if (numBytesIO != audioBuffer.mDataByteSize) { NSLog(@"Something wrong in writing"); assert(0); } countByteBuf = countByteBuf + numBytesIO; But the insz problem is there... so it cant convert. I would appreciate any input Thanks in advance

  • WPF Reusing Xaml Effectively

    - by Steve
    Hi, I've recently been working on a project using WPF to produce a diagram. In this I must show text alongside symbols that illustrate information associated with the text. To draw the symbols I initially used some png images I had produced. Within my diagram these images appeared blurry and only looked worse when zoomed in on. To improve on this I decided I would use a vector rather than a rastor image format. Below is the method I used to get the rastor image from a file path: protected Image GetSymbolImage(string symbolPath, int symbolHeight) { Image symbol = new Image(); symbol.Height = symbolHeight; BitmapImage bitmapImage = new BitmapImage(); bitmapImage.BeginInit(); bitmapImage.UriSource = new Uri(symbolPath); bitmapImage.DecodePixelHeight = symbolHeight; bitmapImage.EndInit(); symbol.Source = bitmapImage; return symbol; } Unfortunately this does not recognise vector image formats. So instead I used a method like the following, where "path" is the file path to a vector image of the format .xaml: public static Canvas LoadXamlCanvas(string path) { //if a file exists at the specified path if (File.Exists(path)) { //store the text in the file string text = File.ReadAllText(path); //produce a canvas from the text StringReader stringReader = new StringReader(text); XmlReader xmlReader = XmlReader.Create(stringReader); Canvas c = (Canvas)XamlReader.Load(xmlReader); //return the canvas return c; } return null; } This worked but drastically killed performance when called repeatedly. I found the logic necessary for text to canvas conversion (see above) was the main cause of the performance problem therefore embedding the .xaml images would not alone resolve the performance issue. I tried using this method only on the initial load of my application and storing the resulting canvases in a dictionary that could later be accessed much quicker but I later realised when using the canvases within the dictionary I would have to make copies of them. All the logic I found online associated with making copies used a XamlWriter and XamlReader which would again just introduce a performance problem. The solution I used was to copy the contents of each .xaml image into its own user control and then make use of these user controls where appropriate. This means I now display vector graphics and performance is much better. However this solution to me seems pretty clumsy. I'm new to WPF and wonder if there is some built in way of storing and reusing xaml throughout an application? Apologies for the length of this question. I thought having a record of my attempts might help someone with any similar problem. Thanks.
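
    One built-in option that fits the "parse once, new instance per use" requirement (a sketch with hypothetical resource keys, assuming the symbols can be compiled into the application's resources rather than loaded from loose files): put each symbol Canvas in a ResourceDictionary merged into App.xaml and mark it x:Shared="False", so every resource lookup returns a fresh copy without going through XamlReader again.

      <!-- Symbols.xaml, merged into Application.Resources -->
      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Canvas x:Key="StarSymbol" x:Shared="False">
              <!-- paths copied from the original vector .xaml file -->
          </Canvas>
      </ResourceDictionary>

      // in code, each call yields a new Canvas instance because of x:Shared="False"
      var symbol = (Canvas)Application.Current.FindResource("StarSymbol");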

  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain driven design, a specification is used to check if some object, also known as the candidate, is compliant to a (domain specific) requirement. For example, the specification 'IsTaskDone' goes like: class IsTaskDone extends Specification<Task> { boolean isSatisfiedBy(Task candidate) { return candidate.isDone(); } } The above specification can be used for many purposes, e.g. it can be used to validate if a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this, nice, domain related specification to query on the database. Of course, the easiest solution would be to retrieve all entities of our desired type from the database, and filter that list in-memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our db increases. Proposal So my idea is to create a 'ConversionManager' that translates my specification into a persistence technique specific criteria, think of the JPA predicate class. The services looks as follows: public interface JpaSpecificationConversionManager { <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root, CriteriaQuery<?> cq, CriteriaBuilder cb); JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter); } By using our manager, the users can register their own conversion logic, isolating the domain related specification from persistence specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to automatically register those converters. JPA repository implementations could then use my manager, via dependency injection, to offer a find by specification method. Providing a find by specification should drastically reduce the number of methods on our repository interface. In theory, this all sounds decent, but I feel like I'm missing something critical. What do you guys think of my proposal, does it comply to the DDD way of thinking? Or is there already a framework that does something identical to what I just described?
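
    To make the idea concrete, a converter for the IsTaskDone specification might look like the sketch below; the annotation and the converter interface's method shape are assumptions here (the question only fixes the manager's getPredicateFor signature), while the criteria calls are plain JPA 2:

      @SpecificationConverter(IsTaskDone.class)   // hypothetical auto-registration annotation
      public class IsTaskDoneConverter implements JpaSpecificationConverter<Task, IsTaskDone> {

          @Override
          public Predicate convert(IsTaskDone specification, Root<Task> root,
                                   CriteriaQuery<?> cq, CriteriaBuilder cb) {
              // mirrors candidate.isDone() on the database side
              return cb.equal(root.get("done"), Boolean.TRUE);
          }
      }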

  • C++ adding friend to a template class in order to typecast

    - by user1835359
    I'm currently reading "Effective C++" and there is a chapter that contains code similar to this: template <typename T> class Num { public: Num(int n) { ... } }; template <typename T> Num<T> operator*(const Num<T>& lhs, const Num<T>& rhs) { ... } Num<int> n = 5 * Num<int>(10); The book says that this won't work (and indeed it doesn't) because you can't expect the compiler to use implicit typecasting to specialize a template. As a solution it is suggested to use the "friend" syntax to define the function inside the class. //It works template <typename T> class Num { public: Num(int n) { ... } friend Num operator*(const Num& lhs, const Num& rhs) { ... } }; Num<int> n = 5 * Num<int>(10); And the book suggests using this friend-declaration technique whenever I need implicit conversion to a template class type. And it all seems to make sense. But why can't I get the same example working with a common function, not an operator? template <typename T> class Num { public: Num(int n) { ... } friend void doFoo(const Num& lhs) { ... } }; doFoo(5); This time the compiler complains that it can't find any 'doFoo' at all. And if I declare doFoo outside the class, I get the expected mismatched-types error. It seems like the "friend ..." part is just being ignored. So is there a problem with my understanding? What is the difference between a function and an operator in this case?
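
    For reference (this explanation is not in the quoted book text): a friend function defined inside a class template is only findable through argument-dependent lookup, so the call must already involve a Num<T> argument; operator* gets one from the Num<int>(10) operand, while doFoo(5) offers ADL nothing to work with. A call it can resolve would be:

      doFoo(Num<int>(5));   // found via ADL because the argument is a Num<int>

    With a two-parameter friend, mirroring the operator example, doFoo(5, Num<int>(10)) would likewise compile, with 5 converted through the Num(int) constructor.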

  • approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where .21 < k0 < 21, 0 < k1 < ~2000, and x is integer < 2^14. k0 & k1 are constant. For practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot. I already have came up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to lack of efficient instructions. My SIMD implementation uses 16bit fixed point arithmetic to evaluate a 3rd degree polynomial (I use least squares fit). The polynomial uses different coefficients for different input ranges. There are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rational behind this is the derivatives of Log[x] drop rapidly with x, meaning a polynomial will fit it more accurately since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float to int conversion to get the exponent and significand used for the fixed point approximation. I also software pipelined the loop to get ~1.25x speedup, so further code optimizations are probably unlikely. What I'm asking is if there's a more efficient approximation at a higher level? For example: Can this function be decomposed into functions with a limited domain like log2((2^x) * significand) = x + log2(significand) hence eliminating the need to deal with different ranges (table lookups). The main problem I think is adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it? Iterative method? don't think so because the Newton method for log[x] is already a complicated expression Exploiting locality of neighboring pixels? - if the range of the 8 inputs fall in the same approximation range, then I can look up a single coefficient, instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property hold 70% of the time, which doesn't seem to make this method competitive. Please, give me some opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
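
    For readers weighing the "limited domain" idea, the k1 term does not separate additively, but it can be isolated as a bounded correction using only standard identities (simple algebra, not from the post):

      \log_{10}(x^{k_0} + k_1) = k_0 \log_{10} x + \log_{10}\!\left(1 + k_1 x^{-k_0}\right)
                               = \log_{10} k_1 + \log_{10}\!\left(1 + x^{k_0}/k_1\right)

    The first form is convenient for large x (the correction term decays toward 0), the second for small x, which matches the observation that the function only differs from a plain logarithm near 0.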

  • List<> of objects, different types, sort and pull out types individually?

    - by Brazos
    I've got a handful of products, any, all, or none of which may be associated with a specific submission. All 7 products are subclasses of the class Product. I need to store all the products associated with a submission, and then retrieve them and their field data on my presentation layer. I've been using a List, and List, but when I use the OfType, I throw an error saying that I can't implicitly convert systems.generic.IEnumerable to type 'Product'. I've tried to cast, but to no avail. When I use prodlist.OfType<EPL>(); there are no errors, but when I try and store that in an instance of EPL "tempEpl", I get the aforementioned cast-related error. What gives? Code below. ProductService pserv = new ProductService(); IList<object> prodlist = pserv.getProductById(x); EPL tempEpl = new EPL(); if ((prodlist.OfType<EPL>()) != null) { tempEpl = prodlist.OfType<EPL>(); // this throws a conversion error. } the Data layer List<object> TempProdList = new List<object>(); conn.Open(); SqlCommand EplCmd = new SqlCommand(EPLQuery, conn); SqlDataReader EplRead = null; EplRead = EplCmd.ExecuteReader(); EPL TempEpl = new EPL(); if (EplRead.Read()) { TempEpl.Entity1 = EplRead.GetString(0); TempEpl.Employees1 = EplRead.GetInt32(1); TempEpl.CA1 = EplRead.GetInt32(2); TempEpl.MI1 = EplRead.GetInt32(3); TempEpl.NY1 = EplRead.GetInt32(4); TempEpl.NJ1 = EplRead.GetInt32(5); TempEpl.PrimEx1 = EplRead.GetInt32(6); TempEpl.EplLim1 = EplRead.GetInt32(7); TempEpl.EplSir1 = EplRead.GetInt32(8); TempEpl.Premium1 = EplRead.GetInt32(9); TempEpl.Wage1 = EplRead.GetInt32(10); TempEpl.Sublim1 = EplRead.GetInt32(11); TempProdList.Add(TempEpl); }
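
    For anyone hitting the same message: OfType<EPL>() returns an IEnumerable<EPL>, never a single EPL, so the result has to be enumerated or reduced to one element (an illustrative sketch, not the poster's code):

      EPL tempEpl = prodlist.OfType<EPL>().FirstOrDefault();   // first EPL in the list, or null if none
      if (tempEpl != null)
      {
          // use tempEpl here
      }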

  • Spring constructor injection error

    - by Jeune
    I am getting the following error for a bean in my application context: Related cause: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'businessLogicContext' defined in class path resource [activemq-jms-consumer.xml]: Unsatisfied dependency expressed through constructor argument with index 0 of type [java.lang.String]: Could not convert constructor argument value of type [java.util.ArrayList] to required type [java.lang.String]: Failed to convert value of type [java.util.ArrayList] to required type [java.lang.String]; nested exception is java.lang.IllegalArgumentException: Cannot convert value of type [java.util.ArrayList] to required type [java.lang.String]: no matching editors or conversion strategy found at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:534) at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:186) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:855) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:765) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:412) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:383) at java.security.AccessController.doPrivileged(Native Method) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:353) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:245) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:169) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:242) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:400) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:736) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:369) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:123) at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:66) Here is my bean: <bean id="businessLogicContext" class="org.springframework.context.support.ClassPathXmlApplicationContext" depends-on="resolveProperty"> <constructor-arg index="0"> <list> <value>jms-applicationContext.xml</value> <value>jms-managerBeanContext.xml</value> <value>jms-daoContext.xml</value> <value>jms-serviceContext.xml</value> </list> </constructor-arg> </bean> I don't know what's wrong, I have googled how to inject a string array via constructor injection and the way I do it above seems okay.

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture in order to resolve the samples in order to read off the texture to perform post-processing shading while drawing to the backbuffer (FBO index 0). Now I'd like to get some correct sRGB output... The problem is the behavior of the program is rather inconsistent between when I run it on OS X and Windows and this also changes depending on the machine: On Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity but on my other machine with a Nvidia GTX 670 it does. On the Intel HD 3000 in OS X it will also apply it. So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However I can't seem to find any tutorials that actually tell me when I ought to enable it, they only ever mention that it's dead easy and comes at no performance cost. I am currently not loading in any textures so I haven't had a need to deal with linearizing their colors yet. To force the program to not simply spit back out the linear color values, what I have tried is simply comment out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means this setting is enabled for the entire pipeline, and I actually redundantly force it back on every frame. I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors but I can't tell if this is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO. It could do it when I blit the first FBO to the second FBO. Why not? I've gone so far as to take screen shots of my final frame and compare raw pixel color values to the colors I set them to in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like quite a lot of color compression at the low end and leads me to question if the gamma is getting applied multiple times. I have just now gone through the sRGB equation and I can verify that the conversion seems to be only applied once as linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4)+0.055. Given that the expansion is so large for these low color values it really should be obvious if the sRGB color transform is getting applied more than once. So, I still haven't determined what the right thing to do is. does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set this during my GL init routine and forget about it hereafter?
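
    A common arrangement (a sketch with hypothetical FBO names, and assuming the default framebuffer was created sRGB-capable) is to leave GL_FRAMEBUFFER_SRGB disabled for the intermediate linear-space passes and enable it only around the post-processing draw into framebuffer 0, so the encoding is applied exactly once:

      /* offscreen passes stay linear */
      glDisable(GL_FRAMEBUFFER_SRGB);
      glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);      /* render the scene */
      /* ...blit msaaFbo -> resolveFbo to resolve the samples... */

      /* final pass: encode once while writing to the default framebuffer */
      glBindFramebuffer(GL_FRAMEBUFFER, 0);
      glEnable(GL_FRAMEBUFFER_SRGB);
      /* draw the full-screen post-processing quad reading the resolved texture */
      glDisable(GL_FRAMEBUFFER_SRGB);

    Since the enable only affects writes to sRGB-capable surfaces, a plain linear intermediate texture is unaffected either way, which is one reason the observed behaviour can differ between drivers.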

  • UDP Tracker not responding

    - by kelton52
    Alright, so I'm trying to connect to UDP trackers using c#, but I never get a response. I also don't get any errors. Here's my code. namespace UDPTester { class MainClass { public static bool messageReceived = false; public static Random Random = new Random(); public static void LOG(string format, params object[] args) { Console.WriteLine (format,args); } public static void Main (string[] args) { LOG ("Creating Packet..."); byte[] packet; using(var stream = new MemoryStream()) { var bc = new MiscUtil.Conversion.BigEndianBitConverter(); using(var br = new MiscUtil.IO.EndianBinaryWriter(bc,stream)) { LOG ("Magic Num: {0}",(Int64)0x41727101980); br.Write (0x41727101980); br.Write((Int32)0); br.Write ((Int32)Random.Next()); packet = stream.ToArray(); LOG ("Packet Size: {0}",packet.Length); } } LOG ("Connecting to tracker..."); var client = new System.Net.Sockets.UdpClient("tracker.openbittorrent.com",80); UdpState s = new UdpState(); s.e = client.Client.RemoteEndPoint; s.u = client; StartReceiving(s); LOG ("Sending Packet..."); client.Send(packet,packet.Length); while(!messageReceived) { Thread.Sleep(1000); } LOG ("Ended"); } public static void StartReceiving(UdpState state) { state.u.BeginReceive(ReceiveCallback,state); } public static void ReceiveCallback(IAsyncResult ar) { UdpClient u = (UdpClient)((UdpState)(ar.AsyncState)).u; IPEndPoint e = (IPEndPoint)((UdpState)(ar.AsyncState)).e; Byte[] receiveBytes = u.EndReceive(ar, ref e); string receiveString = Encoding.ASCII.GetString(receiveBytes); LOG("Received: {0}", receiveString); messageReceived = true; StartReceiving((UdpState)ar.AsyncState); } } public class UdpState { public UdpClient u; public EndPoint e; } } I was using a normal BinaryWriter, but that didn't work, and I read somewhere that it wants it's data in BigEndian. This doesn't work for any of the UDP trackers I've found, any ideas why I'm not getting a response? Did they maybe change the protocol and not tell anyone? HTTP trackers all work fine. Trackers I've tried udp://tracker.publicbt.com:80 udp://tracker.ccc.de:80 udp://tracker.istole.it:80 Also, I'm not interested in using MonoTorrent(and when I was using it, the UDP didn't work anyways). Protocol Sources http://xbtt.sourceforge.net/udp_tracker_protocol.html http://www.rasterbar.com/products/libtorrent/udp_tracker_protocol.html

  • Modify audio pitch of recorded clip (m4v)

    - by devcube
    I'm writing an app in which I'm trying to change the pitch of the audio when I'm recording a movie (.m4v). Or by modifying the audio pitch of the movie afterwards. I want the end result to be a movie (.m4v) that has the original length (i.e. same visual as original) but with modified sound pitch, e.g. a "chipmunk voice". A realtime conversion is to prefer if possible. I've read alot about changing audio pitch in iOS but most examples focus on playback, i.e. playing the sound with a different pitch. In my app I'm recording a movie (.m4v / AVFileTypeQuickTimeMovie) and saving it using standard AVAssetWriter. When saving the movie I have access to the following elements where I've tried to manipulate the audio (e.g. modify the pitch): audio buffer (CMSampleBufferRef) audio input writer (AVAssetWriterAudioInput) audio input writer options (e.g. AVNumberOfChannelsKey, AVSampleRateKey, AVChannelLayoutKey) asset writer (AVAssetWriter) I've tried to hook into the above objects to modify the audio pitch, but without success. I've also tried with Dirac as described here: Real Time Pitch Change In iPhone Using Dirac And OpenAL with AL_PITCH as described here: Piping output from OpenAL into a buffer And the "BASS" library from un4seen: Change Pitch/Tempo In Realtime I haven't found success with any of the above libs, most likely because I don't really know how to use them, and where to hook them into the audio saving code. There seems to be alot of librarys that have similar effects but focuses on playback or custom recording code. I want to manipulate the audio stream I've already got (AVAssetWriterAudioInput) or modify the saved movie clip (.m4v). I want the video to be unmodifed visually, i.e. played at the same speed. But I want the audio to go faster (like a chipmunk) or slower (like a ... monster? :)). Do you have any suggestions how I can modify the pitch in either real time (when recording the movie) or afterwards by converting the entire movie (.m4v file)? Should I look further into Dirac, OpenAL, SoundTouch, BASS or some other library? I want to be able to share the movie to others with modified audio, that's the reason I can't rely on modifying the pitch for playback only. Any help is appreciated, thanks!

  • c++ compile error

    - by Niranjan
    Hi, I am trying to develop abstract design pattern code for one of my project as below.. But, I am not able to compile the code ..giving some compile errors(like "conversion from 'ProductA1 *' to 'ProductA *' exists, but is inaccessible" ).. Can any one please help me out in this... #include "stdafx.h" #include <iostream> using namespace std; class ProductA { public: virtual void Operation1()=0; virtual void Operation2()=0; }; class ProductA1 : ProductA { public: virtual void Operation1() {cout<<"PD ProductA1 Operation1"<<endl; } virtual void Operation2() {cout<<"PD ProductA1 Operation2"<<endl; } }; class ProductA2 : ProductA { public: virtual void Operation1() {cout<<"DT ProductA2 Operation1"<<endl; } virtual void Operation2() {cout<<"DT ProductA2 Operation2"<<endl; } }; //------------------------------------------------------------- class ProductB { public: virtual void Operation3()=0; virtual void Operation4()=0; }; class ProductB1 : ProductB { public: void Operation3() { cout<<"PD ProductB1 Operation3"<<endl; } void Operation4() { cout<<"PD ProductB1 Operation4"<<endl; } }; class ProductB2 : ProductB { public: void Operation3() { cout<<"DT ProductB2 Operation3"<<endl; } void Operation4() { cout<<"DT ProductB2 Operation4"<<endl; } }; //--------------- abstrct factory --------------------------- class Factory { public: virtual ProductA* CreateA () =0; virtual ProductB* CreateB ()=0; }; class Factory1 : Factory { public: ProductA* CreateA () { return new ProductA1(); } ProductB* CreateB () { return new ProductB1(); } }; class Factory2 : Factory { public: ProductA* CreateA () { return new ProductA2(); } ProductB* CreateB () { return new ProductB2(); } }; //--------------------- client -------------------------------- int _tmain(int argc, _TCHAR* argv[]) { Factory* pf = new Factory1(); ProductA *pa = pf->CreateA(); pa->Operation1(); pa->Operation2(); ProductB *pb = pf->CreateB(); pb->Operation3(); pb->Operation4(); return 0; }
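
    For readers comparing with their own code: with the class keyword, base classes are inherited privately by default, which is exactly what makes the derived-to-base conversion "inaccessible" in the first error. Declaring the inheritance public is the usual fix (only the changed lines are sketched here):

      class ProductA1 : public ProductA { /* ... */ };
      class ProductA2 : public ProductA { /* ... */ };
      class ProductB1 : public ProductB { /* ... */ };
      class ProductB2 : public ProductB { /* ... */ };
      class Factory1  : public Factory  { /* ... */ };
      class Factory2  : public Factory  { /* ... */ };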

  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I receive the request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten. I can see a number of solutions, but they all have some disadvantages: You can create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome. The engine could run as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, I don't now which IPC technology would be best to set-up this kind of construction. There may also be a significant overhead of the communication: the engine needs to frequently exchange arrays of floating-point numbers. The engine could be adapted to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these global variables are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there? Can the stackoverflow community give some advice on these (or other) solutions?
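
    Option 3 usually ends up looking something like the sketch below (all names hypothetical): the former globals move into a context struct, the wrapper allocates one per project, and a single "active context" pointer is set before each engine call so the existing function signatures stay untouched. If calls from different projects can ever overlap, that pointer would need to be thread-local or protected by a lock.

      typedef struct EngineContext {
          /* the 20-30 former globals, one copy per loaded project */
          int     project_id;
          double *samples;
          size_t  sample_count;
          /* ... */
      } EngineContext;

      static EngineContext *g_active;   /* context used by the current call */

      void engine_set_active(EngineContext *ctx) { g_active = ctx; }

      /* inside the engine, references to the old globals become g_active->samples etc. */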
