Search Results

Search found 1668 results on 67 pages for 'legacy'.


  • Overwhelmed by design patterns... where to begin?

    - by Pete
    I am writing a simple prototype code to demonstrate and profile I/O schemes (HDF4, HDF5, HDF5 using parallel I/O, NetCDF, etc.) for a physics code. Since the focus is on I/O, the rest of the program is very simple:

        class Grid {
        public:
            floatArray x, y, z;
        };

        class MyModel {
        public:
            MyModel(const int &nip1, const int &njp1, const int &nkp1, const int &numProcs);
            Grid grid;
            map<string, floatArray> plasmaVariables;
        };

    Here floatArray is a simple class that lets me define arbitrarily dimensioned arrays and do mathematical operations on them (i.e., x+y is point-wise addition). Of course, I could use better encapsulation (write accessors/setters, etc.), but that's not the concept I'm struggling with.

    For the I/O routines, I am envisioning applying simple inheritance:

    - An abstract I/O class that defines read and write functions to fill in the MyModel object
    - An HDF4 derived class
    - HDF5
    - HDF5 using parallel I/O
    - NetCDF
    - etc.

    The code should read data in any of these formats, then write out to any of these formats. In the past, I would add an AbstractIO member to MyModel and create/destroy this object depending on which I/O scheme I want. In this way, I could do something like:

        myModelObj.ioObj->read("input.hdf");
        myModelObj.ioObj->write("output.hdf");

    I have a bit of OOP experience but very little on the design patterns front, so I recently acquired the Gang of Four book "Design Patterns: Elements of Reusable Object-Oriented Software". OOP designers: which pattern(s) would you recommend I use to integrate I/O with the MyModel object? I am interested in answering this for two reasons: to learn more about design patterns in general, and to apply what I learn to help refactor a large, old, crufty legacy physics code to be more human-readable and extensible. I am leaning towards applying the Decorator pattern to MyModel, so I can attach the I/O responsibilities dynamically (i.e., whether to use HDF4, HDF5, etc.). However, I don't feel very confident that this is the best pattern to apply, and reading the Gang of Four book cover-to-cover before I start coding feels like a good way to develop an unhealthy caffeine addiction. What patterns do you recommend?
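    What the question describes is essentially the Strategy pattern: the I/O scheme is an interchangeable algorithm behind a fixed interface. A minimal sketch of that shape (class and member names here are hypothetical, not taken from the question's code base):

        class AbstractIO {
        public:
            virtual ~AbstractIO() {}
            virtual void read(const std::string &file, MyModel &model) = 0;
            virtual void write(const std::string &file, const MyModel &model) = 0;
        };

        class Hdf5IO : public AbstractIO {
        public:
            virtual void read(const std::string &file, MyModel &model) { /* HDF5 calls */ }
            virtual void write(const std::string &file, const MyModel &model) { /* HDF5 calls */ }
        };

        // Pick the concrete scheme at run time, e.g. from a command-line flag.
        void runIO(MyModel &myModelObj) {
            std::auto_ptr<AbstractIO> io(new Hdf5IO);
            io->read("input.hdf", myModelObj);
            io->write("output.hdf", myModelObj);
        }

    The key design choice is that MyModel stays a plain data holder while the format-specific logic varies independently behind AbstractIO, which is Strategy rather than Decorator (Decorator would wrap one I/O object in another to layer behaviour, which is not what is needed here).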

    Read the article

  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC; it will call a C++ DLL compiled with the /clr flag that bridges the communication between the managed (C# DLLs) and unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called, which makes it very difficult to debug. The only information I get is the following stack trace:

        > 64006108()
        ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
        kernel32.dll!_CreateMutexW@12() + 0x7a bytes

    Here are some scenarios I've tried:

    - Turned on Exceptions -> Win32 Exceptions -> c0000005 Access Violation to break when thrown. Still, the most detail I get is the above stack trace. I've tried stepping into the application with F10, but it fails before any breakpoints are hit, with the same stack trace.
    - I've stubbed out the bridge DLL so that it only has one method, which returns a bool and is coded to just return false (no C# code called):

          bool DllPassthrough::IsFailed()
          {
              return false;
          }

      If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs.
    - I've created a stub MFC application using the Visual Studio wizard for multi-document applications and called DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL.
    - I've tried doing a manual LoadLibrary on winmm.lib as outlined in the note "Access violation when using c++/cli". The application still fails.

    So, my questions are: how do I solve the problem? Any hints, strategies, or previous incidents? And, failing that, how can I get more information on what code segment or library is causing the access violation? If I try more involved workarounds like LoadLibrary calls, I'd like to narrow it down to the failing libraries. Thanks. BTW, we are using Visual Studio 2008 and the project is built against the .NET 2.0 framework for the managed sections.

    Read the article

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries against the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in those cases a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL. The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so there may now be one that offers acceptable performance. As I see it, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without having to manually decompose their results (i.e., avoid manual rs.getInt(42) calls, as they are very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?

    Read the article

  • Mocking methods that call other methods still hits the database. Can I avoid it?

    - by devnet247
    Hi, it has been decided to write some unit tests using Moq etc. for a lot of legacy C# code (this is beyond my control, so I cannot answer the whys of it). Now, how do you cope with a scenario where you don't want to hit the database, but you indirectly still hit it? This is something I put together; it's not the real code, but it gives you an idea. How would you deal with this sort of scenario? Basically, calling a method on a mocked interface still makes a DAL call, because inside that method there are other methods that are not part of that interface. Hope that's clear.

        [TestFixture]
        public class Can_Test_this_legacy_code
        {
            [Test]
            public void Should_be_able_to_mock_login()
            {
                var mock = new Mock<ILoginDal>();
                User user;
                var userName = "Jo";
                var password = "password";
                mock.Setup(x => x.login(It.IsAny<string>(), It.IsAny<string>(), out user));
                var bizLogin = new BizLogin(mock.Object);
                bizLogin.Login(userName, password, out user);
            }
        }

        public class BizLogin
        {
            private readonly ILoginDal _login;
            public BizLogin(ILoginDal login)
            {
                _login = login;
            }
            public void Login(string userName, string password, out User user)
            {
                // Even if I don't want it to, this will call the DAL!!!!!
                var bizPermission = new BizPermission();
                var permissionList = bizPermission.GetPermissions(userName);
                // Method I am actually testing
                _login.login(userName, password, out user);
            }
        }

        public class BizPermission
        {
            public List<Permission> GetPermissions(string userName)
            {
                var dal = new PermissionDal();
                var permissionlist = dal.GetPermissions(userName);
                return permissionlist;
            }
        }

        public class PermissionDal
        {
            public List<Permission> GetPermissions(string userName)
            {
                // I SHOULD NOT BE GETTING HERE!!!!!!
                return new List<Permission>();
            }
        }

        public interface ILoginDal
        {
            void login(string userName, string password, out User user);
        }

        public interface IOtherStuffDal
        {
            List<Permission> GetPermissions();
        }

        public class Permission
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    Any suggestions? Am I missing the obvious? Is this untestable code? Very, very grateful for any suggestions.

    Read the article

  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I have received a request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten. I can see a number of solutions, but they all have some disadvantages:

    - You can create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes, and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome.
    - The engine could run as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working, and I don't know which IPC technology would be best for setting up this kind of construction. There may also be a significant overhead in the communication: the engine needs to frequently exchange arrays of floating-point numbers.
    - The engine could be adapted to support multiple projects (see the sketch below). This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these globals are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there?

    Can the stackoverflow community give some advice on these (or other) solutions?
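    A minimal sketch of the third option, with hypothetical names that are not taken from the engine: the globals move into a context struct that each entry point receives, and for "find the current instance without threading it through every signature", a thread-local pointer is the usual stand-in for walking the call stack:

        /* Hypothetical: the 20-30 globals gathered into one context. */
        typedef struct EngineContext {
            double *mesh;     /* was: global double *g_mesh */
            int     nCells;   /* was: global int g_nCells   */
            /* ... the remaining globals ... */
        } EngineContext;

        /* Exported entry points take the context explicitly... */
        __declspec(dllexport) int Engine_Step(EngineContext *ctx);

        /* ...while internal code reaches the "current" project through a
           thread-local variable set on entry (MSVC syntax; dynamically
           loaded DLLs on pre-Vista Windows may need TlsAlloc instead): */
        static __declspec(thread) EngineContext *current_ctx;

        int Engine_Step(EngineContext *ctx)
        {
            current_ctx = ctx;   /* establish the current project */
            /* ... existing code now reads current_ctx->nCells
                   instead of the old global ... */
            return 0;
        }

    The mechanical search-and-replace from g_foo to current_ctx->foo is what makes this conversion automatable, since the internal function signatures do not have to change.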

    Read the article

  • Across-process marshalling problem with an array of points

    - by ElMagn
    Hi all, we have what we think is a marshalling problem with a renderer object when it is called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below:

        typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)]
        struct Point {
            double X;
            double Y;
        } Point;

        [
            object,
            uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3),
            helpstring("IPoints Interface"),
            dual,
        ]
        interface IPoints : IDispatch
        {
            HRESULT DrawPolyLine([in] long hDC, [in] short count,
                                 [in, size_is(count)] Point * points);
            // many more like DrawLine
        };

    The count parameter is the number of points, and the points parameter is the array of actual points. We have two processes running, a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the renderer's clients in the GDP. When those clients call into the renderer, everything displays correctly. (The renderer is created at startup, BTW.) There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When these clients call into the renderer, only the first point in the array is marshalled correctly; all the other points are garbage. It seems that rendering works only when client creation starts from the same process as the renderer.

    Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer, then the points are marshalled correctly. We tried using a background thread from the thread pool in C#, and that indeed worked. The problem is that the Windows Forms created by the clients stopped working, because accessing a form's controls from a thread other than the thread that created them is not allowed. We might change the calls that access the forms, but we have quite a few of them, so we are looking into a different solution that might involve changing the renderer. The other problem is that the renderer is legacy code and we can't just change the interface. I am wondering what we can do to the renderer's interface that would help with marshalling across process boundaries. Any ideas would be greatly appreciated. Regards, ElMagn
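    One detail that may be relevant here, offered as a hedged aside rather than a confirmed diagnosis: a dual interface like this is typically marshalled by the type-library marshaller, which does not honour size_is(), so the Point* tends to cross the process boundary as a pointer to a single Point — which would match the "only the first point arrives" symptom. The automation-friendly alternative is to carry the array as a SAFEARRAY, which is self-describing; a sketch of what an added method could look like (a new method alongside the legacy one, since the existing interface can't change):

        // Hypothetical variant: the automation marshaller copies the whole
        // SAFEARRAY across the boundary, no size_is() needed.
        HRESULT DrawPolyLine2([in] long hDC, [in] SAFEARRAY(Point) points);

    The size_is() attribute is honoured only by a MIDL-generated proxy/stub DLL, so another route (also an assumption to verify) is to register such a proxy/stub for IPoints instead of relying on the type library.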

    Read the article

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores job vacancies. The Vacancy table has a number of fixed fields across all clients, such as Title, Description, and Salary Range. There is an EAV design for "custom" fields that the clients can set up themselves, such as Manager Name or Working Hours: the field names are stored in a ClientText table and the data in a VacancyClientText table with VacancyId, ClientTextId, and Value. Lastly there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a ClientCategory table listing the types of tag ("Locations", "Skills"), a ClientCategoryItem table listing the valid values for each category (e.g., "London, Paris, New York, Rome" or "C#, VB, PHP, Python"), and finally a VacancyClientCategoryItem table with VacancyId and ClientCategoryItemId for each of the items selected for the vacancy. There are no limits to the number of custom fields or custom categories that a client can add.

    I am now designing a new system that is very similar to the existing one; however, I can restrict the number of custom fields a client can have, and it's being built from scratch, so I have no legacy issues to deal with. For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5, which removes one of the EAV designs. It is the tagging / categorising design that I am struggling with. If I limit a client to 5 categories / types of tag, should I create 5 tables listing the possible values (CustomCategoryItems1-5) and then an additional 5 many-to-many tables (VacancyCustomCategoryItem1-5)? This would result in 10 tables providing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this would mean a lot of code change.

    Can anyone therefore suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has hit all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS is SQL Server 2005, though 2008 is an option if required for any particular pattern.
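    For comparison, a middle ground between the two extremes described above is to keep the existing three-table shape but cap the number of categories with a check constraint instead of multiplying tables — a hypothetical sketch, not the existing schema:

        -- One pair of tables covers all five custom categories; raising the
        -- cap later is a one-line constraint change, not a schema redesign.
        CREATE TABLE CustomCategoryItem (
            CustomCategoryItemId int IDENTITY PRIMARY KEY,
            CategoryNumber       tinyint NOT NULL
                CHECK (CategoryNumber BETWEEN 1 AND 5),
            Value                nvarchar(100) NOT NULL
        );

        CREATE TABLE VacancyCustomCategoryItem (
            VacancyId            int NOT NULL,
            CustomCategoryItemId int NOT NULL,
            PRIMARY KEY (VacancyId, CustomCategoryItemId)
        );

    Because every row carries a plainly typed Value rather than an EAV triple spread over generic attribute tables, the joins stay simple while the "5 categories" business rule lives in one constraint.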

    Read the article

  • Maven Ant BuildException with maven-antrun-plugin ... unable to find javac compiler

    - by robsbobs
    I'm trying to make Maven call an Ant build for some legacy code. The Ant build builds correctly through Ant; however, when I call it using the maven-antrun-plugin, it fails with the following error:

        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project CoreServices: An Ant BuildException has occured: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:158: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:62: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:33: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\ods\build.xml:41: Unable to find a javac compiler;
        [ERROR] com.sun.tools.javac.Main is not on the classpath.
        [ERROR] Perhaps JAVA_HOME does not point to the JDK.
        [ERROR] It is currently set to "C:\bea\jdk150_11\jre"

    My javac lives at C:\bea\jdk150_11\bin, and this works for everything else. I'm not sure where Maven is getting this version of JAVA_HOME; the JAVA_HOME in the Windows environment variables is set to C:\bea\jdk150_11\ as it should be. The Maven code I'm using to call the build.xml is:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-antrun-plugin</artifactId>
              <version>1.6</version>
              <executions>
                <execution>
                  <phase>install</phase>
                  <configuration>
                    <target>
                      <ant antfile="../build/build.xml" target="deliver">
                      </ant>
                    </target>
                  </configuration>
                  <goals>
                    <goal>run</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
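    A workaround often suggested for this particular "Unable to find a javac compiler" failure — offered as a hedged sketch, with the version and relative path being assumptions for this JDK layout — is to hand tools.jar to the plugin explicitly, so the embedded Ant no longer depends on how JAVA_HOME resolves inside the Maven process:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.6</version>
          <dependencies>
            <!-- Puts com.sun.tools.javac.Main on the plugin classpath even
                 when Maven itself is running against the JRE. -->
            <dependency>
              <groupId>com.sun</groupId>
              <artifactId>tools</artifactId>
              <version>1.5.0</version>
              <scope>system</scope>
              <systemPath>${java.home}/../lib/tools.jar</systemPath>
            </dependency>
          </dependencies>
          <!-- executions as before -->
        </plugin>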

    Read the article

  • Methodology for a Rails app

    - by Aaron Vegh
    I'm undertaking a rather large conversion from a legacy database-driven Windows app to a Rails app. Because of the large number of forms and database tables involved, I want to make sure I've got the right methodology before getting too far. My chief concern is minimizing the amount of code I have to write. There are many models that interact together, and I want to make sure I'm using them correctly. Here's a simplified set of models:

        class Patient < ActiveRecord::Base
          has_many :PatientAddresses
          has_many :PatientFileStatuses
        end

        class PatientAddress < ActiveRecord::Base
          belongs_to :Patient
        end

        class PatientFileStatus < ActiveRecord::Base
          belongs_to :Patient
        end

    The controller determines if there's a Patient selected; everything else is based on that. In the view, I will be needing data from each of these models. But it seems like I have to write an instance variable in my controller for every attribute that I want to use. So I start writing code like this:

        @patient = Patient.find(session[:patient])
        @patient_addresses = @patient.PatientAddresses
        @patient_file_statuses = @patient.PatientFileStatuses
        @enrollment_received_when = @patient_file_statuses[0].EnrollmentReceivedWhen
        @consent_received = @patient_file_statuses[0].ConsentReceived
        @consent_received_when = @patient_file_statuses[0].ConsentReceivedWhen

    The first three lines grab the Patient model and its relations. The next three lines are examples of my providing values to the view from one of those relations. The view has a combination of text fields and select fields to show the data above. For example:

        <%= select("patientfilestatus", "ConsentReceived",
                   {"val1" => "val1", "val2" => "val2", "Written" => "Written"},
                   :include_blank => true) %>
        <%= calendar_date_select_tag "patient_file_statuses[EnrollmentReceivedWhen]",
                                     @enrollment_complete_when, :popup => :force %>

    (BTW, the select tag isn't really working; I think I have to use collection_select?) My questions are:

    - Do I have to manually declare the value of every instance variable in the controller, or can/should I do it within the view?
    - What is the proper technique for displaying a select tag for data that's not the primary model?
    - When I go to save changes to this form, will I have to manually pick out the attributes for each model and save them individually? Or is there a way to name the fields such that ActiveRecord does the right thing?

    Thanks in advance, Aaron.

    Read the article

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made this a community wiki as it is better suited to a collaborative format.)

    There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons, and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a high-level comparison of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from that for an in-house enterprise-level CRUD application. I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong. I am primarily focusing on the Microsoft approaches to keep this focused.

    ADO.NET Entity Framework
    - Database agnostic: good because it allows swapping backends in and out
    - Bad because it can hit performance, and database vendors are not too happy about it
    - Seems to be MS's preferred route for the future
    - Complicated to learn (though, see 267357)
    - Accessed through LINQ to Entities, so it provides ORM, thus allowing abstraction in your code

    LINQ to SQL
    - Uncertain future (see "Is LINQ to SQL truly dead?")
    - Easy to learn (?)
    - Only works with MS SQL Server
    - See also "Pros and cons of LINQ"

    "Standard" ADO.NET
    - No ORM
    - No abstraction, so you are back to "roll your own" and playing with dynamically generated SQL
    - Direct access, allowing potentially better performance

    This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is", and since that is an unanswerable question, hopefully we don't have to go into it too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code; you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back end. Whereas if you primarily have user interaction that causes database interaction at the level of tens or hundreds of rows, then ORM makes complete sense. So I guess my argument for good old-fashioned ADO.NET would be the case where you manipulate and modify large datasets, in which case you will benefit from direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures.

    ASP.NET Data Source Controls
    - Are these something altogether different, or just a layer over standard ADO.NET? Would you really use these if you had a DAL, or if you implemented LINQ or Entities?

    NHibernate
    - Seems to be a very powerful ORM?
    - Open source

    Some other relevant links: "NHibernate or LINQ to SQL", "Entity Framework vs LINQ to SQL".

    Read the article

  • Weirdness debugging Visual Studio C++ 2008

    - by Jeff Dege
    I have a legacy C++ app that, in its most recent incarnation, we've been building with makefiles and VS2003's command-line tool. I'm trying to get it to build using VS2008 and MSBuild. The build is working OK, but I'm getting errors where I'd never seen errors before, and stepping through in VS2008's debugger only confuses me.

    The app links a number of static libraries, which fall into two categories: those that are part of the same application suite, and those that are shared between a number of application suites. Originally, I had a project file for each static library and two .sln files: one for the application suite (including the suite-specific libraries) and one for the non-suite-specific shared libraries. The shared libraries were included in the link, but their projects were not included in the application suite .sln.

    The application instantiates an object from a class that is defined in one of the shared libraries. The class has a member object of a class that wraps a linked list. The constructor of the linked-list class sets its "head" pointer to null. When I run the app and try to add an element to the linked list, I get an error - the head pointer contains the value 0xCCCCCCCC. So I step through with the debugger, and see weirdness. When the current line in the debugger is in a source file belonging to the static library, the head pointer contains 0x00000000. When I step into the constructor, I can see the pointer being set to that value, and when I step into any other method of the class, I can see that the head pointer still contains 0x00000000. But when I step out into methods that are defined in the application suite .sln, it contains 0xCCCCCCCC. It's not that it's being overwritten: it changes back and forth depending on which source file I am currently debugging. So I included the shared library's project in the application suite .sln, and now I see the head pointer containing 0xCCCCCCCC all the time. It looks like the constructor of the linked-list class is not being called. So now I'm entirely confused. Anyone have any ideas?

    Read the article

  • Dynamic Dispatch without Virtual Functions

    - by Kristopher Johnson
    I've got some legacy code that, instead of virtual functions, uses a kind field to do dynamic dispatch. It looks something like this:

        // Base struct shared by all subtypes
        // Plain old data; can't use virtual functions
        struct POD {
            int kind;
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
        };

        enum Kind { Kind_Derived1, Kind_Derived2, Kind_Derived3 };

        struct Derived1: POD {
            Derived1() { kind = Kind_Derived1; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived2: POD {
            Derived2() { kind = Kind_Derived2; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

        struct Derived3: POD {
            Derived3() { kind = Kind_Derived3; }
            int GetFoo();
            int GetBar();
            int GetBaz();
            int GetXyzzy();
            // plus other type-specific data and function members
        };

    and then the POD class's function members are implemented like this:

        int POD::GetFoo()
        {
            // Call kind-specific function
            switch (kind) {
            case Kind_Derived1: {
                Derived1 *pDerived1 = static_cast<Derived1*>(this);
                return pDerived1->GetFoo();
            }
            case Kind_Derived2: {
                Derived2 *pDerived2 = static_cast<Derived2*>(this);
                return pDerived2->GetFoo();
            }
            case Kind_Derived3: {
                Derived3 *pDerived3 = static_cast<Derived3*>(this);
                return pDerived3->GetFoo();
            }
            default:
                throw UnknownKindException(kind, "GetFoo");
            }
        }

    POD::GetBar(), POD::GetBaz(), POD::GetXyzzy(), and other members are implemented similarly. This example is simplified: the actual code has about a dozen different subtypes of POD and a couple dozen methods. New subtypes of POD and new methods are added pretty frequently, so every time we do that, we have to update all these switch statements. The typical way to handle this would be to declare the function members virtual in the POD class, but we can't do that because the objects reside in shared memory. A lot of code depends on these structs being plain old data, so even if I could figure out some way to have virtual functions in shared-memory objects, I wouldn't want to do that. So I'm looking for suggestions on the best way to clean this up so that all the knowledge of how to call the subtype methods is centralized in one place, rather than scattered among a couple dozen switch statements in a couple dozen functions. What occurs to me is that I can create some sort of adapter class that wraps a POD and uses templates to minimize the redundancy. But before I start down that path, I'd like to know how others have dealt with this.
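    One way to centralize that dispatch while keeping POD a plain struct — a hedged sketch with hypothetical names, not the project's actual code — is a per-method table of function pointers indexed by kind: effectively a hand-rolled vtable that lives outside the shared-memory objects:

        // One thunk per subtype, generated from a single template.
        typedef int (*GetFooFn)(POD *);

        template <typename T>
        int GetFooThunk(POD *p) { return static_cast<T *>(p)->GetFoo(); }

        // Table order must match the Kind enum; adding a subtype means
        // adding one line here instead of editing every switch statement.
        static const GetFooFn getFooTable[] = {
            GetFooThunk<Derived1>,   // Kind_Derived1
            GetFooThunk<Derived2>,   // Kind_Derived2
            GetFooThunk<Derived3>,   // Kind_Derived3
        };

        int POD::GetFoo()
        {
            const int n = sizeof(getFooTable) / sizeof(getFooTable[0]);
            if (kind < 0 || kind >= n)
                throw UnknownKindException(kind, "GetFoo");
            return getFooTable[kind](this);
        }

    Because the tables hold ordinary function pointers in each process's own address space, nothing about the shared-memory layout of POD changes, unlike a compiler-generated vptr.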

    Read the article

  • How do I properly register the Type Library of a VB.NET COM+ Component?

    - by k_Dank
    I am looking to upgrade legacy VB6 COM+ components to VB.NET components. I have seemingly upgraded one already, called EventPackage, which has one class, IEventListener. Another, TradeOrders, implements EventPackage.IEventListener. When attempting to build TradeOrders, I get the following errors/warnings:

        Cannot load type library for reference "EventPackage". Library not registered. (Exception from HRESULT: 0x8002801D (TYPE_E_LIBNOTREGISTERED))
        The referenced component 'EventPackage' could not be found.
        Type 'EventPackage.IEventListener' is not defined.

    In the .vbproj, I notice this reference:

        <COMReference Include="EventPackage">
          <Guid>{0D76C094-21A6-4E04-802B-6E539F7102D7}</Guid>
          <Lcid>0</Lcid>
          <VersionMajor>2</VersionMajor>
          <VersionMinor>0</VersionMinor>
          <WrapperTool>tlbimp</WrapperTool>
        </COMReference>

    When I search the registry for this GUID, I find nothing. For similar COM+ objects, I find their GUIDs in HKEY_CLASSES_ROOT\CLSID\{...}\TypeLib ("..." being the GUID of the other component). When I go to the registry key corresponding to EventPackage.IEventListener, I find that there is no \TypeLib subkey. As you might suspect, searching the registry for "0D76C094-21A6-4E04-802B-6E539F7102D7" yields no results. So I know this must be a registry problem, but I have tried seemingly everything Google has turned up. I have tried the Regasm.exe and Regsvcs.exe tools to no avail. Many pages just tell me that dragging the DLL into the COM+ manager should automatically register the component. So how do I register the type library?

    Details on how I made the EventPackage COM+ component:

    - Ran the VB6-to-VB.NET wizard.
    - Added some lines to the AssemblyInfo.vb file:
      - Imports System.EnterpriseServices and Imports System.Data.SqlClient
      - <Assembly: CLSCompliant(True)>
      - <Assembly: AssemblyKeyFileAttribute("...")> for a strong name
      - <Assembly: Guid("...")> (where "..." is the COM+ CLSID of the old component)
    - Added the following to the class file IEventListener.vb:
      - Imports System.EnterpriseServices
      - <ComClass("...")> _ (where "..." is the proper COM+ CLSID, the only argument)
      - Inherits ServicedComponent
      - Changed the ProgId made by the conversion wizard to the proper value (from <System.Runtime.InteropServices.ProgId("IEventListener_NET.IEventListener")> _ to <System.Runtime.InteropServices.ProgId("EventPackage.IEventListener")> _)
    - Then I dragged the DLL into the proper COM+ application in the COM+ manager (although the "Path" is not specified and only says mscoree.dll).

    Read the article

  • Global NSMutableArray doesn't seem to be holding values

    - by diatrevolo
    I have a Cocos2D iPhone application that requires a set of CGRects overlaid on an image to detect touches within them. "Data" below is a class that holds values parsed from an XML file. "delegateEntries" is an NSMutableArray that contains several Data objects, pulled from another NSMutableArray called "entries" that lives in the application delegate. For some strange reason, I can get at these values without problems in the init function, but further down the class in question I try to get at these values and the application crashes without an error message (as an example, I've included the ccTouchBegan method, which accesses this data through the populateFieldsForTouchedItem method). Can anyone see why these values would not be accessible from other methods? No objects get released until dealloc. Thanks in advance!

        @synthesize clicked, delegate, data, image, blurImage, normalImage, arrayOfRects, delegateEntries;

        - (id)initWithTexture:(CCTexture2D *)aTexture {
            if( (self=[super initWithTexture:aTexture] )) {
                arrayOfRects = [[NSMutableArray alloc] init];
                delegateEntries = [[NSMutableArray alloc] init];
                delegate = (InteractivePIAppDelegate *)[[UIApplication sharedApplication] delegate];
                delegateEntries = [delegate entries];
                data = [delegateEntries objectAtIndex:0];
                NSLog(@"Assigning %@", [[delegateEntries objectAtIndex:0] backgroundImage]);
                NSLog(@"%@ is the string", [[data sections] objectAtIndex:0]);
                //CGRect rect;
                NSLog(@"Count of array is %i", [delegateEntries count]);
                //collect as many items as there are XML entries
                for(int i=0; i<[delegateEntries count]; i++) {
                    if([[delegateEntries objectAtIndex:i] xPos]) {
                        NSLog(@"Found %i items", i+1);
                        [arrayOfRects addObject:[NSValue valueWithCGRect:CGRectMake(
                            [[[delegateEntries objectAtIndex:i] xPos] floatValue],
                            [[[delegateEntries objectAtIndex:i] yPos] floatValue],
                            [[[delegateEntries objectAtIndex:i] xBounds] floatValue],
                            [[[delegateEntries objectAtIndex:i] yBounds] floatValue])]];
                    } else {
                        NSLog(@"Nothing");
                    }
                }
                //remove the following once the NSMutableArray from above works (legacy)
                blurImage = [[NSString alloc] initWithString:[data backgroundBlur]];
                NSLog(@"5");
                normalImage = [[NSString alloc] initWithString:[data backgroundImage]];
                clicked = NO;
            }
            return self;
        }

    And then:

        - (void)populateFieldsForTouchedItem:(TouchedRect)touchInfo {
            Data *touchDatum = [[Data alloc] init];
            touchDatum = [[self delegateEntries] objectAtIndex:touchInfo.recordNumber];
            NSLog(@"Assigning %@", [[[self delegateEntries] objectAtIndex:touchInfo.recordNumber] backgroundImage]);
            rect = [[arrayOfRects objectAtIndex:touchInfo.recordNumber] CGRectValue];
            image = [[NSString alloc] initWithString:[[touchDatum sections] objectAtIndex:0]];
            [touchDatum release];
        }

        - (BOOL)ccTouchBegan:(UITouch *)touch withEvent:(UIEvent *)event {
            TouchedRect touchInfo = [self containsTouchLocation:touch];
            NSLog(@"Information pertains to %i", touchInfo.recordNumber);
            if ( !touchInfo.touched && !clicked ) {
                //needed since the touch location changes when zoomed
                NSLog(@"NOPE");
                return NO;
            }
            [self populateFieldsForTouchedItem:touchInfo];
            NSLog(@"YEP");
            return YES;
        }

    Read the article

  • Mixing C and C++, raw pointers and (boost) shared pointers

    - by oompahloompah
    I am working in C++ with some legacy C code. I have a data structure that (during initialisation) makes a copy of the structure pointed to by a pointer passed to its initialisation function. Here is a simplification of what I am trying to do - hopefully no important detail has been lost in the "simplification":

        /* C code */
        typedef struct MyData {
            double * elems;
            unsigned int len;
        } MyData;

        int NEW_mydata(MyData* data, unsigned int len)
        {
            // no error checking
            data->elems = (double *)calloc(len, sizeof(double));
            return 0;
        }

        typedef struct Foo {
            MyData data_;
        } Foo;

        void InitFoo(Foo * foo, const MyData * the_data)
        {
            // alloc mem etc ... then assign the STRUCTURE
            foo->data_ = *the_data;
        }

        // C++ code
        typedef boost::shared_ptr<MyData> MyDataPtr;
        typedef std::map<std::string, MyDataPtr> Datamap;

        class FooWrapper {
        public:
            FooWrapper(const std::string& key)
            {
                MyDataPtr mdp = dmap[key];
                InitFoo(&m_foo, const_cast<MyData*>(mdp.get()));
            }
            ~FooWrapper();

            double get_element(unsigned int index) const
            {
                return m_foo.data_.elems[index];
            }

        private:
            // non copyable, non assignable
            FooWrapper(const FooWrapper&);
            FooWrapper& operator= (const FooWrapper&);
            Foo m_foo;
        };

        int main(int argc, char *argv[])
        {
            MyData data1, data2;
            Datamap dmap;

            NEW_mydata(&data1, 10);
            data1.elems[0] = static_cast<double>(22/7);

            NEW_mydata(&data2, 42);
            data2.elems[0] = static_cast<double>(13/21);

            boost::shared_ptr<MyData> d1(&data1), d2(&data2);
            dmap["data1"] = d1;
            dmap["data2"] = d2;

            FooWrapper fw("data1");

            // expect 22/7, get something else (random number?)
            double ret = fw.get_element(0);
        }

    Essentially, what I want to know is this: is there any reason why the data retrieved from the map would be different from the data stored in the map?
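    Two hedged observations on the sketch above, flagged as possible culprits rather than a confirmed diagnosis: boost::shared_ptr owns its pointee and will delete it, so constructing one from the address of a stack variable (as main does with &data1) leads to undefined behaviour when the pointers are destroyed; and 22/7 is integer division, so even on the happy path the stored value would be 3.0 rather than an approximation of pi. For sharing a stack-allocated struct without transferring ownership, the usual Boost idiom is a no-op deleter:

        #include <boost/shared_ptr.hpp>

        struct null_deleter {
            void operator()(void const *) const {}  // deliberately does nothing
        };

        // Hypothetical usage: d1 shares data1 but never tries to free it.
        MyData data1;
        boost::shared_ptr<MyData> d1(&data1, null_deleter());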

    Read the article

  • Mobile Safari text selection after input field focus

    - by Andy
    I have some legacy HTML that needs to work on old and new browsers (down to IE7) as well as the iPad. The iPad is the biggest issue because of how text selection is handled. On a page there is a textarea as well as some text instructions. The instructions are in a <div>, and the user needs to be able to select them. The problem is that once focus is placed in the textarea, the user cannot subsequently select the text instructions in the <div>. This is because the text cannot receive focus. According to the Safari Web Content Guide: Handling Events (especially "Making Elements Clickable"), you can add an onclick event handler to the div you want to receive focus. This solution works (although it is not ideal) in iOS 6.x, but it does not work in iOS 5.x. Does anyone have a suggestion that minimizes changes to our existing code and produces consistent user interaction? Here is sample code that shows the problem:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <meta http-equiv="content-type" content="text/html; charset=UTF-8">
        <script src="http://code.jquery.com/jquery-1.8.2.js"></script>
        <!--
        Resources: Safari Web Content Guide: Handling Events (especially, "Making Elements Clickable")
        http://developer.apple.com/library/safari/#documentation/appleapplications/reference/safariwebcontent/HandlingEvents/HandlingEvents.html#//apple_ref/doc/uid/TP40006511-SW1
        -->
        </head>
        <body>
        <div style="width:550">
          <div onclick='function() { void(0); }'>
            <p>On the iPad, I want to be able to select this text after the textarea has had focus.</p>
          </div>
          <textarea rows="20" cols="80">After focus is in the textarea, can you select the text above?</textarea>
        </div>
        </body>
        </html>

    Read the article

  • VS 2010 IDE 2GB limit

    - by user561732
    I am using VS 2010 on a Windows 7 64-bit system with 8 GB of memory. My application is 32-bit. While in the VS 2010 .NET IDE, the app shows up in the Windows Task Manager as "MyApp.vshost.exe *32", while the VS IDE itself shows up as "devenv.exe *32". I checked, and it appears that the VS 2010 IDE file (devenv.exe) is compiled with the /LargeAddressAware flag. However, when debugging large models, the IDE fails with an out-of-memory exception. In the Task Manager, the "MyApp.vshost.exe *32" process shows about 1400 MB of memory usage (while the "devenv.exe *32" process is well under 500 MB).

    Is it possible to make the "MyApp.vshost.exe *32" process /LargeAddressAware in order to avoid this out-of-memory situation? If so, how can this be done in the IDE? While setting the final application binary to be /LargeAddressAware would work, I still need to be able to debug the app in the IDE with these kinds of large models. I should also note that my app has a deep object hierarchy with many collections that together require a lot of memory; my issue is not related to trying to create, say, one large array that requires more than 2 GB of memory.

    I should note that I am able to run the same app in the VB6 IDE without hitting an out-of-memory situation, as long as the VB6 IDE is made /LargeAddressAware. In the case of VB6, the IDE and the app being debugged are part of the same process (not split into two as with VS 2010), and the VB6 process can grow beyond 3 GB without running into out-of-memory issues. Ultimately, my objective is to have my app run completely in 64-bit to access more memory. I am hoping that in such cases the IDE will allow the debugging process to exceed 2 GB without crashing (and certainly more than the current 1.4 GB). However, for now, while 95% of my app is 64-bit, I am calling a legacy 32-bit COM DLL, and so my entire app is forced to run in 32-bit mode until I replace that DLL.
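    For what it's worth, a hedged sketch of the approach usually suggested for this: the vshost executable is an ordinary PE file, so the large-address-aware bit can be set on it directly with the editbin tool that ships with Visual Studio (the file name and location below are assumptions, and the change is lost if Visual Studio regenerates the vshost file):

        rem Run from a Visual Studio command prompt, in the debug output folder.
        editbin /LARGEADDRESSAWARE "MyApp.vshost.exe"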

    Read the article

  • Hudson on Debian Lenny

    - by Laurent
    Hello, I installed the Hudson daemon on a server (running Debian Lenny testing) some time ago. All was working until I performed an upgrade. Since then, Hudson isn't accessible on port 8080 (the default port it uses). I have looked for iptables problems; however, port 8080 is open in INPUT and OUTPUT. The configuration file in /etc/default/hudson seems okay - I haven't touched it. And if I do a ps aux | grep hudson, the Hudson daemon is running.

    Update 1: What is really strange to me is that in /var/log/hudson/hudson.log I get no error:

        [Winstone 2010/02/10 17:10:04] - Control thread shutdown successfully
        [Winstone 2010/02/10 17:10:04] - Winstone shutdown successfully
        Running from: /usr/share/hudson/hudson.war
        [Winstone 2010/02/10 17:10:43] - Beginning extraction from war file
        hudson home directory: /var/lib/hudson
        [Winstone 2010/02/10 17:10:44] - HTTP Listener started: port=8080
        [Winstone 2010/02/10 17:10:44] - AJP13 Listener started: port=8009
        [Winstone 2010/02/10 17:10:44] - Winstone Servlet Engine v0.9.10 running: controlPort=disabled
        10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
        INFO: Started initialization
        10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
        INFO: Listed all plugins
        10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
        INFO: Prepared all plugins
        10 févr. 2010 17:10:44 hudson.model.Hudson$4 onAttained
        INFO: Started all plugins
        10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained
        INFO: Loaded all jobs
        10 févr. 2010 17:10:46 hudson.model.Hudson$4 onAttained
        INFO: Completed initialization
        10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh
        INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@caa559d: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy
        10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
        INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@caa559d]: org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1
        10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
        INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@40d2f5f1: defining beans [daoAuthenticationProvider,authenticationManager,userDetailsService]; root of factory hierarchy
        10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext prepareRefresh
        INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@4d88a387: display name [Root WebApplicationContext]; startup date [Wed Feb 10 17:10:47 CET 2010]; root of context hierarchy
        10 févr. 2010 17:10:47 org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
        INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@4d88a387]: org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0
        10 févr. 2010 17:10:47 org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
        INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@6153e0c0: defining beans [filter,legacy]; root of factory hierarchy
        10 févr. 2010 17:10:47 hudson.TcpSlaveAgentListener <init>
        INFO: JNLP slave agent listener started on TCP port 59750

    Update 2: What I get with lsof -i -n -P | grep hudson:

        java 28985 hudson  97u IPv6 2002707 0t0 TCP *:8080 (LISTEN)
        java 28985 hudson  99u IPv6 2002708 0t0 TCP *:8009 (LISTEN)
        java 28985 hudson 147u IPv6 2002711 0t0 TCP *:59750 (LISTEN)
        java 28985 hudson 150u IPv6 2002712 0t0 UDP *:33848

    I don't know what else I can verify. Does anyone have an idea that could help me resolve this problem?

    Read the article

  • stunnel crashing

    - by Jay
    I'm trying to use stunnel to secure a legacy application's communications, but I can't seem to get it set up and working. Can anyone provide any hints as to where I'm going wrong? Here's what I'm trying to accomplish: a Windows service on a client machine connects to a server on port 7000 using TCP, and I'd like to encrypt the communication between client and server.

    Here's what I've tried:

    - Created a new server that accepts SSL connections on port 7443.
    - Got a certificate for the server and installed it. That seems to work with my test setup.
    - Installed stunnel on my Windows machine (version 4.33 from the distribution archive file).
    - Installed libssl32.dll and libeay32.dll in the same directory as stunnel.exe (from the openssl-0.9.8h-1 binary distribution).
    - Installed it as a service using "stunnel -install".
    - Configured stunnel as follows:

          debug=7
          output=C:\p4\internal\Utility\Proxy\proxy.log
          service=Proxy
          taskbar=no

          [exchange]
          accept=7000
          client=yes
          connect=proxy.blah.com:7443

    I changed my hosts file to trick the old application into connecting through stunnel:

        server.blah.com 127.0.0.1                      # when client looks up server it goes to stunnel
        proxy.blah.com  IP-address-of-server.blah.com  # stunnel connects to new server

    "server.blah.com" now resolves to the machine it's running on (i.e., stunnel), and "proxy.blah.com" goes to the real server, which stunnel should connect to. I start the stunnel service and try to connect. It looks like it's working, but then the stunnel service just shuts down with no message:

        2010.04.19 13:16:21 LOG5[4924:3716]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
        2010.04.19 13:16:21 LOG5[4924:3716]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
        2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange accepted connection from 127.0.0.1:4134
        2010.04.19 13:16:49 LOG6[4924:3748]: connect_blocking: connecting x.80.60.32:7443
        2010.04.19 13:16:49 LOG5[4924:3748]: connect_blocking: connected x.80.60.32:7443
        2010.04.19 13:16:49 LOG5[4924:3748]: Service exchange connected remote server from x.253.120.19:4135
        2010.04.19 13:20:24 LOG5[3668:3856]: Reading configuration from file stunnel.conf
        2010.04.19 13:20:24 LOG7[3668:3856]: Snagged 64 random bytes from C:/.rnd
        2010.04.19 13:20:24 LOG7[3668:3856]: Wrote 1024 new random bytes to C:/.rnd
        2010.04.19 13:20:24 LOG7[3668:3856]: RAND_status claims sufficient entropy for the PRNG
        2010.04.19 13:20:24 LOG7[3668:3856]: PRNG seeded successfully
        2010.04.19 13:20:24 LOG7[3668:3856]: SSL context initialized for service exchange
        2010.04.19 13:20:24 LOG5[3668:3856]: Configuration successful
        2010.04.19 13:20:24 LOG5[3668:3856]: No limit detected for the number of clients
        2010.04.19 13:20:24 LOG7[3668:3856]: FD=312 in non-blocking mode
        2010.04.19 13:20:24 LOG7[3668:3856]: Option SO_REUSEADDR set on accept socket
        2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange bound to 0.0.0.0:7000
        2010.04.19 13:20:24 LOG7[3668:3856]: Service exchange opened FD=312
        2010.04.19 13:20:24 LOG5[3668:3856]: stunnel 4.33 on x86-pc-mingw32-gnu with OpenSSL 0.9.8h 28 May 2008
        2010.04.19 13:20:24 LOG5[3668:3856]: Threading:WIN32 SSL:ENGINE Sockets:SELECT,IPv6
        2010.04.19 13:21:02 LOG7[3668:4556]: Service exchange accepted FD=372 from 127.0.0.1:4156
        2010.04.19 13:21:02 LOG7[3668:4556]: Creating a new thread
        2010.04.19 13:21:02 LOG7[3668:4556]: New thread created
        2010.04.19 13:21:02 LOG7[3668:3756]: Service exchange started
        2010.04.19 13:21:02 LOG7[3668:3756]: FD=372 in non-blocking mode
        2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange accepted connection from 127.0.0.1:4156
        2010.04.19 13:21:02 LOG7[3668:3756]: FD=396 in non-blocking mode
        2010.04.19 13:21:02 LOG6[3668:3756]: connect_blocking: connecting x.80.60.32:7443
        2010.04.19 13:21:02 LOG7[3668:3756]: connect_blocking: s_poll_wait x.80.60.32:7443: waiting 10 seconds
        2010.04.19 13:21:02 LOG5[3668:3756]: connect_blocking: connected x.80.60.32:7443
        2010.04.19 13:21:02 LOG5[3668:3756]: Service exchange connected remote server from x.253.120.19:4157
        2010.04.19 13:21:02 LOG7[3668:3756]: Remote FD=396 initialized
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): before/connect initialization
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client hello A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server hello A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server certificate A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read server done A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write client key exchange A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write change cipher spec A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 write finished A
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 flush data
        2010.04.19 13:21:02 LOG7[3668:3756]: SSL state (connect): SSLv3 read finished A

    The client thinks the connection is closed:

        No connection could be made because the target machine actively refused it 127.0.0.1:7000
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.Sockets.Socket.Connect(EndPoint remoteEP)
           at Service.ConnUtility.Connect()

    Any suggestions?

    Read the article

  • Ubuntu: Move fsbackup backups to Amazon S3

    - by Alexander Gladysh
    I have a legacy server (Ubuntu 9.10 Karmic, x86), where the previous admin set up backups with fsbackup. This server lives in a VPS (under some kind of Xen), and it is low on HDD space (16 GB total). It has now reached the point where the fsbackup backups take more space than the rest of the data in the system. The filesystem is 100% full, and I have already cleaned up everything I could, aside from the actual backups. I have no experience managing fsbackup, and I do not want to break or lose the backups. Googling fsbackup gives surprisingly low-quality results... Here is what my backups look like:

        $ sudo ls -lh /var/archives
        total 8.1G
        -rw-rw---- 1 root root  318 2011-01-06 06:26 myserver-20110106.md5
        -rw-rw---- 1 root root  258 2011-01-07 06:26 myserver-20110107.md5
        -rw-rw---- 1 root root  318 2011-01-08 06:26 myserver-20110108.md5
        -rw-rw---- 1 root root  318 2011-01-09 06:26 myserver-20110109.md5
        -rw-rw---- 1 root root  346 2011-01-10 06:43 myserver-20110110.md5
        -rw-rw---- 1 root root  14M 2011-01-06 06:26 myserver-all-mysql-databases.20110106.sql.bz2
        -rw-rw---- 1 root root  14M 2011-01-07 06:26 myserver-all-mysql-databases.20110107.sql.bz2
        -rw-rw---- 1 root root  14M 2011-01-08 06:26 myserver-all-mysql-databases.20110108.sql.bz2
        -rw-rw---- 1 root root  14M 2011-01-09 06:26 myserver-all-mysql-databases.20110109.sql.bz2
        -rw-rw---- 1 root root  862 2011-01-10 06:43 myserver-all-mysql-databases.20110110.sql.bz2
        -rw-rw---- 1 root root 827K 2011-01-03 06:25 myserver-etc.20110103.master.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-06 06:25 myserver-etc.20110106.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-07 06:25 myserver-etc.20110107.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-08 06:25 myserver-etc.20110108.tar.gz
        -rw-rw---- 1 root root  16K 2011-01-09 06:25 myserver-etc.20110109.tar.gz
        -rw-rw---- 1 root root 827K 2011-01-10 06:25 myserver-etc.20110110.master.tar.gz
        -rw------- 1 root root  36K 2011-01-10 06:25 myserver-etc.incremental.bin
        -rw-rw---- 1 root root  29M 2011-01-03 06:25 myserver-home.20110103.master.tar.gz
        -rw-rw---- 1 root root  11K 2011-01-06 06:25 myserver-home.20110106.tar.gz
        -rw-rw---- 1 root root  14K 2011-01-07 06:25 myserver-home.20110107.tar.gz
        -rw-rw---- 1 root root  11K 2011-01-08 06:25 myserver-home.20110108.tar.gz
        -rw-rw---- 1 root root  11K 2011-01-09 06:25 myserver-home.20110109.tar.gz
        -rw-rw---- 1 root root 2.0M 2011-01-10 06:25 myserver-home.20110110.master.tar.gz
        -rw------- 1 root root  27K 2011-01-10 06:25 myserver-home.incremental.bin
        -rw-rw---- 1 root root 1.5G 2011-01-03 06:29 myserver-opt.20110103.master.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-06 06:25 myserver-opt.20110106.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-07 06:25 myserver-opt.20110107.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-08 06:25 myserver-opt.20110108.tar.gz
        -rw-rw---- 1 root root 1.5M 2011-01-09 06:25 myserver-opt.20110109.tar.gz
        -rw-rw---- 1 root root 1.5G 2011-01-10 06:30 myserver-opt.20110110.master.tar.gz
        -rw------- 1 root root 201K 2011-01-10 06:30 myserver-opt.incremental.bin
        -rw-rw---- 1 root root 2.3G 2011-01-03 06:41 myserver-srv.20110103.master.tar.gz
        -rw-rw---- 1 root root  44M 2011-01-06 06:26 myserver-srv.20110106.tar.gz
        -rw-rw---- 1 root root  27M 2011-01-07 06:25 myserver-srv.20110107.tar.gz
        -rw-rw---- 1 root root  39M 2011-01-08 06:26 myserver-srv.20110108.tar.gz
        -rw-rw---- 1 root root 2.0M 2011-01-09 06:25 myserver-srv.20110109.tar.gz
        -rw-rw---- 1 root root 2.7G 2011-01-10 06:42 myserver-srv.20110110.master.tar.gz
        -rw------- 1 root root 3.4M 2011-01-10 06:42 myserver-srv.incremental.bin

    I'm thinking about moving the backups to Amazon S3, but before that I have to free some space so the server can work. Perhaps I can mount /var/archives on an Amazon S3 bucket somehow... Any advice?
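    On the "mount it somehow" idea, one hedged option (the bucket name and mount point below are assumptions) is the s3fs FUSE filesystem, which exposes an S3 bucket as a local directory:

        # ~/.passwd-s3fs holds ACCESS_KEY_ID:SECRET_ACCESS_KEY, chmod 600
        s3fs my-backup-bucket /var/archives -o allow_other

    A FUSE mount over S3 is much slower than local disk, so it is probably better suited as a destination to move the old archives to than as fsbackup's live working directory.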

    Read the article

  • RAID and partitions: guidance needed

    - by beauregarde
    Alright, here is my hardware:

    - Biostar TA790GX3A2+ motherboard
    - 2x Seagate 750 GB hard drives (with 2 different speeds)
    - X4 9750
    - GeForce 9800GT
    - 2 GB RAM

    I want to configure my computer with partitions in various RAID arrays. The partitions I know I want (disk letters are mostly for reference here):

    - C: XP boot
    - D: XP swap
    - E: XP run
    - F: Games
    - G: Data

    The partitions I think I want (repeat caveat):

    - H: small FAT for Windows legacy and DOS
    - I: Linux
    - J: Linux swap
    - K-?M?: other Linux /whatever partitions
    - N & O: attic for D1 and D2

    What I'd like to do is have C: written on disk 1 (D1); D: on D2; E: and F: striped on D1 & D2; G: mirrored on D1 & D2; I: on D2 (so I can just switch disk boot priority to open in Ubuntu); J: on D1; and H: somewhere low on D1. I am inexperienced with VMs, so I am unsure whether those run out of XP or whether I need to reserve a primary partition for them. However, I think they would be preferable to scheduling a partition for the same purpose of testing new OSes. I'm also not married to XP, but -64 IS pretty important to me.

    Question time:

    1) Ignoring the irrationality of it all, is such a configuration possible? If not, can some pseudo-approximation be achieved?
    2) My RAID is software, isn't it?
    3) How much should I short a 750GB HD? And should I use that space for my attics, for my attics and something else, or for something else entirely (.isos, perhaps)?
    4) If XP is striped on D1 & D2, will that interfere egregiously with my swap writes on D2? If so, would striping both XP and swap relieve (or at least mitigate) that issue? Should XP and swap just be written normally on 2 different HDs?
    5) Should I keep downloads and drivers on E: (XP run), F: (Games), or elsewhere?
    6) Is 4GB enough for C:?
    7) Is 30GB enough (or too much) for E:?
    8) How much should I reserve for the Linux and sub-Linux partitions? Also, where on the platter do you think I should put them?
    9) Am I a fool to use FAT16 instead of FAT32 for H: because I'd rather run 95 than 98SE? If not, do you think 2GB or 4GB?
    10) I can't predict what my max commit charge will be, so any recommendations for pagefile size? 5GB? 12GB?
    11) VMs: where do I run them? Do they exacerbate anything? Would it be better to just emulate Linux, 95, and DOS?
    EC) What haven't I considered that I really should have?

    Notes: the computer is mostly for playing games and watching media, though I wouldn't rule out the occasional particularly blah-intensive use.

    Read the article

  • How to troubleshoot an OpenVPN Appliance Server that is not able to connect

    - by Peter
    1) I have a Windows Server 2008 Standard SP2 box.
    2) I am running Hyper-V and have the OpenVPN Appliance Server virtual machine running.
    3) I have configured it as the instructions said; the only issue was that the legacy network adapter does not have a setting the instructions mention, "Enable spoofing of MAC Addresses". My understanding is that before R2, this was on by default.
    4) The server is running and the web interfaces look good.
    5) I am trying to connect from a Vista 64 box and cannot.

    5a) If I set the server to UDP, I am stuck at Authorizing and the client log looks like:

      10/11/09 15:00:42: INFO: OvpnConfig: connect...
      10/11/09 15:00:42: INFO: Gui listen socket at 34567
      10/11/09 15:00:42: INFO: sending start command to instantiator...
      10/11/09 15:00:42: INFO: start 34567 ?C:\Users\Peter\AppData\Roaming\OpenVPNTech\config?02369512D0C82A04B88093022DA0226202218022A902264022AE022B?
      10/11/09 15:00:42: INFO: Got line from MI->>INFO:OpenVPN Management Interface Version 1 -- type 'help' for more info
      10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
      10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
      10/11/09 15:00:43: INFO: Processing PASSWORD.
      10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
      10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
      10/11/09 15:00:43: INFO: Got auth request from active_config from 0
      10/11/09 15:00:47: INFO: Sending Credentials....
      10/11/09 15:00:47: INFO: Sending 25 bytes for username.
      10/11/09 15:00:47: INFO: Sent 25 bytes for username.
      10/11/09 15:00:47: INFO: Sending 30 bytes for password.
      10/11/09 15:00:47: INFO: Sent 30 bytes for password.
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
      10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
      10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378

    5b) If I set the server to TCP and try to connect, I get a loop of authorizing and reconnecting. The log looks like:

      10/11/09 15:00:42: INFO: Got line from MI->>HOLD:Waiting for hold release
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: real-time state notification set to ON
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: bytecount interval changed
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold flag set to OFF
      10/11/09 15:00:43: INFO: Got line from MI->SUCCESS: hold release succeeded
      10/11/09 15:00:43: INFO: Got line from MI->>PASSWORD:Need 'Auth' username/password
      10/11/09 15:00:43: INFO: Processing PASSWORD.
      10/11/09 15:00:43: INFO: OvpnClient: setting need auth to true.
      10/11/09 15:00:43: INFO: OvpnConfig: Setting need auth to true.
      10/11/09 15:00:43: INFO: Got auth request from active_config from 0
      10/11/09 15:00:47: INFO: Sending Credentials....
      10/11/09 15:00:47: INFO: Sending 25 bytes for username.
      10/11/09 15:00:47: INFO: Sent 25 bytes for username.
      10/11/09 15:00:47: INFO: Sending 30 bytes for password.
      10/11/09 15:00:47: INFO: Sent 30 bytes for password.
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' username entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->SUCCESS: 'Auth' password entered, but not yet verified
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287647,WAIT,,,
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:0,42
      10/11/09 15:00:48: INFO: Got line from MI->>BYTECOUNT:54,42
      10/11/09 15:00:48: INFO: Got line from MI->>STATE:1255287648,AUTH,,,
      10/11/09 15:00:50: INFO: Got line from MI->>BYTECOUNT:2560,2868
      10/11/09 15:00:52: INFO: Got line from MI->>BYTECOUNT:2560,3378
      10/11/09 15:00:54: INFO: Got line from MI->>BYTECOUNT:2560,3888
      ...

    Is there any way to turn on more robust logging on the server to understand what is happening? Any ideas on how to hunt this down?
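
    The closing question about server-side logging can be answered in generic OpenVPN terms. Below is a minimal sketch, assuming you can get a shell on the appliance VM and that it keeps a standard OpenVPN server config; the /etc/openvpn and /var/log paths are assumptions, since the appliance may store things elsewhere:

      # /etc/openvpn/server.conf (assumed path) -- raise logging verbosity
      verb 4                                  # 0-11; 4 logs TLS handshake and auth detail
      log-append /var/log/openvpn.log         # write to a persistent file instead of syslog
      status /var/log/openvpn-status.log 10   # dump connected-client state every 10 seconds

    After restarting the OpenVPN service, a verb 4 log should show whether the TLS handshake ever completes on the server side. A client stuck in the AUTH state, as in the logs above, often means the server's replies are not getting back to the client, which would point at the Hyper-V legacy adapter/MAC-spoofing issue rather than at the credentials themselves.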

    Read the article

  • COPSSH RSA only authentication connection problem

    - by Siriss
    Hello all - I am trying to set up an SSH/SFTP server that accepts RSA key authentication only. The SSH will be used primarily for RDC. Everything works just fine if I use password authentication. I am using PuTTY Key Generator to create the keys, and I have pasted the public key into the authorized_keys file and restarted the OpenSSH server. I am using FileZilla to test the SFTP connection, as that is the most important. For my tests I have created the keys without passphrase protection. It will not work with a standard SSH connection either; it says "Server refused our key". I have recreated the key twice, double-checking against a guide I found on Google, and I am pretty sure I did it correctly. I load the key file into FileZilla under Settings/SFTP and try to connect, and I get the following error:

      Disconnected: No supported authentication methods available.

    I have been playing with the different settings all night and I cannot figure it out. Here is my sshd_config file:

      # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $

      # This is the sshd server system-wide configuration file. See
      # sshd_config(5) for more information.

      # This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

      # The strategy used for options in the default sshd_config shipped with
      # OpenSSH is to specify options with their default value where
      # possible, but leave them commented. Uncommented options change a
      # default value.

      #Port 22
      #AddressFamily any
      #ListenAddress 0.0.0.0
      #ListenAddress ::

      # Disable legacy (protocol version 1) support in the server for new
      # installations. In future the default will change to require explicit
      # activation of protocol 1
      Protocol 2

      # HostKey for protocol version 1
      #HostKey /etc/ssh/ssh_host_key
      # HostKeys for protocol version 2
      #HostKey /etc/ssh/ssh_host_rsa_key
      #HostKey /etc/ssh/ssh_host_dsa_key

      # Lifetime and size of ephemeral version 1 server key
      #KeyRegenerationInterval 1h
      #ServerKeyBits 1024

      # Logging
      # obsoletes QuietMode and FascistLogging
      #SyslogFacility AUTH
      #LogLevel INFO

      # Authentication:

      #LoginGraceTime 2m
      PermitRootLogin no
      #StrictModes yes
      #MaxAuthTries 6
      #MaxSessions 10

      RSAAuthentication yes
      PubkeyAuthentication yes
      AuthorizedKeysFile .ssh/authorized_keys

      # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      #HostbasedAuthentication no
      # Change to yes if you don't trust ~/.ssh/known_hosts for
      # RhostsRSAAuthentication and HostbasedAuthentication
      #IgnoreUserKnownHosts no
      # Don't read the user's ~/.rhosts and ~/.shosts files
      #IgnoreRhosts yes

      # To disable tunneled clear text passwords, change to no here!
      PasswordAuthentication no
      PermitEmptyPasswords no

      # Change to no to disable s/key passwords
      #ChallengeResponseAuthentication yes

      # Kerberos options
      #KerberosAuthentication no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes
      #KerberosGetAFSToken no

      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes

      # Set this to 'yes' to enable PAM authentication, account processing,
      # and session processing. If this is enabled, PAM authentication will
      # be allowed through the ChallengeResponseAuthentication and
      # PasswordAuthentication. Depending on your PAM configuration,
      # PAM authentication via ChallengeResponseAuthentication may bypass
      # the setting of "PermitRootLogin without-password".
      # If you just want the PAM account and session checks to run without
      # PAM authentication, then enable this but set PasswordAuthentication
      # and ChallengeResponseAuthentication to 'no'.
      UsePAM no

      #AllowAgentForwarding yes
      #AllowTcpForwarding yes
      #GatewayPorts no
      #X11Forwarding no
      #X11DisplayOffset 10
      #X11UseLocalhost yes
      #PrintMotd yes
      #PrintLastLog yes
      #TCPKeepAlive yes
      UseLogin no
      #UsePrivilegeSeparation yes
      #PermitUserEnvironment no
      #Compression delayed
      #ClientAliveInterval 0
      #ClientAliveCountMax 3
      #UseDNS yes
      #PidFile /var/run/sshd.pid
      #MaxStartups 10
      #PermitTunnel no
      #ChrootDirectory none

      # no default banner path
      #Banner none

      # override default of no subsystems
      Subsystem sftp /bin/sftp-server

      # Example of overriding settings on a per-user basis
      #Match User anoncvs
      #   X11Forwarding no
      #   AllowTcpForwarding no
      #   ForceCommand cvs server

    Thank you so much for your help!
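
    For a future reader: "Server refused our key" with PuTTY-generated keys is very often a key-format problem rather than an sshd_config problem. The following is a sketch of the usual fix, assuming the public key was exported with PuTTYgen's "Save public key" button, which writes the RFC4716/SSH2 format rather than the one-line OpenSSH format sshd expects; the file name below is hypothetical:

      # Convert an SSH2-format public key to the OpenSSH one-line format
      ssh-keygen -i -f puttygen_public.pub >> ~/.ssh/authorized_keys

      # A valid authorized_keys entry is a single unwrapped line, e.g.:
      # ssh-rsa AAAAB3NzaC1yc2E...base64... optional-comment

    Alternatively, copy the text straight from the "Public key for pasting into OpenSSH authorized_keys file" box at the top of the PuTTYgen window. With copSSH it is also worth checking that authorized_keys sits under the user's copSSH home directory (typically beneath the installation's home\<user>\.ssh folder, though the exact path depends on the install) and that its permissions satisfy StrictModes.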

    Read the article

  • Supermicro IPMI on MBD-X8DAH+-F-O motherboard. Keyboard and mouse do not work after booting Windows Server 2008 R2

    - by LDelgado
    Hello everyone, I built a server with the mentioned motherboard and installed Windows Server 2008 R2 Enterprise on it. IPMI is integrated on the motherboard with its own dedicated NIC, and I have that NIC configured with its own IP address. I can remote into the server using IPMI, and I can remotely control the server settings before booting the OS (BIOS, RAID configuration, etc.). When the OS boots, I lose the mouse and keyboard; I cannot use the keyboard or mouse when installing the OS either. So the keyboard and mouse only work when no OS is loaded - that is my problem.

    I have been doing some research and trying a few things, but I have not been successful in fixing this issue. I may be wrong, but based on what I've found online, it seems the problem could be caused by the way the OS handles USB. The server is headless: there is no keyboard, mouse, or monitor plugged into it. When I boot the OS and remote into it, I cannot see a mouse or keyboard listed in Device Manager. Based on what I've read, the OS should detect a mouse and a keyboard when connecting remotely via IPMI.

    These are the solutions I've tried so far; nothing has worked:

    - Updated the firmware of the IPMI component to the latest version, 1.33.
    - Made sure that the mouse mode was set to Absolute (Windows OS).
    - Loaded the factory defaults several times.
    - Enabled Port64h/60h Emulation under the USB settings in the BIOS.
    - Disabled USB legacy support in the BIOS.
    - Made sure the firewall wasn't blocking IPMI (disabled the firewall).

    And that's about it. I've found threads in some forums from people having the same issue as me, but they were not running the same OS; they were running either Linux or FreeBSD. Most of them fixed the problem by selecting the right mouse mode (Linux in their case). One other solved it by disabling USB Mass Storage mode, stating: "When I set it to disable USB Mass Storage when no image is loaded, the ukbd came alive, and I'm typing this on the IPMI Console." Source: http://freebsd.1045724.n5.nabble.com/IPMI-Console-No-luck-once-OS-is-booted-td3967868.html

    I suspect the solution described in the previous paragraph is somehow related to my problem. I've found several threads on the internet describing the same symptoms, but none of them involved Windows Server 2008 R2. Again, I may be wrong, but it seems like that could be the issue; I just don't know how to go about applying a similar fix in Windows Server 2008 R2. In any case, I could use your expertise. Maybe I am missing something, or maybe I'm on the right track. Your help is much appreciated. Thank you in advance,
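
    A quick way to confirm whether Windows is enumerating the BMC's virtual keyboard and mouse at all is to query WMI from the remote session; this is only a diagnostic sketch, using the stock wmic tool that ships with Server 2008 R2:

      wmic path Win32_Keyboard get Description,Status
      wmic path Win32_PointingDevice get Description,Status

    If both queries come back empty, the virtual USB HID devices are never reaching the OS, which points back at the BIOS/BMC USB settings (the Port64h/60h emulation and mass-storage options mentioned above) rather than at a driver problem inside Windows.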

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67  | Next Page >