Search Results

Search found 1682 results on 68 pages for 'tron legacy'.

Page 57/68 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.

    Read the article

  • Is this a safe/valid hash method implementation?

    - by Sean
    I have a set of classes to represent some objects loaded from a database. There are a couple of variations of these objects, so I have a common base class and two subclasses to represent the differences. One of the key fields they have in common is an id field. Unfortunately, the id of an object is not unique across all variations, only within a single variation. What I mean is, a single object of type A could have an id between, say, 0 and 1,000,000. An object of type B could have an id between 25,000 and 1,025,000. This means there's some overlap of id numbers. The objects are just variations of the same kind of thing, though, so I want to think of them as such in my code. (They were assigned ids from different sets for legacy reasons.) So I have classes like this: @class BaseClass, @class TypeAClass : BaseClass, @class TypeBClass : BaseClass. BaseClass has a method (NSNumber *)objectId. However, instances of TypeA and TypeB could have overlapping ids as discussed above, so when it comes to equality and putting these into sets, I cannot just use the id alone to check it. The unique key of these instances is, essentially, (class + objectId). So I figured that I could do this by making the following hash function on the BaseClass:

        - (NSUInteger)hash {
            return (NSUInteger)[self class] ^ [self.objectId hash];
        }

    I also implemented isEqual like so:

        - (BOOL)isEqual:(id)object {
            return (self == object) ||
                   ([object class] == [self class] && [self.objectId isEqual:[object objectId]]);
        }

    This seems to be working, but I guess I'm just asking here to make sure I'm not overlooking something - especially with the generation of the hash by using the class pointer in that way. Is this safe, or is there a better way to do this?

    Read the article

  • Analyze log files from many languages using a single tool, and recommendations for logging frameworks

    - by Binary255
    We have a system built on lots of languages. The ones we are interested in logging, in order of priority, are: C/C++, PHP, C#, Bash, Java. Wish list: If it is possible, we would like logging to be achieved from the above languages in such a way that we can use a single log-viewing tool for all of them. Ideally they would be in the same format, but failing that, in as few formats as possible and readable by as many log-file viewers as possible. If it is possible, logging to a single log file or a set of log files would be nice, with a possibility to filter based on the source language that is being logged. We would like to copy the log files (or should we log to a database and copy that instead?) from multiple servers to a single location, so that we can analyze the log files from many servers at the same time (to see if any of our servers execute a certain piece of legacy code, for example). Being able to change the logging level at runtime would be nice. Thank you for reading! It's quite a complex problem; I hope someone has wrestled with it before and has some valuable information!

    Read the article

  • Section or group name 'cachingConfiguration' is already defined - but where?

    - by Richard Ev
    On Windows XP I am working on a .NET 3.5 web app that's a combination of WebForms and MVC2 (the WebForms parts are legacy, and being migrated to MVC). When I run this from VS2008 using the ASP.NET web server, everything works as expected. However, when I host the app in IIS and try to use it, I see the following error:

        Section or group name 'cachingConfiguration' is already defined.
        Updates to this may only occur at the configuration level where it is defined.

        Source Error:
        Line 24: </sectionGroup>
        Line 25: <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
        Line 26: <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching, Version=4.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        Line 27: </configSections>
        Line 28:

    Sure enough, if I remove the offending line (line 26 in the error message) from my web.config, then the app runs correctly. However, I really need to find out where the duplicate definition is. It's nowhere in my solution. Where else could it be?

    Read the article

  • SSIS - Skip Missing Files

    - by Greg
    I have an SSIS 2008 package that calls about 10 other SSIS packages (legacy issues, don't ask). Each of those child packages loads a specific file into a table, but sometimes one or more of these input files will be missing. How can I let a child package fail (because a file is missing) but let the rest of the parent package keep on running? I've tried increasing the maximum error count on the parent package, on the tasks in the parent package that call each child, and in the child package itself. None of that seemed to make any difference. I still get this error when I run it with a file missing:

        SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded,
        but the number of errors raised (2) reached the maximum allowed (1); resulting in
        failure. This occurs when the number of errors reaches the number specified in
        MaximumErrorCount. Change the MaximumErrorCount or fix the errors.

    Edit: FailPackageOnFailure and FailParentOnFailure are already set to false everywhere.
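    A common workaround (not from this thread itself) is to test for the file before each Execute Package task runs and route around the child via a precedence constraint when it is absent. A minimal Script Task sketch, where User::FilePath and User::FileExists are hypothetical package variables:

        // Body of the Script Task's ScriptMain.Main(). Sets a boolean
        // variable that a precedence constraint can test, so a missing
        // file skips the child package instead of failing the parent.
        // FileExists must be listed in the task's ReadWriteVariables.
        using System.IO;
        using Microsoft.SqlServer.Dts.Runtime;

        public void Main()
        {
            string path = Dts.Variables["User::FilePath"].Value.ToString();
            Dts.Variables["User::FileExists"].Value = File.Exists(path);
            Dts.TaskResult = (int)DTSExecResult.Success;
        }

    The precedence constraint from the Script Task to the Execute Package task would then use an expression such as @[User::FileExists] == true.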

    Read the article

  • Is Stream.Write thread-safe?

    - by Mike Spross
    I'm working on a client/server library for a legacy RPC implementation and was running into issues where the client would sometimes hang when waiting to receive a response message to an RPC request message. It turns out the real problem was in my message framing code (I wasn't handling message boundaries correctly when reading data off the underlying NetworkStream), but it also made me suspicious of the code I was using to send data across the network, specifically in the case where the RPC server sends a large amount of data to a client as the result of a client RPC request. My send code uses a BinaryWriter to write a complete "message" to the underlying NetworkStream. The RPC protocol also implements a heartbeat algorithm, where the RPC server sends out PING messages every 15 seconds. The pings are sent out by a separate thread, so, at least in theory, a ping can be sent while the server is in the middle of streaming a large response back to a client. Suppose I have a Send method as follows, where stream is a NetworkStream:

        public void Send(Message message)
        {
            // Write the message to a temporary stream so we can send it all at once.
            MemoryStream tempStream = new MemoryStream();
            message.WriteToStream(tempStream);

            // Write the serialized message to the stream. The BinaryWriter is a
            // little redundant in this simplified example, but it's here because
            // the production code uses it.
            byte[] data = tempStream.ToArray();
            BinaryWriter bw = new BinaryWriter(stream);
            bw.Write(data, 0, data.Length);
            bw.Flush();
        }

    So the question I have is: is the call to bw.Write (and by implication the call to the underlying Stream's Write method) atomic? That is, if a lengthy Write is still in progress on the sending thread, and the heartbeat thread kicks in and sends a PING message, will that thread block until the original Write call finishes, or do I have to add explicit synchronization to the Send method to prevent the two Send calls from clobbering the stream?
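    For what it's worth, the Stream documentation carries the standard caveat that instance members are not guaranteed to be thread-safe, so the conservative answer is to synchronize. A minimal sketch of the same Send method with a lock added (the _sendLock field is an addition, not part of the original code):

        private readonly object _sendLock = new object();

        public void Send(Message message)
        {
            // Serialize the whole message up front; hold the lock only for
            // the actual network write, so the RPC response and the heartbeat
            // PING cannot interleave their bytes on the wire.
            byte[] data;
            using (MemoryStream tempStream = new MemoryStream())
            {
                message.WriteToStream(tempStream);
                data = tempStream.ToArray();
            }
            lock (_sendLock)
            {
                stream.Write(data, 0, data.Length);
                stream.Flush();
            }
        }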

    Read the article

  • Need help with strange Class#getResource() issue

    - by Andreas_D
    I have some legacy code that reads a configuration file from an existing jar, like:

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using the url variable
        InputStream in = url.openStream();

    Obviously it worked before, but when I execute this code, the URL is valid yet I get an IOException on the third line saying it can't find the file. The url is something like "file:jar:c:/path/to/jar/somejar.jar!configuration.properties", so it doesn't look like a classpath issue - Java knows pretty well where the file can be found. The above code is part of an ant task and it fails while the task is executed. Strangely enough, I copied the code and the jar file into a separate class and it works as expected; the properties file is readable. At some point I changed the code of the ant task to

        URL url = SomeClass.class.getResource("/configuration.properties");
        // some more code here using the url variable
        InputStream in = SomeClass.class.getResourceAsStream("/configuration.properties");

    and now it works - just until it crashes in another class where a similar access pattern is implemented. Why could it have worked before, and why does it fail now? The only difference I see at the moment is that the old build was done with Java 1.4 while I'm trying it with Java 6 now. Workaround: Today I installed Java 1.4.2_19 on the build server and made ant use it. To my totally frustrated surprise: the problem is gone. It looks to me as if Java 1.4.2 can handle URLs of this type while Java 6 can't (at least in my context/environment). I'm still hoping for an explanation, although I'm facing the work of rewriting parts of the code to use Class#getResourceAsStream, which has behaved much more stably...

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network - doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:
    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
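    A skeleton for that demo app might look like the sketch below; the connection string, table, and UNC path are placeholders, and the comparison simply times one indexed lookup against the linear scan a flat file forces:

        // Times one keyed lookup against an indexed SQL table, then the
        // equivalent linear scan of a flat file on a network share.
        using System;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;

        class FlatFileVsSql
        {
            static void Main()
            {
                Stopwatch sw = Stopwatch.StartNew();
                using (SqlConnection conn = new SqlConnection(
                    "Server=...;Database=...;Integrated Security=true"))
                {
                    conn.Open();
                    SqlCommand cmd = new SqlCommand(
                        "SELECT Amount FROM Orders WHERE OrderId = @id", conn);
                    cmd.Parameters.AddWithValue("@id", 4711);
                    cmd.ExecuteScalar();   // index seek
                }
                Console.WriteLine("SQL lookup: {0} ms", sw.ElapsedMilliseconds);

                sw.Reset();
                sw.Start();
                using (StreamReader reader = new StreamReader(@"\\server\share\orders.txt"))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (line.StartsWith("4711,")) break;   // no index: full scan every time
                    }
                }
                Console.WriteLine("Flat file scan: {0} ms", sw.ElapsedMilliseconds);
            }
        }

    Running the same pair under concurrent writers, and with the share briefly disconnected, would make the concurrency and reliability bullets concrete.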

    Read the article

  • Using SetParent to steal the main window of another process but keeping the message loops separate

    - by insta
    Background: My coworker and I are maintaining a million-line legacy application we inherited. Its frontend is written in VB6, and as we're devoting almost all of our resources to converting it to C#, we are looking for quick & dirty solutions to our specific problem. The application behaves in a plugin-ish manner: there are up to 20-ish separate ActiveX controls that can be loaded at once in a grid-style layout. The problem is that the ActiveX controls do all of their processing on their own UI thread, and as a lot of it is blocking waits on network access, the UI gets very soupy. When our hosting C# app loads these controls, it becomes unresponsive because of how many controls are chewing up UI resources doing nothing. To top it off, the controls are fragile and will crash at the slightest provocation. When they are hosted in the main C# app, this creates serious instability. The best my coworker and I have come up with so far is starting a process per ActiveX control. This process, which we call the proxy, is another WinForms app. It uses named pipes to communicate with the main process. The proxy process creates a window, loads an ActiveX control of our choice (via some reflection & AxHost magic), and tells the main process what its window handle is via the named pipe. The main process uses a combination of SetParent and SetWindowPos to move the proxy application into itself to emulate a plugin. Size updates are sent via the named pipe. This works well enough until the ActiveX application does some sort of lengthy processing and we click around on the main window while it's working. For a while the main window is responsive, but eventually it becomes unresponsive as the child window waits for its UI thread. How can we keep the child windows on their own complete thread while still getting the benefits of SetParent? (Please let me know if anything isn't clear!)
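    For reference, the re-parenting step described above boils down to two user32 calls. A minimal P/Invoke sketch (how the child HWND arrives over the named pipe is elided; the constants are the standard Win32 values):

        // Docks a top-level window from another process into a host window.
        using System;
        using System.Runtime.InteropServices;

        static class WindowDocking
        {
            [DllImport("user32.dll", SetLastError = true)]
            static extern IntPtr SetParent(IntPtr hWndChild, IntPtr hWndNewParent);

            [DllImport("user32.dll", SetLastError = true)]
            static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
                int x, int y, int cx, int cy, uint uFlags);

            const uint SWP_NOZORDER = 0x0004;
            const uint SWP_SHOWWINDOW = 0x0040;

            public static void Dock(IntPtr childHwnd, IntPtr hostHwnd, int width, int height)
            {
                SetParent(childHwnd, hostHwnd);                   // re-parent across processes
                SetWindowPos(childHwnd, IntPtr.Zero, 0, 0, width, height,
                             SWP_NOZORDER | SWP_SHOWWINDOW);      // fit into the host's grid cell
            }
        }

    Note this is just the mechanism the question already uses; cross-process SetParent shares input state between the two UI threads, which is exactly where the hang comes from.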

    Read the article

  • Signable, streamable, "readable" archive format?

    - by alexvoda
    Is there any archive format that offers the following:

    1. Be digitally signable with a certificate from a trusted source like VeriSign - to prevent changes to the file (I am not referring to read-only; if the file was changed, it should no longer count as signed, telling the user this is not the original file).
    2. Be streamable - able to be opened even if not all of the content has been transferred (and not strictly linearly).
    3. Be "readable" - able to read the data without extracting to a temporary folder (AFAIK if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML; this is not what I want).
    4. Be portable - support on at least Windows, Linux and Mac OS X is a must, or at least future support.
    5. Be free of patents, and be open source - preferably under a license that allows commercial use (as far as I know the GPL is a share-alike license, so it doesn't allow commercial use; BSD, on the other hand, allows it).

    Note: though it may come in handy eventually, I cannot think right now of a scenario that would require both point 1 and point 2 simultaneously. Or let's relax it to: be able to check the signature once the whole file has been downloaded. I am not interested in: being able to be compressed; being supported on legacy systems. Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)? If there is, are there programming libraries available for the above-mentioned OSs? If not, would it be hard to create such a thing? EDIT: clarified point 5. EDIT 2: added a note to clarify points 1 and 2. P.S.: This is my first question on StackOverflow.

    Read the article

  • Advice on using .Net WorkFlow State Machine. What would you do?

    - by jlafay
    So I've been tasked at work with writing Windows services to replace some old legacy VB6 WinForms apps currently running as services, consistently repeating tasks day to day. To give some general background: they have their own state machines built in to handle decision making, and they don't utilize threading. A lot of the senior developers here thought it would be worth a try to look into Workflow to replace the state machines, rather than writing my own business logic and trying to thread it programmatically. So it's WF vs. the "Old College Try", I suppose. My concern is that there aren't many books on the topic, and since it was implemented in .NET I've heard very little about it being used. I brought this up at work, and another developer mentioned that it's because BizTalk never really caught on and WF was designed for that. So is it broken? Do you think it will be supported long enough not to worry so much? I don't want an ill-functioning process injected into my services, my new babies at work, only to have WF keel over, leaving me with having to replace it with my own code in an emergency - which does not seem like much of a grand scenario to me. Any suggestions or recommendations would be super.

    Read the article

  • FluentNHibernate mapping of composite foreign keys

    - by Faron
    I have an existing database schema and wish to replace the custom data access code with Fluent NHibernate. The database schema cannot be changed, since it already exists in a shipping product, and it is preferable that the domain objects change only minimally, if at all. I am having trouble mapping one unusual schema construct, illustrated by the following table structure:

        CREATE TABLE [Container] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Container] PRIMARY KEY ( [ContainerId] ASC )
        )

        CREATE TABLE [Item] (
            [ItemId] [uniqueidentifier] NOT NULL,
            [ContainerId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Item] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC )
        )

        CREATE TABLE [Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC )
        )

        CREATE TABLE [Item_Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [ItemId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Item_Property] PRIMARY KEY ( [ContainerId] ASC, [ItemId] ASC, [PropertyId] ASC )
        )

        CREATE TABLE [Container_Property] (
            [ContainerId] [uniqueidentifier] NOT NULL,
            [PropertyId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_Container_Property] PRIMARY KEY ( [ContainerId] ASC, [PropertyId] ASC )
        )

    The existing domain model has the following class structure: the Property class contains members representing the property's name and value. The ContainerProperty and ItemProperty classes have no additional members; they exist only to identify the owner of the Property. The Container and Item classes have methods that return collections of ContainerProperty and ItemProperty respectively. Additionally, the Container class has a method that returns a collection of all of the Property objects in the object graph. My best guess is that this was either a convenience method or a legacy method that was never removed. The business logic mainly works with Item (as the aggregate root) and only works with a Container when adding or removing Items. I have tried several techniques for mapping this, but none work, so I won't include them here unless someone asks for them. How would you map this?
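    Not a full answer, but for the composite keys themselves, Fluent NHibernate's CompositeId mapping is the usual starting point. A sketch for Item, with invented entity and property names:

        // Sketch of a composite-id mapping for Item. Entities with composite
        // ids also need Equals/GetHashCode overridden over both key columns.
        public class ItemMap : ClassMap<Item>
        {
            public ItemMap()
            {
                Table("Item");
                CompositeId()
                    .KeyProperty(x => x.ContainerId, "ContainerId")
                    .KeyProperty(x => x.ItemId, "ItemId");
                References(x => x.Container, "ContainerId")
                    .Not.Insert().Not.Update();   // the id mapping already owns this column
            }
        }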

    Read the article

  • How to include many "sub"-queries in a SQL statement to generate file paths for images?

    - by Zachary
    Greetings, I have three fields in a legacy MySQL database/application: image_type, image_of_bush, image_prefix. I need to extract the data into full image file paths, into a .CSV file, where each combination (listed below) is a column. Can it all be done in SQL? Or can you recommend a better way? PHP currently displays the combinations on the product page; this is also part of a larger query, which extracts data from an osCommerce MySQL database.

        CASE ONE - One Horizontal Image
        image_type = "Horizontal Image", image_of_bush = "No Image of Bush"
        IMAGE NAME: image_prefix + _s + .jpg (Example: Albertine_s.jpg)

        CASE TWO - One Vertical Image
        image_type = "Vertical Image", image_of_bush = "No Image of Bush"
        IMAGE NAME: image_prefix + _v + .jpg (Example: Albertine_v.jpg)

        CASE THREE - Two Horizontal Images
        image_type = "Horizontal Image", image_of_bush = "Horizontal Image of Bush"
        FIRST IMAGE NAME: image_prefix + _s + .jpg (Example: Albertine_s.jpg)
        SECOND IMAGE NAME: image_prefix + _bs + .jpg (Example: Albertine_bs.jpg)

        CASE FOUR - Two Vertical Images
        image_type = "Vertical Image", image_of_bush = "Vertical Image of Bush"
        FIRST IMAGE NAME: image_prefix + _v + .jpg (Example: Albertine_v.jpg)
        SECOND IMAGE NAME: image_prefix + _bv + .jpg (Example: Albertine_bv.jpg)

        CASE FIVE - One Horizontal and One Vertical Image
        image_type = "Horizontal Image", image_of_bush = "Vertical Image of Bush"
        FIRST IMAGE NAME: image_prefix + _s + .jpg (Example: Albertine_s.jpg)
        SECOND IMAGE NAME: image_prefix + _bv + .jpg (Example: Albertine_bv.jpg)

        CASE SIX - One Vertical and One Horizontal Image
        image_type = "Vertical Image", image_of_bush = "Horizontal Image of Bush"
        FIRST IMAGE NAME: image_prefix + _v + .jpg (Example: Albertine_v.jpg)
        SECOND IMAGE NAME: image_prefix + _bs + .jpg (Example: Albertine_bs.jpg)

    Read the article

  • Utility of List<T>.Sort() versus List<T>.OrderBy() for a member of a custom container class

    - by ccomet
    I've found myself running back through some old 3.5-framework legacy code, and found some points where there are a whole bunch of lists and dictionaries that must be updated in a synchronized fashion. I've determined that I can make this process infinitely easier to both utilize and understand by converging these into custom container classes of new custom classes. There are some points, however, where I have concerns about organizing the contents of these new container classes by a specific inner property - for example, sorting by the ID number property of one class. As the container classes are primarily based around a generic List object, my first instinct was to write the inner classes with IComparable, and write the CompareTo method that compares the properties. This way, I can just call items.Sort() when I want to invoke the sorting. However, I've been thinking instead about using items = items.OrderBy(Func) instead. This way it is more flexible if I need to sort by any other property. Readability is better as well, since the property used for sorting will be listed in-line with the sort call, rather than having to look up the IComparable code. The overall implementation feels cleaner as a result. I don't care for premature or micro optimization, but I like consistency. I find it best to stick with one kind of implementation for as many cases as is appropriate, and use different implementations where necessary. Is it worth it to convert my code to use the LINQ OrderBy instead of using List.Sort? Is it better practice to stick with the IComparable implementation for these custom containers? Are there any significant mechanical advantages offered by either path that should weigh on the decision? Or is their end functionality equivalent, to the point that it just becomes coder's preference?
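    To make the trade-off concrete, a toy sketch (Widget is an invented stand-in for the inner classes):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Widget : IComparable<Widget>
        {
            public int Id;
            public int CompareTo(Widget other) { return Id.CompareTo(other.Id); }
        }

        class SortDemo
        {
            static void Main()
            {
                var items = new List<Widget> { new Widget { Id = 3 }, new Widget { Id = 1 } };

                items.Sort();                               // in-place, uses IComparable
                items = items.OrderBy(x => x.Id).ToList();  // stable LINQ sort, new list
                items.Sort((a, b) => a.Id.CompareTo(b.Id)); // in-place, key chosen per call

                Console.WriteLine(items[0].Id);             // 1
            }
        }

    One genuine mechanical difference: List<T>.Sort is an unstable in-place sort, while OrderBy performs a stable sort into a new sequence that must be materialized; for equal keys, only the LINQ version preserves the original relative order.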

    Read the article

  • charsets in MySQL replication

    - by niklassaers
    Hi guys, what can I do to ensure that replication will use latin1 instead of utf-8? I'm migrating between a MySQL 5.1.22 server (master) on a Linux system and a MySQL 5.1.42 server (slave) on a FreeBSD system. My replication works well, but when non-ASCII characters are in my varchars, they turn "weird". The Linux/MySQL 5.1.22 master shows the following character set variables:

        character_set_client=latin1
        character_set_connection=latin1
        character_set_database=latin1
        character_set_filesystem=binary
        character_set_results=latin1
        character_set_server=latin1
        character_set_system=utf8
        character_sets_dir=/usr/share/mysql/charsets/
        collation_connection=latin1_swedish_ci
        collation_database=latin1_swedish_ci
        collation_server=latin1_swedish_ci

    while the FreeBSD slave shows:

        character_set_client=utf8
        character_set_connection=utf8
        character_set_database=utf8
        character_set_filesystem=binary
        character_set_results=utf8
        character_set_server=utf8
        character_set_system=utf8
        character_sets_dir=/usr/local/share/mysql/charsets/
        collation_connection=utf8_general_ci
        collation_database=utf8_general_ci
        collation_server=utf8_general_ci

    Setting any of these variables from the MySQL CLI has no effect, and setting them in my.cnf or at the command line makes the server not start. Of course, both servers have the tables in question created the same way, in this case with DEFAULT CHARSET=latin1. Let me give you an example:

        CREATE TABLE `test` (
            `test` varchar(5) DEFAULT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    When I do "INSERT INTO test VALUES ('æøå')" on the master, in a latin1 terminal, then select it on the slave from a latin1-based terminal, it becomes:

        +--------+
        | test   |
        +--------+
        | æøå |
        +--------+

    On a UTF-8-based terminal on the replication slave, test contains:

        +--------+
        | test   |
        +--------+
        | æøå    |
        +--------+

    So my conclusion is that it is converted to utf8, even though the table definition is latin1. Is this a correct conclusion? Of course, on the master, in a latin1 terminal, it still says:

        +------+
        | test |
        +------+
        | æøå  |
        +------+

    Since both system character sets are utf-8, if I set both terminals to utf-8 and do "INSERT INTO test VALUES ('æøå')" again on the master with a utf-8 terminal, on the slave with utf-8 I get:

        +------------+
        | test       |
        +------------+
        | æøà     |
        +------------+

    If my conclusion is correct, all my replicated data is converted to utf8 (if it is utf8, it is treated as latin1 and converted to utf8), while all the old data in the table is, as the CREATE TABLE suggests, latin1. I'd love to convert it all to utf-8 if it weren't for the fact that legacy applications rely on it being latin1, so I need to keep it latin1 while they still exist. What can I do to ensure that the replication reads latin1, treats it as latin1 and writes it on the slave as latin1? Cheers, Nik

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and that we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... A couple of others and I need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network - doing large inserts, searches, updates, etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:
    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't support tons of files in a directory well
    I fear that this will be a great post on the Daily WTF someday if I can't stop it now.

    Read the article

  • Warning: cast increases required alignment

    - by dash-tom-bang
    I'm currently working on a platform for which a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the size of the target of the cast.

        struct Message
        {
            int32_t id;
            int32_t type;
            int8_t  data[16];
        };

        int32_t GetMessageInt(const Message& m)
        {
            return *reinterpret_cast<const int32_t*>(&m.data[0]);
        }

    Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is that I've got data coming from somewhere, I know that it's aligned (because I need the id and type to be aligned), and yet I get the message that the cast is increasing the alignment - in the example case, to 4. Now I know that I can suppress the warning with an argument to the compiler, and I know that I can cast the bit inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot, because we load a lot of data off of disk, and that data comes in as char buffers so that we can easily pointer-advance), but can anyone give me any other thoughts on this problem? I mean, to me it seems like such an important and common operation that you wouldn't want to warn about it, and if there is actually the possibility of doing it wrong then suppressing the warning isn't going to help. Finally, can't the compiler know, as I do, how the object in question is actually aligned in the structure, so it shouldn't need to worry about the alignment on that particular object unless it got bumped a byte or two?

    Read the article

  • Question about Reporting and Data Warehousing Software bundled with SQL Server 2005

    - by anonymous user
    We currently use SQL Server 2005 Enterprise for our fairly large application, which has its roots in pre-7.0 SQL Server. The tables are normalized and designed mainly for the application. The developers, for the most part, have the legacy SQL Server mindset: only using the part of T-SQL that existed back in 7.0, and none of the newer features of T-SQL or those bundled with 2005. We're currently trying to build on-demand reports using some crappy third-party software, and will eventually try to build a data warehouse using more of the same crappy third-party software (name removed to protect the guilty; don't ask, I will not tell). The rationale for this was that we didn't want to spend more money to buy this additional software from Microsoft (this was not my decision, I had no input, but it is my problem now). But from what I can tell, Enterprise includes all of these tools - or am I missing something? What comes bundled with SQL Server 2005 Enterprise as far as reporting and data warehousing? Will we need to purchase anything else? Is there actually anything else that can be purchased from Microsoft in this regard?

    Read the article

  • Which plugin framework to use for native C++/Win32

    - by Kerido
    Hi everybody. I have an extensible product that allows 3rd-party developers to extend it. The aspects that can be extended are documented, and interfaces are provided in the SDK. Currently I'm using COM, and I'm getting pretty comfortable with it. I especially like the ability to provide interface versioning in a unified manner. I consider it to be a requirement, because you never know what you're gonna need in the future. Just to be precise, here's an example. Let's suppose I have an interface representing a particular feature:

        class IFeature
        {
        public:
            virtual void DoFeatureTask() = 0;
        };

    Then, after the interface is already documented (and someone may have used it in plugin code), I realize I need more from this feature. Maybe there is an option I need to provide. I just define the second version:

        class IFeature2
        {
        public:
            virtual void DoFeatureTask(int theOption) = 0;
        };

    I don't mean I intend to have lots of versions, but it just may happen. In COM, because every interface is associated with a GUID, I can query for a preferred implementation, determine its presence, and, finally, fall back to a legacy one. But after glancing through C++/COM-related questions, I noticed many recommendations against COM. So maybe it's not the best choice and I'm just too old-school. Can you advise on an alternative?

    Read the article

  • How to add second JOIN clause in Linq To Sql?

    - by Refracted Paladin
    I am having a lot of trouble coming up with the LINQ equivalent of this legacy stored procedure. The biggest hurdle is that it doesn't seem to want to let me add a second condition on the join with tblAddress; I am getting a Cannot resolve method... error. Can anyone point out what I am doing wrong? Below is, first, the SPROC I need to convert and, second, my LINQ attempt so far, which is FULL OF FAIL! Thanks.

        SELECT dbo.tblPersonInsuranceCoverage.PersonInsuranceCoverageID,
               dbo.tblPersonInsuranceCoverage.EffectiveDate,
               dbo.tblPersonInsuranceCoverage.ExpirationDate,
               dbo.tblPersonInsuranceCoverage.Priority,
               dbo.tblAdminInsuranceCompanyType.TypeName AS CoverageCategory,
               dbo.tblBusiness.BusinessName,
               dbo.tblAdminInsuranceType.TypeName AS TypeName,
               CASE WHEN dbo.tblAddress.AddressLine1 IS NULL THEN '' ELSE dbo.tblAddress.AddressLine1 END + ' ' +
               CASE WHEN dbo.tblAddress.CityName IS NULL THEN '' ELSE '<BR>' + dbo.tblAddress.CityName END + ' ' +
               CASE WHEN dbo.tblAddress.StateID IS NULL THEN ''
                    WHEN dbo.tblAddress.StateID = 'ns' THEN ''
                    ELSE dbo.tblAddress.StateID END AS Address
        FROM dbo.tblPersonInsuranceCoverage
        LEFT OUTER JOIN dbo.tblInsuranceCompany
            ON dbo.tblPersonInsuranceCoverage.InsuranceCompanyID = dbo.tblInsuranceCompany.InsuranceCompanyID
        LEFT OUTER JOIN dbo.tblBusiness
            ON dbo.tblBusiness.BusinessID = dbo.tblInsuranceCompany.BusinessID
        LEFT OUTER JOIN dbo.tblAddress
            ON dbo.tblAddress.BusinessID = dbo.tblBusiness.BusinessID
            AND tblAddress.AddressTypeID = 'b'
        LEFT OUTER JOIN dbo.tblAdminInsuranceCompanyType
            ON dbo.tblPersonInsuranceCoverage.InsuranceCompanyTypeID = dbo.tblAdminInsuranceCompanyType.InsuranceCompanyTypeID
        LEFT OUTER JOIN dbo.tblAdminInsuranceType
            ON dbo.tblPersonInsuranceCoverage.InsuranceTypeID = dbo.tblAdminInsuranceType.InsuranceTypeID
        WHERE tblPersonInsuranceCoverage.PersonID = @PersonID

        var coverage = from insuranceCoverage in context.tblPersonInsuranceCoverages
                       where insuranceCoverage.PersonID == personID
                       join insuranceCompany in context.tblInsuranceCompanies
                           on insuranceCoverage.InsuranceCompanyID equals insuranceCompany.InsuranceCompanyID
                       join address in context.tblAddresses
                           on insuranceCompany.tblBusiness.BusinessID equals address.BusinessID
                       where address.AddressTypeID == "b"
                       select new
                       {
                           insuranceCoverage.PersonInsuranceCoverageID,
                           insuranceCoverage.EffectiveDate,
                           insuranceCoverage.ExpirationDate,
                           insuranceCoverage.Priority,
                           CoverageCategory = insuranceCompany.tblAdminInsuranceCompanyType.TypeName,
                           insuranceCompany.tblBusiness.BusinessName,
                           TypeName = insuranceCoverage.InsuranceTypeID,
                           Address =
                       };
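    One pattern that usually gets past the Cannot resolve method error (a sketch, not a tested fix for this exact query): LINQ joins are strictly equijoins, so the second condition has to be folded into a composite, anonymous-type key whose property names and types match on both sides. The failing join clause would become something like:

        // Fold the AddressTypeID = 'b' condition into the join key itself;
        // 'Id' and 'Type' are invented member names, and this assumes
        // AddressTypeID maps to a string in the generated classes.
        join address in context.tblAddresses
            on new { Id = insuranceCompany.tblBusiness.BusinessID, Type = "b" }
            equals new { Id = address.BusinessID, Type = address.AddressTypeID }

    Note also that the original SPROC uses LEFT OUTER JOINs; the LINQ equivalent of those is join ... into followed by DefaultIfEmpty(), which would need to be layered in as well.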

    Read the article

  • Going "behind Hibernate's back" to update foreign key values without an associated entity

    - by Alex Cruise
    Updated: I wound up "solving" the problem by doing the opposite! I now have the entity reference field set as read-only (insertable=false, updatable=false) and the foreign key field read-write. This means I need to take special care when saving new entities, but on querying, the entity properties get resolved for me. I have a bidirectional one-to-many association in my domain model, where I'm using JPA annotations and Hibernate as the persistence provider. It's pretty much your bog-standard parent/child configuration, with one difference being that I want to expose the parent's foreign key as a separate property of the child alongside the reference to a parent instance, like so:

        @Entity
        public class Child {
            @Id @GeneratedValue
            Long id;

            @Column(name="parent_id", insertable=false, updatable=false)
            private Long parentId;

            @ManyToOne(cascade=CascadeType.ALL)
            @JoinColumn(name="parent_id")
            private Parent parent;

            private long timestamp;
        }

        @Entity
        public class Parent {
            @Id @GeneratedValue
            Long id;

            @OrderBy("timestamp")
            @OneToMany(mappedBy="parent", cascade=CascadeType.ALL, fetch=FetchType.LAZY)
            private List<Child> children;
        }

    This works just fine most of the time, but there are many (legacy) cases when I'd like to put an invalid value in the parent_id column without having to create a bogus Parent first. Unfortunately, Hibernate won't save values assigned to the parentId field due to insertable=false, updatable=false, which it requires when the same column is mapped to multiple properties. Is there any nice way to "go behind Hibernate's back" and sneak values into that field without having to drop down to JDBC or implement an interceptor? Thanks!

    Read the article

  • Response.Redirect to classic ASP failing

    - by jeff
    I have the following code, pasted below. For some reason, the Response.Redirect seems to be failing; it maxes out the CPU on my server and just doesn't do anything. The .NET code uploads the file fine, but does not redirect to the ASP page to do the processing. I know it is absolute rubbish to have .NET code redirecting to classic ASP, but it is a legacy app. I have tried putting false or true etc. at the end of the redirect, as I have read other people have had issues with this. Please help, as it's driving me insane! It's so strange: it runs locally on my machine but won't run on my server!

        public void btnUploadTheFile_Click(object Source, EventArgs evArgs)
        {
            // need to check that the uploaded file is an xls file.
            string strFileNameOnServer = "PJI3.txt";
            string strBaseLocation = ConfigurationSettings.AppSettings["str_file_location"];
            if ("" == strFileNameOnServer)
            {
                txtOutput.InnerHtml = "Error - a file name must be specified.";
                return;
            }
            if (null != uplTheFile.PostedFile)
            {
                try
                {
                    uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
                    txtOutput.InnerHtml = "File <b>" + strBaseLocation + strFileNameOnServer + "</b> uploaded successfully";
                    Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp");
                }
                catch (Exception e)
                {
                    txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + e.ToString();
                }
            }
        }
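    One thing worth ruling out (an educated guess, not a verified diagnosis): the one-argument Response.Redirect ends the response by throwing a ThreadAbortException, and here the call sits inside the try block, so the catch swallows it and reports it as a save failure. Moving the redirect out of the try/catch and using the two-argument overload sidesteps both issues:

        // Sketch: save inside the try, redirect outside it, so the
        // ThreadAbortException from the one-argument Redirect overload
        // can't be caught by mistake; 'false' skips the abort entirely.
        bool uploaded = false;
        try
        {
            uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
            uploaded = true;
        }
        catch (Exception e)
        {
            txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + e.ToString();
        }
        if (uploaded)
        {
            Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp", false);
        }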

    Read the article

  • JSLINT error: Move all 'var' declarations to the top of the function.

    - by Oleg Yaroshevych
    The JSLINT site updated, and I cannot check JS scripts anymore. For me, this warning is not critical, and I don't want to go through thousands of lines to fix it; I want to find more critical problems. Does anybody know how to turn off this error, or how to use the legacy JSLINT? UPDATE - Example:

        function doSomethingWithNodes(nodes){
            this.doSomething();
            for (var i = 0; i < nodes.length; ++i){
                this.doSomethingElse(nodes[i]);
            }
            doSomething(); // want to find this problem
        }

    jslint.com output:

        Error: Problem at line 4 character 8: Move all 'var' declarations to the top of the function.
        for (var i = 0; i < nodes.length; ++i){
        Problem at line 4 character 8: Stopping, unable to continue. (44% scanned).

    Problem: Having variables at the top of the function is a new requirement. I cannot use JSLINT to test my code, because it stops scanning the script on this error. I have a lot of code, and I do not want to treat this warning as a critical error.

    Read the article

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. In the project I'm involved in, we have a legacy database with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to this primary key. In LINQ we write something like:

        var result = from t in context.TableB
                     select t.TableA;
        foreach (var r in result)
            Console.WriteLine(r.someFieldInTableA);

    If TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A", we want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive? Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am, however, still interested in finding out whether there is any way to change the behaviour of Entity Framework.

    Read the article

  • Windows loader problem - turn on verbose mode

    - by doobop
    Hi, I'm in the process of reorganizing some of the legacy libraries in our application, which has unmanaged code calling into libraries of managed code. While I have the code reorganized, it produces the following loader error:

        ...
        'app.exe': Loaded 'C:\WINDOWS\system32\CsDisp.dll'
        'app.exe': Loaded 'C:\WINDOWS\system32\psapi.dll'
        'app.exe': Loaded 'C:\WINDOWS\system32\shell32.dll'
        'app.exe': Loaded 'C:\appCode\Debug\daq206_32.dll', Binary was not built with debug information.
        'app.exe': Loaded 'C:\appCode\Debug\SiUSBXp.dll', Binary was not built with debug information.
        'app.exe': Loaded 'C:\appCode\Debug\AdlinkDAQ.dll', Symbols loaded.
        'app.exe': Loaded 'C:\WINDOWS\system32\P9842.dll', Binary was not built with debug information.
        LDR: LdrRelocateImageWithBias() failed 0xc0000018
        LDR: OldBase : 10000000
        LDR: NewBase : 00A80000
        LDR: Diff : 0x7c90d6fa0012f6cc
        LDR: NextOffset : 00000000
        LDR: *NextOffset : 0x0
        LDR: SizeOfBlock : 0xa80000
        Debugger:: An unhandled non-continuable exception was thrown during process load

    I believe error 0xc0000018 indicates an overlapping address range. So I have two questions. First, what linker options may cause this error? I'm currently linking with /DYNAMICBASE:NO and /FIXED:No, as this was how some of the previous libraries were set up. Second, is there a way to turn on verbose mode for the loader so I can see exactly what it's trying to load? P9842 is a third-party library, so I imagine the loader is getting to one of my libraries after P9842 and failing on that one. Can I narrow it down? Thanks.
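    To the second question, one known switch (from the Debugging Tools for Windows, not from this thread) is GFlags' "Show Loader Snaps" flag, which makes the loader trace each DLL as it resolves it:

        rem Enable loader snaps for app.exe; the traces then appear in the
        rem debugger output window the next time it runs under a debugger.
        gflags /i app.exe +sls

    Disable it afterwards with gflags /i app.exe -sls, since the output is voluminous.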

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >