Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • How should a one-man development shop document their code?

    - by CKoenig
    Hi, please let me first describe my situation. I work in an IT department for a small-to-medium sized industrial company, and basically I'm the only real developer (sometimes a second guy joins in for his own projects). I program mostly in C#/.NET, and only for internal needs (intranet, reporting, data-driven apps, some mobile apps, ...). My question is: how should I document my work? It's a highly dynamic environment (the features and bug fixes I implement are tested by me in production and go live, often within a day). If I wrote technical documentation like MSDN's, or even overview diagrams, it would take me more time to keep them in sync than the whole programming process takes. I also feel it's a waste of time, because I would be the only one who ever read it. I do understand that if I get sick, leave, or forget things, this documentation would be valuable. PS: well, of course you are right - the question is how much, and how/where. I try using XML doc comments for the publicly exposed parts, but as I'm a believer in self-documenting code, the comments mostly restate in plain text what you can already read from the method head itself :( Maybe using the remarks section is the key, but if you have 30 lines of code with a 15-line XML comment in front, it just looks dirty. (Sorry for posting it here, but our firewall rejects JSON :( )
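
    For reference, a minimal sketch of the remarks-section style mentioned above (the method and table names are hypothetical) - a one-line summary for IntelliSense, with the "why" pushed into remarks so the comment says something the signature cannot:

        /// <summary>Recalculates the monthly report totals.</summary>
        /// <remarks>
        /// Totals are cached in the REPORT_CACHE table because the source query
        /// takes ~30s against the ERP database; call this after any bulk import.
        /// </remarks>
        public void RebuildReportCache(int year, int month)
        {
            // ...
        }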

    Read the article

  • Rails named_scope across multiple tables

    - by wakiki
    I'm trying to tidy up my code by using named_scopes in Rails 2.3.x, but I'm struggling with the has_many :through associations. I'm wondering if I'm putting the scopes in the wrong place... Here's some pseudo-code below. The problem is that the :accepted named scope is defined twice... I could of course call :accepted something different, but these are the statuses on the table and it seems wrong to call them something different. Can anyone shed light on whether I'm doing the following correctly or not? I know Rails 3 is out, but it's still in beta, and this is a big project, so I can't use it in production yet.

        class Person < ActiveRecord::Base
          has_many :connections
          has_many :contacts, :through => :connections
          named_scope :accepted, :conditions => ["connections.status = ?", Connection::ACCEPTED]
          # the :accepted named_scope is duplicated
          named_scope :accepted, :conditions => ["memberships.status = ?", Membership::ACCEPTED]
        end

        class Group < ActiveRecord::Base
          has_many :memberships
          has_many :members, :through => :memberships
        end

        class Connection < ActiveRecord::Base
          belongs_to :person
          belongs_to :contact, :class_name => "Person", :foreign_key => "contact_id"
        end

        class Membership < ActiveRecord::Base
          belongs_to :person
          belongs_to :group
        end

    I'm trying to run something like person.contacts.accepted and group.members.accepted, which are two different things. Shouldn't the named_scopes be in the Membership and Connection classes? One solution is to just call the two named scopes something different in the Person class, or even to create separate associations (i.e. has_many :accepted_members and has_many :accepted_contacts), but that seems hackish, and in reality I have many more statuses than just accepted (i.e. banned members, ignored connections, pending, requested, etc.)
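
    For what it's worth, one direction (an untested sketch) is to define :accepted once on each join model, where the status column actually lives, and reach the people through the join records - at the cost of a Ruby-side map instead of a single :through query:

        class Connection < ActiveRecord::Base
          belongs_to :person
          belongs_to :contact, :class_name => "Person", :foreign_key => "contact_id"
          named_scope :accepted, :conditions => { :status => ACCEPTED }
        end

        class Membership < ActiveRecord::Base
          belongs_to :person
          belongs_to :group
          named_scope :accepted, :conditions => { :status => ACCEPTED }
        end

        # person.connections.accepted and group.memberships.accepted now work,
        # with no duplicate scope names; the targets come from the join rows:
        person.connections.accepted.map(&:contact)
        group.memberships.accepted.map(&:person)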

    Read the article

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages. The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. If the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated, served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (thereby avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage. However, one issue that I am having is that I can't figure out how to programmatically check when the version of code uploaded to App Engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe it is not the final solution, since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache expires). Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
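
    On the one concrete sub-question - detecting when the deployed code changed - a sketch, assuming the Python runtime: App Engine exposes the deployment version in the environment, and that string changes on every upload, so it works as a cache key. Versioning the key also sidesteps the expiry concern with memcache (the same key scheme works just as well for datastore-backed caching):

        import os
        from google.appengine.api import memcache

        # CURRENT_VERSION_ID changes with every deployment, so cache entries
        # written by older code versions are simply never hit again.
        VERSION = os.environ.get('CURRENT_VERSION_ID', 'dev')

        def get_cached_page(page_key, generate_fn):
            key = '%s:%s' % (VERSION, page_key)
            html = memcache.get(key)
            if html is None:
                html = generate_fn()
                memcache.set(key, html)  # no expiry needed; the key is versioned
            return html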

    Read the article

  • ColdFusion blank page in IE7 on refresh?

    - by richardtallent
    I'm new to ColdFusion and have a very basic problem that's really slowing me down. I'm making edits in a text editor and refreshing the page in web browsers for testing. Standard web dev stuff - no browser-sniffing, redirection, or other weirdness, and no proxies involved. When I refresh the page in Chrome or Firefox, everything works fine, but when I refresh in IE7, I get a blank page. View Source shows me:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
        <HTML><HEAD>
        <META http-equiv=Content-Type content="text/html; charset=utf-8"></HEAD>
        <BODY></BODY></HTML>

    That's it. While I am rendering to the transitional DTD, the real head contains a title, etc. My development server is CF 9, production is 8, and this problem has been happening on both. It seems to happen only on pages that are the result of a POST action. I've never experienced this in ASP.NET (my usual development environment) using the same browsers.

    Read the article

  • MySQL Efficiency Issue - How to find the right balance of normalization...?

    - by Foo
    I'm fairly new to working with relational databases, but have read a few books and know the basics of good design. I'm facing a design decision and I'm not sure how to continue. Here's a very oversimplified version of what I'm building: people can rate photos 1-5, and I need to display the average votes on the picture while keeping track of the individual votes. For example, 12 people voted 1, 7 people voted 2, etc. The normalization freak in me initially designed the table structure like this:

        Table pictures:  id* | picture | userID
        Table ratings:   id* | pictureID | userID | rating

    with all the foreign key constraints and everything set as they should be. Every time someone rates a picture, I just insert a new record into ratings and am done with it. To find the average rating of a picture, I'd just run something like this:

        SELECT AVG(rating) FROM ratings WHERE pictureID = '5' GROUP BY pictureID

    Having it set up this way lets me run my fancy statistics too; I can easily find who rated a certain picture a 3, and what not. Now I'm thinking that if there's a crapload of ratings (which is very possible in what I'm really designing), finding the average will become very expensive and painful. Using a non-normalized version would seem to be more efficient, e.g.:

        Table picture:  id | picture | userID | ratingOne | ratingTwo | ratingThree | ratingFour | ratingFive

    To calculate the average, I'd just have to select a single row. It seems so much more efficient, but so much uglier. Can someone point me in the right direction of what to do? My initial research shows that I have to "find the right balance", but how do I go about finding that balance? Any articles or additional reading would be appreciated as well. Thanks.
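
    A common middle ground (a sketch; the summary table and column names are made up) is to keep the normalized ratings table as the source of truth and maintain a small per-picture summary alongside it, updated on each vote:

        CREATE TABLE picture_rating_summary (
            pictureID  INT NOT NULL PRIMARY KEY,
            vote_count INT NOT NULL DEFAULT 0,
            vote_sum   INT NOT NULL DEFAULT 0
        );

        -- on each new vote (here: rating 4 for picture 5), inside the same
        -- transaction as the INSERT into ratings:
        INSERT INTO picture_rating_summary (pictureID, vote_count, vote_sum)
        VALUES (5, 1, 4)
        ON DUPLICATE KEY UPDATE
            vote_count = vote_count + 1,
            vote_sum   = vote_sum + 4;

        -- the average is then a single-row read:
        SELECT vote_sum / vote_count AS avg_rating
        FROM picture_rating_summary
        WHERE pictureID = 5;

    That keeps the per-vote detail for the fancy statistics while making the hot read a primary-key lookup.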

    Read the article

  • Updating nullability of columns in SQL 2008

    - by Shaul
    I have a very wide table, containing lots and lots of bit fields. These bit fields were originally set up as nullable. Now we've just made a decision that it doesn't make sense to have them nullable; the value is either Yes or No, default No. In other words, the schema should change from:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit null,
            BitField2 bit null,
            ...
            BitFieldN bit null
        )

    to:

        create table MyTable(
            ID bigint not null,
            Name varchar(100) not null,
            BitField1 bit not null,
            BitField2 bit not null,
            ...
            BitFieldN bit not null
        )

        alter table MyTable add constraint DF_BitField1 default 0 for BitField1
        alter table MyTable add constraint DF_BitField2 default 0 for BitField2
        alter table MyTable add constraint DF_BitField3 default 0 for BitField3

    So I've just gone in through SQL Management Studio, updating all these fields to non-nullable, default value 0. And guess what - when I try to update it, SQL Management Studio internally recreates the table and then tries to reinsert all the data into the new table... including the null values! Which of course generates an error, because it's explicitly trying to insert a null value into a non-nullable column. Aaargh! Obviously I could run N update statements of the form:

        update MyTable set BitField1 = 0 where BitField1 is null
        update MyTable set BitField2 = 0 where BitField2 is null

    but as I said before, there are N fields out there, and what's more, this change has to propagate out to several identical databases. Very painful to implement manually. Is there any way to make the table modification just ignore the null values and allow the default rule to kick in when you attempt to insert a null value?
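
    Since the columns all follow the same pattern, one way to avoid hand-writing N statements (a sketch, assuming every bit column in MyTable should get the same treatment) is to generate them from the catalog views and run the output:

        -- emits, per bit column: backfill nulls, tighten nullability, add the default
        SELECT 'UPDATE MyTable SET ' + c.name + ' = 0 WHERE ' + c.name + ' IS NULL;'
             + ' ALTER TABLE MyTable ALTER COLUMN ' + c.name + ' bit NOT NULL;'
             + ' ALTER TABLE MyTable ADD CONSTRAINT DF_' + c.name
             + ' DEFAULT 0 FOR ' + c.name + ';'
        FROM sys.columns c
        WHERE c.object_id = OBJECT_ID('MyTable')
          AND c.system_type_id = TYPE_ID('bit');

    The same script can then be pointed at each of the identical databases.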

    Read the article

  • Strange file upload issue with an ASP.NET site on a web farm

    - by Coov
    I have a basic ASP.NET file upload page. When I test file uploads from my local machine, it works fine. When I test file uploads from our dev machine, it works fine. When I deploy the site to our production web farm, it behaves strangely. If I access the site from off the network, I can load file after file without issue. If I access the site from within our network, I can load the first file just fine, but any subsequent files result in a "bad sequence of commands" error. I'm not sure if this is a web farm issue, a network issue, or something else. It feels like a connection is not being disposed of properly, but it doesn't make sense why everything works fine remotely.

    Markup:

        <asp:FileUpload ID="FileUpload1" runat="server" Width="350px" />
        <asp:Button ID="btnSubmit" runat="server" Text="Upload" onclick="btnSubmit_Click" />

    Code:

        if (FileUpload1.HasFile)
        {
            FtpWebRequest ftpRequest;
            FtpWebResponse ftpResponse;
            ftpRequest = (FtpWebRequest)FtpWebRequest.Create(
                new Uri("ftp://ftp.myftpsite.com/" + FileUpload1.FileName));
            ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
            ftpRequest.Proxy = null;
            ftpRequest.UseBinary = true;
            ftpRequest.Credentials = new NetworkCredential("username", "password");
            ftpRequest.KeepAlive = false;

            byte[] fileContents = new byte[FileUpload1.PostedFile.ContentLength];
            using (Stream fr = FileUpload1.PostedFile.InputStream)
            {
                // note: Stream.Read is not guaranteed to fill the buffer in one
                // call; large uploads may need a read loop here
                fr.Read(fileContents, 0, FileUpload1.PostedFile.ContentLength);
            }
            using (Stream writer = ftpRequest.GetRequestStream())
            {
                writer.Write(fileContents, 0, fileContents.Length);
            }
            ftpResponse = (FtpWebResponse)ftpRequest.GetResponse();
            Response.Write(ftpResponse.StatusDescription);
        }

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure!! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure, as compared to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry, http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html:

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. I've noticed from my own testing that the following also seem to have issues when migrating from SQL Server 2008 to SQL Azure: XML types (the MSDN page does mention large custom types - I guess that may include this?? even if the data schema is really small?) and multi-part views. I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • Protecting grails-melody with a Grails filter

    - by batmannavneet
    I have an application where I am using Spring Security along with grails-melody. I am planning to run grails-melody in the production environment, but I don't want visitors to have access to it. How should I achieve that? I tried creating a filter in Grails (just showing a sample of what I am trying, not the actual code):

        def filters = {
            allURIs(uri: '/**') {
                before = {
                    //...
                    if (request.forwardURI.indexOf("admin") != -1
                            || request.forwardURI.indexOf("monitoring") != -1) {
                        response.sendError 404
                        return false
                    }
                }
            }
        }

    But this doesn't work, as the request for "monitoring" doesn't hit this filter. I don't even want the user to know that such a URL exists, so I want the filter to show the 404 error page whenever "monitoring" is the URL. That's also the reason why I don't want to protect this URL with Spring Security, as it would show an "access denied" page. Basically, I want the URL to exist but be invisible to users, with access open only to certain IP addresses for these special URLs. On another note, is it possible to write a Grails filter that acts before the Spring Security filter is hit? I want to be able to do some filtering before I forward requests to Spring Security. Writing a Grails filter like the one above doesn't help: the Spring Security filter gets hit first if I access a protected resource, and this filter doesn't get called. Thanks
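
    A likely reason the Grails filter never fires is that both grails-melody and Spring Security install plain servlet filters, which run in web.xml order, ahead of the Grails filter chain. One hedged sketch of a workaround (untested; the class, the IP list, and the web.xml wiring are hypothetical - in Grails the web.xml template is obtained via `grails install-templates`): a servlet filter declared and mapped before the other two, returning 404 unless the client IP is whitelisted:

        import java.io.IOException;
        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Declare this (filter + filter-mapping) ahead of the melody and
        // Spring Security entries in web.xml so it runs first.
        public class MonitoringAccessFilter implements Filter {
            private static final Set<String> ALLOWED =
                    new HashSet<String>(Arrays.asList("127.0.0.1", "10.1.2.3")); // hypothetical IPs

            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                String uri = request.getRequestURI();
                if ((uri.contains("monitoring") || uri.contains("admin"))
                        && !ALLOWED.contains(request.getRemoteAddr())) {
                    // hide the page entirely rather than showing "access denied"
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }
                chain.doFilter(req, res);
            }

            public void init(FilterConfig config) {}
            public void destroy() {}
        }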

    Read the article

  • Child sProc cannot reference a Local temp table created in parent sProc

    - by John Galt
    On our production SQL2000 instance, we have a database with hundreds of stored procedures, many of which use a technique of creating a #TEMP table early on in the code; various inner stored procedures then get EXECUTEd by this parent sProc. In SQL2000, the inner or "child" sProcs have no problem INSERTing into #TEMP or SELECTing data from #TEMP. In short, I assume they can all refer to this #TEMP because they use the same connection. In testing with SQL2008, I see two manifestations of different behavior. First, at design time, the new "IntelliSense" feature in Management Studio complains, when editing the child sProc, that #TEMP is an "invalid object name". But worse, at execution time the invoked parent sProc fails inside the nested child sProc. Someone suggested that the solution is to change to ##TEMP, which is a global temporary table that can be referenced from different connections. That seems too drastic a proposal, both in the amount of work to chase down all the problem spots and in the possible/probable nasty effects when these sProcs are invoked from web applications (i.e. multiuser issues). Is this indeed a change in behavior in SQL2005 or SQL2008 regarding #TEMP (local temp tables)? We skipped 2005, but I'd like to learn more precisely why this is occurring before I go off and try to hack out the needed fixes. Thanks.
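
    A minimal repro of the pattern (a sketch for testing side by side on 2000 and 2008; names are made up) - local temp tables created in a parent are documented to be visible to procedures it EXECs, so isolating the failure to this shape would rule the technique itself in or out:

        CREATE PROCEDURE dbo.ChildProc AS
            INSERT INTO #Work (val) VALUES (1);   -- refers to the caller's #Work
            SELECT val FROM #Work;
        GO

        CREATE PROCEDURE dbo.ParentProc AS
            CREATE TABLE #Work (val int);
            EXEC dbo.ChildProc;                   -- same session, same scope chain
        GO

        EXEC dbo.ParentProc;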

    Read the article

  • How can I display an ASP.NET MVC HTML part from one application in another

    - by Frank Sessions
    We have several ASP.NET MVC apps in the following setup:

        SecurityApp (root application - handles forms auth for SSO and has a profile edit page)
        Application1 (virtual directory)
        Application2 (virtual directory)
        Application3 (virtual directory)

    so that domain.com points to SecurityApp, and domain.com/Application1 etc. point to their associated virtual directories. All of our Single Sign On (SSO) is working properly using forms authentication. Based on the user's permissions when logging in, a menu listing their available applications and a logout link is generated and saved in the cache - this menu displays fine whenever the user is in SecurityApp (editing their profile), but we cannot figure out how to get the applications in the virtual directories to display the same application menu. We have tried:

    1) Using JSONP to make a request that returns the HTML for the menu. The AJAX call returns the HTML; however, because User.IsAuthenticated is false, the menu comes back empty.

    2) Creating a user control and including it, along with the DLLs for the SecurityApp project - this works; however, we don't want to have to include all the DLLs for the SecurityApp project in every application that we create (along with all the app settings in the web.config).

    We would like this to be as simple as possible to implement, so that anyone creating a new app can add the menu to their application in as few steps as possible... Any ideas? To clarify - we are using ASP.NET MVC 1.0, since these apps are in production and we do not have the okay to go to ASP.NET MVC 2.0 (unfortunately).
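
    One hedged sketch of a third option (the endpoint name and domain are hypothetical): since the forms-auth cookie is already shared across the apps for SSO, each application could fetch the menu server-side and forward the visitor's auth cookie, so the menu endpoint sees an authenticated user - unlike the browser-side JSONP call:

        using System.IO;
        using System.Net;
        using System.Web;
        using System.Web.Security;

        public static class MenuHelper
        {
            public static string GetMenuHtml(HttpRequestBase request)
            {
                var req = (HttpWebRequest)WebRequest.Create("http://domain.com/Menu/Render");
                var auth = request.Cookies[FormsAuthentication.FormsCookieName];
                if (auth != null)
                {
                    // forward the shared forms-auth ticket to the menu endpoint
                    req.CookieContainer = new CookieContainer();
                    req.CookieContainer.Add(new Cookie(auth.Name, auth.Value, "/", "domain.com"));
                }
                using (var resp = (HttpWebResponse)req.GetResponse())
                using (var reader = new StreamReader(resp.GetResponseStream()))
                {
                    return reader.ReadToEnd(); // worth caching per user to avoid a hop per page view
                }
            }
        }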

    Read the article

  • What is the best way to iterate over a large result set in JDBC/iBatis 3?

    - by paul_sns
    We're trying to iterate over a large number of rows from the database and convert those into objects. The behavior will be as follows: the result will be sorted by sequence id, and a new object will be created when the sequence id changes. The created object will be sent to an external service, and will sometimes have to wait before the next one is sent (which means the next set of data will not be used immediately). We have already invested code in iBatis 3, so an iBatis solution would be the best approach for us (we've tried using RowBounds, but haven't seen how it does the iteration under the hood). We'd like to balance minimizing memory usage against reducing the number of DB trips. We're also open to a pure JDBC approach, but we'd like the solution to work on different databases. UPDATE: We need to make as few calls to the DB as possible (one call would be the ideal scenario) while also preventing the application from using too much memory. Are there any other solutions out there for this type of problem, be it pure JDBC or any other technology? Thanks, and I hope to hear your insights on this.
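
    A sketch of the plain-JDBC shape of this (table and column names are invented, and a javax.sql.DataSource is assumed): one query, forward-only cursor, with a fetch-size hint so the driver streams rows in batches instead of materializing the whole result. Two driver quirks worth noting: PostgreSQL only uses a cursor with autocommit off, and MySQL's Connector/J only streams with a fetch size of Integer.MIN_VALUE.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        Connection con = dataSource.getConnection();
        con.setAutoCommit(false); // required by PostgreSQL for cursor-based fetching
        PreparedStatement ps = con.prepareStatement(
                "SELECT sequence_id, payload FROM events ORDER BY sequence_id",
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        ps.setFetchSize(500); // hint: fetch rows in batches, not all at once
        ResultSet rs = ps.executeQuery();

        long current = Long.MIN_VALUE;
        while (rs.next()) {
            long seq = rs.getLong("sequence_id");
            if (seq != current) {
                // sequence id changed: flush the object built so far, start a new one
                current = seq;
            }
            // ... accumulate this row into the current object ...
        }

    In iBatis 3, the analogous hook is a ResultHandler passed to SqlSession.select(), which receives one row at a time without building the full list in memory.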

    Read the article

  • Reasons for & against a Database

    - by dbemerlin
    Hi, I had a discussion with a coworker about the architecture of a program I'm writing, and I'd like some more opinions. The situation: the program should update at near-realtime (+/- 1 minute). It involves the movement of objects on a coordinate system. Some events occur at regular intervals (i.e. creation of the objects), and movements can change at any time through user input. My solution was: build a server that runs continuously and stores the data internally, dumping a state-of-the-program snapshot at regular intervals to protect against power failures and/or crashes. He argued that the program requires a database, and that I should use cron jobs to update the data: movement information can be stored as start point, end point, and speed, and the position updated in the cron job (along with collisions with other objects) by calculating from direction and speed. His reasons:

    - The continuously running server requires more CPU & memory.
    - Power failures/crashes might destroy data.
    - Databases are faster.

    My reasons against this are mostly:

    - Not very precise, as events can only occur at full minutes (wouldn't be that bad, though).
    - Requires a (possibly costly) transformation of data on every run, from relational data to objects.
    - An RDBMS is a general solution for a specialized problem, so a specialized solution should be more efficient.
    - Power failures (or other crashes) can leave the data in an undefined state, with only partially updated data, unless (possibly costly) precautions (like transactions) are taken.

    What are your opinions on this? Which arguments can you add for either side?

    Read the article

  • Looking for a good dev environment for OSGi bundles

    - by Riduidel
    Hi, I'm currently investigating dev environments for OSGi bundles. My goal is to find a way to develop, test, and debug the bundles I'll be coding, with ease. Besides, I have some "cultural" requirements. I want to be able to use Java continuous integration servers (typically, Hudson). As a consequence of that first requirement, I want a repeatable, one-click build process; my typical tool for that is Maven. And finally, being a long-term Eclipse user, with m2eclipse at hand to merge my Eclipse environment with my Maven one, I obviously want to be able to test and debug with that IDE. So far, here is what I know I can use (and have already tested): maven-bundle-plugin and maven-ipojo-plugin, which both offer clean packaging facilities. I have tested Maven Pax (and Eclipse Pax) and am not satisfied with either: Maven Pax generates a very heavy project, where adding dependencies is very error-prone (the maven pax:import-bundle command line, with all its arguments, is a hell per se). I have taken a look at Karaf, which seems to have some nice direct Maven provisioning, but I don't know how to integrate it with my Eclipse, besides using the traditional JPDA bridge. However, it seems to be more production-oriented than dev-oriented, and as such may require heavy configuration to fit my needs (although reading its user manual doesn't reveal that). Have you got any ideas? Some Maven/Eclipse plugins?

    Read the article

  • How do I execute a stored procedure with Vici CoolStorage?

    - by lincolnk
    I'm building an app around Vici CoolStorage (the ASP.NET version). I have my classes created and mapped to my database tables, and can pull a list of all records fine. I've written a stored procedure whose query jumps across databases that aren't mapped with CoolStorage; however, the fields in the query result map directly to one of my classes. The procedure takes one parameter. So, two questions: first, how do I execute the stored procedure? I'm doing this:

        CSParameterCollection collection = new CSParameterCollection();
        collection.Add("@id", id);
        var result = Vici.CoolStorage.CSDatabase.RunQuery("procedurename", collection);

    and getting the exception "Incorrect syntax near 'procedurename'." (I'm guessing this is because it's trying to execute it as text rather than as a procedure?) And second, since the class representing my table is defined as abstract, how do I specify that the result should create a list of MyTable objects instead of generic or dynamic or whatever objects? If I try

        Vici.CoolStorage.CSDatabase.RunQuery<MyTable>(...)

    the compiler yells at me for it being an abstract class.
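
    On the first error, a sketch of a quick test (assuming the backing store is SQL Server, where a bare procedure name sent as a text command produces exactly this kind of syntax error): wrap the call in an EXEC so the string is a valid T-SQL batch regardless of how the API issues it:

        CSParameterCollection collection = new CSParameterCollection();
        collection.Add("@id", id);
        // EXEC makes the string valid as plain text, even when the library
        // does not set CommandType.StoredProcedure under the hood
        var result = Vici.CoolStorage.CSDatabase.RunQuery("EXEC procedurename @id", collection);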

    Read the article

  • How to organize integrity tests and code unit tests?

    - by karlthorwald
    I have several files of code-testing code (using a "unittest" class). Later, I found it would be nice to also test database integrity - things like keys having the correct format, parent and child nodes pointing to each other correctly, and such. I put this into a separate directory tree, using the same unittest class for the integrity tests. Now I wonder if it really makes sense to keep these separate. To test the integrity of the data, I often duplicate parts of the code that I use to test the code that handles the data. But it is not the same: the code tests use test databases (which get deleted after each test), while the integrity tests connect to the live data and analyze it. The integrity tests I want to call from cron, sending an alarm if something goes wrong in the live database. How would you handle that? Are there standards for such a setup? What is your experience? My tendency is to put everything in the same file, which would result in the code tests also being executed by cron on the production environment.

    Read the article

  • Why put a DAO layer over a persistence layer (like JDO or Hibernate)

    - by Todd Owen
    Data Access Objects (DAOs) are a common design pattern, recommended by Sun. But the earliest examples of Java DAOs interacted directly with relational databases - they were, in essence, doing object-relational mapping (ORM). Nowadays, I see DAOs on top of mature ORM frameworks like JDO and Hibernate, and I wonder if that is really a good idea. I am developing a web service using JDO as the persistence layer, and am considering whether or not to introduce DAOs. I foresee a problem when dealing with a particular class which contains a map of other objects:

        public class Book {
            // Book description in various languages, indexed by ISO language codes
            private Map<String, BookDescription> descriptions;
        }

    JDO is clever enough to map this to a foreign key constraint between the "BOOKS" and "BOOKDESCRIPTIONS" tables. It transparently loads the BookDescription objects (using lazy loading, I believe) and persists them when the Book object is persisted. If I were to introduce a "data access layer" and write a class like BookDao, encapsulating all the JDO code within it, then wouldn't JDO's transparent loading of the child objects be circumventing the data access layer? For consistency, shouldn't all the BookDescription objects be loaded and persisted via some BookDescriptionDao object (or a BookDao.loadDescription method)? Yet refactoring in that way would make manipulating the model needlessly complicated. So my question is: what's wrong with calling JDO (or Hibernate, or whatever ORM you fancy) directly in the business layer? Its syntax is already quite concise, and it is datastore-agnostic. What is the advantage, if any, of encapsulating it in Data Access Objects?

    Read the article

  • Copy vector of values to vector of pairs in one line

    - by Kirill V. Lyadvinsky
    I have the following types:

        struct X {
            int x;
            X( int val ) : x(val) {}
        };

        struct X2 {
            int x2;
            X2() : x2() {}
        };

        typedef std::pair<X, X2> pair_t;
        typedef std::vector<pair_t> pairs_vec_t;
        typedef std::vector<X> X_vec_t;

    I need to initialize an instance of pairs_vec_t with values from X_vec_t. I use the following code and it works as expected:

        int main() {
            pairs_vec_t ps;
            X_vec_t xs; // this is not empty in the production code
            ps.reserve( xs.size() );
            { // I want to change this block to one line of code.
                struct get_pair {
                    pair_t operator()( const X& value ) {
                        return std::make_pair( value, X2() );
                    }
                };
                std::transform( xs.begin(), xs.end(), back_inserter(ps), get_pair() );
            }
            return 0;
        }

    What I'm trying to do is reduce my copying block to one line using boost::bind. This code is not working:

        for_each( xs.begin(), xs.end(), boost::bind( &pairs_vec_t::push_back, ps,
            boost::bind( &std::make_pair, _1, X2() ) ) );

    I know why it is not working, but I want to know how to make it work without declaring extra functions and structs.
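
    Two hedged one-liner directions (untested sketches). With a C++11 compiler, a lambda replaces the named functor entirely; pre-C++11, the usual fixes to the boost::bind attempt are naming the make_pair specialization explicitly (so its address can be taken) and wrapping ps in boost::ref (bind otherwise copies the vector and pushes into the copy):

        // C++11: the functor block collapses to a lambda
        std::transform( xs.begin(), xs.end(), std::back_inserter(ps),
                        []( const X& v ) { return std::make_pair( v, X2() ); } );

        // C++03 + Boost: explicit specialization so &make_pair resolves,
        // and boost::ref so push_back acts on ps itself, not on a copy
        std::for_each( xs.begin(), xs.end(),
                       boost::bind( &pairs_vec_t::push_back, boost::ref(ps),
                                    boost::bind( &std::make_pair<X, X2>, _1, X2() ) ) );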

    Read the article

  • Finding usage of jQuery UI in a big ugly codebase

    - by Daniel Magliola
    I've recently inherited the maintenance of a big, ugly codebase for a production website. Poke-your-eyes-out ugly. And though it's big, it's mostly PHP code; it doesn't have much JS besides a few "ajaxy" things in the UI. Our main current problem is that the site is just too heavy. The homepage weighs in at 1.6 MB currently, so I'm trying to clean some stuff out. One of the main wasters is that every single page includes the jQuery UI library, but I don't think it's used at all. It's definitely not being used on the homepage or on most pages, so I want to include it only where necessary. I'm not really experienced with jQuery - I'm more of a Prototype guy - so I'm wondering: is there anything I could search for that'd let me know where jQuery UI is being used? What I'm looking for is "common strings", component names, etc. For example, if this were script.aculo.us, I'd look for things like "Draggable", "Effect", etc. Any suggestions for jQuery UI? (Of course, if you can think of a more robust way of removing the tag from pages that don't use it without breaking everything, I'd love to hear about it.) Thanks!! Daniel
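
    As a starting point, the stock jQuery UI widgets and interactions (as of the 1.7/1.8 releases) are all invoked by method name, so a recursive grep over the scripts and templates for those calls is a reasonable first pass (a sketch; extend the extension list to whatever the codebase actually uses):

        grep -rnE "\.(accordion|autocomplete|button|datepicker|dialog|draggable|droppable|progressbar|resizable|selectable|slider|sortable|tabs|effect)[[:space:]]*\(" \
            --include="*.js" --include="*.php" --include="*.html" .

    Pages with no hits are candidates for dropping the script tag.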

    Read the article

  • Multiple databases with NHibernate

    - by Flint
    Hi, I have two databases: one Oracle 10g and one MySQL. I have configured my web application with NHibernate for Oracle, and now I need to use the MySQL database as well. How can I configure hibernate.cfg.xml so that I can use both databases in the same application? My current hibernate.cfg.xml is:

        <?xml version="1.0" encoding="utf-8" ?>
        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
            <property name="connection.driver_class">NHibernate.Driver.OracleClientDriver</property>
            <property name="connection.connection_string">Data Source=xe;Persist Security Info=True;User ID=hr;Password=hr;Unicode=True</property>
            <property name="show_sql">false</property>
            <property name="dialect">NHibernate.Dialect.Oracle9Dialect</property>
            <!-- mapping files -->
            <mapping assembly="DataTransfer" />
          </session-factory>
        </hibernate-configuration>
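
    For what it's worth, the usual shape of this (a sketch; the file names are hypothetical) is one configuration file - and one ISessionFactory - per database, since a session factory is bound to a single connection string and dialect:

        // built once at application startup, e.g. in Global.asax
        var oracleFactory = new NHibernate.Cfg.Configuration()
            .Configure("hibernate.oracle.cfg.xml")   // the Oracle config above
            .BuildSessionFactory();

        var mysqlFactory = new NHibernate.Cfg.Configuration()
            .Configure("hibernate.mysql.cfg.xml")    // MySqlDataDriver + MySQLDialect
            .BuildSessionFactory();

        // open sessions against whichever database a given unit of work needs
        using (var session = mysqlFactory.OpenSession())
        {
            // ... MySQL work ...
        }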

    Read the article

  • How do I fix a broken connection to DB2 from a web application?

    - by Eddie White
    I support some old web applications: VBScript-based ASP for the UI, and VB6 COM modules for the business and data access layers. Last weekend, I installed DB2 Connect Enterprise Edition v8 fixpack 14 on several Windows 2000 servers, and now one of the web apps errors out on null data when it calls the built-in VBScript function FormatNumber. This numeric data is retrieved by a SQL Server query, but the only way the SQL Server column gets populated is with calculated results returned from a DB2 query earlier in a progression through several pages. When I installed DB2 Connect EE, one of the components loaded was MDAC 2.7. I followed corporate instructions and had the installation save an ODBC System Data Source, which reported a good connection when I tested it after the install. For what it's worth, the project references in the production VB6 modules pointed to MDAC 2.5. I have tried recompiling and deploying new versions of the VB6 modules, referencing MDAC 2.7, to COM on my test server. My development environment is Windows XP Pro, with MDAC 2.8 and DB2 Connect EE v9.5 installed. When I deployed the updated VB6 DLLs, CreateObject failed to instantiate the classes with the error message "The class does not support automation or the requested interface". I've rolled the DB2 Connect install back and have reinstalled v8 of the DB2 runtime client, which was the previous environment. The problem, however, persists.

    Read the article

  • How to deploy SQL Reporting 2005 when Data Sources are locked?

    - by spoulson
    The DBAs here maintain all SQL Server and SQL Reporting servers. I have a custom-developed SQL Reporting 2005 project in Visual Studio that runs fine against my local SQL database and Reporting instances. I need to deploy to a production server, so I had a folder created on a SQL Reporting 2005 server with permissions to upload files. Normally, a deploy from within Visual Studio is all that is needed to upload the report files. However, for security purposes, data sources are maintained explicitly by the DBAs and stored in a separate, locked-down common folder on the reporting server. I had them create the data source for me. When I attempt to deploy from VS, it gives me the error "The item '/Data Sources' already exists." I get this whether I'm deploying the whole project or just a single report file. I have already set OverwriteDataSources=false in the project properties, and the TargetServerURL and folder are verified correct. I suppose I could copy the files manually, but I'd like to be able to deploy from within VS. What could I be doing wrong?

    Read the article

  • Delivery mechanism in Rational ClearCase

    - by kadaba
    Hi All, we came up with a stream structure for the Rational ClearCase UCM model and recently migrated our code into the new setup. We had three physical code bases, and the migration was done this way: we moved the production code first and created a baseline, then the UAT code and created a baseline, and then the development code and created a baseline. As of now, the integration stream has the latest baseline, which is the development baseline. There are two other streams, for PRD and UAT, from which releases are done into the respective environments. Now I have my dev stream: I create an activity and make some changes, and I need to promote these changes into the UAT environment. If I deliver the changes to the integration stream, the merge is done, but on a development baseline. I do not want to rebase it to UAT, as many development apps would get rebased into UAT, which is not desired. How do I achieve promoting changes to the UAT environment (UAT stream)? Kindly advise.

    Read the article

  • Sinatra application running on Dreamhost suddenly not working

    - by jbrennan
    My Sinatra application was running fine on Dreamhost until a few days ago (I'm not sure precisely when it went bad). Now when I visit my app I get this error:

        can't activate rack (~> 1.1, runtime) for ["sinatra-1.1.2"], already activated rack-1.2.1 for []

    I have no idea how to fix this. I've tried updating all my gems, then touching the app/tmp/restart.txt file, but still no fix. I hadn't touched any files of my app, nor my Dreamhost account; it just busted on its own (my guess is DH changed something on their server which caused the bust). When I originally deployed my app, I had to go through some hoops to get it working, and I seem to think I was using gems in a custom location, but I can't remember exactly where or how. I don't know my way around Rack/Passenger very well. Here's my config.ru (mostly grafted from around the web; I don't fully understand it):

        ENV['RACK_ENV'] = 'development' if ENV['RACK_ENV'].empty?

        #### Make sure my own gem path is included first
        ENV['GEM_HOME'] = "#{ENV['HOME']}/.gems"
        ENV['GEM_PATH'] = "#{ENV['HOME']}/.gems:"
        require 'rubygems'
        Gem.clear_paths ## NB! key part

        require 'sinatra'

        set :env, :production
        disable :run

        require 'MY_APP_NAME.rb'

        run Sinatra::Application
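
    A hedged sketch of one thing to try (the version pins mirror the error message above; whether it helps depends on which rack Passenger is loading first): explicitly activate compatible gem versions in config.ru before anything else can activate a conflicting rack:

        require 'rubygems'
        Gem.clear_paths

        # pin versions before "require 'sinatra'" triggers activation of
        # whatever rack Passenger already has on the load path
        gem 'rack', '~> 1.1'
        gem 'sinatra', '1.1.2'

        require 'sinatra'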

    Read the article
