Search Results

Search found 2384 results on 96 pages for 'vb6 migration'.

Page 77 of 96

  • How to use acts-as-commentable-with-threading in Rails

    - by alex
    Hi All, I am developing my first Rails site (yup, I am a Rails idiot). I'm writing a blog, and I got to the comments part. I installed acts-as-commentable-with-threading (GitHub), and I made and ran the migration as the install instructions said. I have added acts_as_commentable to my Posts model and I have a Comments controller. When I add @comment = Comment.build_from(params[:id], 1, params[:body]) I get the error: undefined method `build_from' for #… Clearly I am doing something terribly wrong, and I don't really get the example. What should I be feeding to build_from? Can somebody explain this plugin step by step? :) Or is there an easier way to get simple threaded comments?
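
    A minimal controller sketch (not from the question), assuming the gem's build_from expects the commentable record itself plus a user id and a body rather than a raw params id; the Post lookup, current_user, and parameter names are illustrative placeholders:

      # app/controllers/comments_controller.rb -- hypothetical example
      class CommentsController < ApplicationController
        def create
          post = Post.find(params[:post_id])   # load the record being commented on first
          @comment = Comment.build_from(post, current_user.id, params[:body])
          if @comment.save
            redirect_to post
          else
            render :action => 'new'
          end
        end
      end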

    Read the article

  • Running Visual Studio 2005, 2008, and 2010 on the same system.

    - by thelsdj
    I have around 50 Visual Studio 2005 projects that I am building a new development machine for, and I'd like to slowly move those projects to VS 2008 while also having 2010 available for select new projects. Can this work? Are there any gotchas for this sort of setup? Any general advice for running multiple versions of Visual Studio on the same system would be greatly appreciated, specifically about managing a controlled migration of projects to newer versions while being able to selectively keep some on old versions.

    Read the article

  • What to look for in estimating a PowerBuilder Conversion Project?

    - by tekiegreg
    Hi there, I've been trying to write a spec for a PowerBuilder 9 to 11.5 migration of a relatively complex application. Granted, PowerBuilder is not really my specialty, so I'm having issues trying to justify an estimate for this part of the project (and the PowerBuilder people I've been talking with have had some personal issues lately and are out of communication). These are some of the metrics that we have seen and can evaluate:
    - PBL files
    - Main windows
    - Data Windows
    - Functions (no, we don't have the source available on this project)
    What metrics in particular are helpful, and how long would any given "unit" such as a Data Window take?

    Read the article

  • Core Data store corruption

    - by sehugg
    A handful of customers for my iPhone app are experiencing Core Data store corruption (I assume so, since the error is "Failed to save to data store: Operation could not be completed. (Cocoa error 259.)"). Has anyone else experienced this kind of store corruption? I am worried since I aim to push an update soon that performs a schema migration, and I fear this will expose even more problems. I had assumed that the Core Data/SQLite APIs use atomic operations and are immune to corruption except when the underlying filesystem experiences corruption. Is there a way to reduce or prevent corruption, or at least a good way to reproduce it (I have been unsuccessful thus far)?

    Read the article

  • SQLServer Binary Data with ActiveRecord and JDBC

    - by John Duff
    I'm using the activerecord-jdbc-adapter with ActiveRecord to access a SQL Server database from a Rails application running under JRuby, and I am having trouble inserting binary data. The exception I am getting is below; note that I just have a blurb for the binary data from the fixtures, which was working fine for MySQL.
      ActiveRecord::StatementInvalid: ActiveRecord::ActiveRecordError: Operand type clash: nvarchar is incompatible with image:
      INSERT INTO blobstorage_datachunks ([id], [datafile_id], [chunk_number], [data]) VALUES (369397133, 663419003, 0, N'GIF89a@')
    When I created the tables, the migration declared the column as binary and SQL Server used image instead. We're using Rails 2.3.5 and SQL Server Express 2008. What I'm looking for is a way to get the binary data into SQL Server with ActiveRecord. Thanks in advance for the help.
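
    A hypothetical workaround sketch (not from the question, and untested against this adapter): Rails migrations can pass a raw SQL type string through t.column, so the column could be declared as varbinary(max) instead of letting the adapter map :binary to image; whether that avoids the nvarchar/image clash depends on the adapter version. The table layout below just mirrors the columns in the error message.

      # db/migrate/xxx_create_blobstorage_datachunks.rb -- illustrative sketch only
      class CreateBlobstorageDatachunks < ActiveRecord::Migration
        def self.up
          create_table :blobstorage_datachunks do |t|
            t.integer :datafile_id
            t.integer :chunk_number
            t.column  :data, 'varbinary(max)'   # raw SQL Server type instead of t.binary
          end
        end

        def self.down
          drop_table :blobstorage_datachunks
        end
      end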

    Read the article

  • Ruby on Rails link_to syntax

    - by mizipzor
    After following a tutorial I've found, I'm now redoing it again, without the scaffolding part, to learn it better. However, editing my \app\views\home\index.html.erb to contain:
      <h1>Rails test project</h1>
      <%= link_to "my blog", posts_path %>
    I get an error:
      undefined local variable or method `posts_path' for #<ActionView::Base:0x4e1d954>
    Before I did this, I ran rake db:create, defined a migration class and ran rake db:migrate, everything without a problem, so the database should contain a posts table. But that link_to call can't seem to find posts_path. That variable (or is it even a function?) is probably defined through the scaffold routine. My question now is: how do I define posts_path manually myself?
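
    A minimal sketch of where posts_path comes from: it is a named route helper generated by a resource route in config/routes.rb (scaffolding adds this line for you), not something created by the migration. Assuming a Rails 2.x app:

      # config/routes.rb
      ActionController::Routing::Routes.draw do |map|
        map.resources :posts   # defines posts_path, new_post_path, post_path(@post), ...
      end

    In Rails 3 and later the equivalent is resources :posts inside the routes.draw block.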

    Read the article

  • Get In That DB! Parsing CSV Using Ruby...

    - by keruilin
    I have a CSV file formatted just like this:
      name,color,tasty,qty
      apple,red,true,3
      orange,orange,false,4
      pear,greenish-yellowish,true,1
    As you can see, each column in the Ruby OO world represents a different type: string, string, boolean, int. Now, ultimately, I want to parse each line in the file, determine the appropriate type, and insert that row into a database via a Rails migration. For example: Fruit.create(:name => 'apple', :color => 'red', :tasty => true, :qty => 3) Help!
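
    A minimal sketch, assuming Ruby 1.9's standard CSV library (FasterCSV on 1.8 offers the same interface) and a seeds-style loop rather than a formal migration; the coercion rules below are illustrative:

      require 'csv'

      # Read each data row with the header row as keys, coerce the values, create a record.
      CSV.foreach('fruits.csv', :headers => true) do |row|
        Fruit.create(
          :name  => row['name'],
          :color => row['color'],
          :tasty => row['tasty'] == 'true',   # string -> boolean
          :qty   => row['qty'].to_i           # string -> integer
        )
      end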

    Read the article

  • iPhone: Core Data: Updating a pre-filled database in future app versions.

    - by Cuzog
    I am creating an app with a database of information that needs to be pre-filled. This data will change in future versions. In the same database, I also need to store user-editable information, since that user-edited data directly relates to the pre-filled data. My question is, if I'm pre-filling the database by creating a duplicate data model in a second app and copying over the Core Data file before release, how would I handle updates to that data in future versions of the app without destroying the user's existing data? Do the Core Data migration methods handle this, or must I write custom methods to programmatically handle the merge at first app launch?

    Read the article

  • pylint ignore by directory

    - by Ciantic
    The following is from the pylint docs:
      --ignore=<file>
      Add <file or directory> to the black list. It should be a base name, not a path. You may set this option multiple times. [current: %default]
    Yet I'm not having any luck getting the directory part to work. I have a directory called migrations, which holds django-south migration files. When I pass --ignore=migrations it still keeps giving me errors/warnings for files inside the migrations directory. Could it be that --ignore is not working for directories? If I could even use a regexp to match the ignored files it would work, since the django-south files are all named 0001_something, 0002_something...

    Read the article

  • Import OLE Object from Access to MySQL

    - by SecretDeveloper
    I have a table in an Access database which contains product entries; one of the columns has a JPG image stored as an OLE Object. I am trying to import this table into MySQL but nothing seems to work. I have tried the MySQL migration tool, but that has a known issue with Access and OLE Objects (the issue being it doesn't work and leaves the fields blank). I also tried the suggestion on this site, and while the data is imported it seems as though the image is getting corrupted in the transfer. When I try to preview the image I just get a binary view; if I save it on disk as a JPG image and try to open it, I get an error stating the image is corrupt. The images in Access are fine and can be previewed. Access is storing the data as an OLE Object and when I import it to MySQL it is saved in a MEDIUMBLOB field. Has anyone had this issue before, and how did they resolve it?

    Read the article

  • Process taking much longer on Itanium server

    - by Vishwa
    Hi, we have an application which was recently migrated from a PA-RISC server to an Itanium server. After the migration we noticed a significant increase in the time taken to complete a process. When we tracked the time taken by each part, we found that the system time is 7.590000 and the user time is 3.990000, but the elapsed time is 70.434882! Due to this huge elapsed time, the overall performance of the application has dropped sharply. I wrote some small programs to perform I/O operations on the Itanium; these run faster on Itanium compared to PA-RISC. What could be the reason for the high elapsed time of this process?

    Read the article

  • Question about migrating from ASP .NET User Controls to .NET 3.5 Master Page technology

    - by Jim McFetridge
    When migrating from an ASP.NET user control-based page with a header, footer, and menu to a Master Page using the same HTML mark-up, is it normal for CSS or JavaScript behaviors to change slightly? In particular, the submenu bar text now appears run together (which looks like a CSS symptom) and the graphics on the line above it appear to have an incorrect z-order. (The menu operation is JavaScript-based.) (I tried to paste images here but couldn't.) Also, the site is very large and we've not been given permission to redo the menu for the entire site; this is a forward-only migration. (Because I know that someone will ask.) Assuming that there are no changes in scope, what are the things that I should check? Thanks! Jim

    Read the article

  • Robots Crawling Across Namespace?

    - by Codex73
    I migrated a site from one domain to another and placed a permanent redirect on the old account. My stats logs are capturing this:
      Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) /libro_metaboforte_chap5.php/members/members/file_chap6.php
    I put the robots.txt below in place, which wasn't present at the time of migration.
    robots.txt contents:
      User-agent: *
      Allow: /
      Disallow: /members/
      Disallow: /includes/
    .htaccess file contents:
      DirectoryIndex index.php index.html
      Options +FollowSymlinks
      RewriteEngine On   # Turn on the rewriting engine
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_URI} !^/store/?$
      RewriteCond %{QUERY_STRING} !.
      RewriteRule ^.+/?$ index.php [QSA,L]
      RewriteCond %{QUERY_STRING} ^curlang=([a-z]*)$
      RewriteRule ^.+/?$ index.php? [QSA,L]
    I will continue to log incoming bot captures. My .htaccess does rewrites; I just added the robots file. The funny part is that the bot is stepping into doubled directories. I don't know whether the problem was not having robots.txt in place or the .htaccess doing the rewrites.

    Read the article

  • Sync GIT and ClearCase

    - by Senthil A Kumar
    I am currently working on ClearCase and we are now migrating to Git. We need this migration done in a way that all work is done in Git and the data is synced back to the ClearCase stream. We will have the same branch names and stream names in both Git and CC, so scripting shouldn't be a problem. The question is: which is the best model to keep CC and Git in sync?
    - Have all the VOBs in CC as a single repo in Git, and have the major streams in CC as branches in Git (single Git repo for the VOBs, many branches for the CC streams). This takes up less space, as the VOBs are kept as a single repo with many branches.
    - Have the important CC branches as independent Git repositories, with each repository holding all the CC VOBs (one Git repo per CC branch). This takes up a lot of space, as the VOBs are replicated across repositories.
    Which do you think is the best way to keep it in sync with ClearCase?

    Read the article

  • Facing a problem with libxml.rb

    - by Rakesh
    Hi folks, I downloaded one of the Rails apps from the server and tried running it. I faced an error on migration:
      D:\Radiant\trunk> rake db:migrate
      (in D:/Radiant/trunk)
      rake aborted!
      126: The specified module could not be found. - D:/Radiant/trunk/config/../vendor/plugins/libxml-ruby/lib/libxml_ruby.so
    I got the Windows version of libxml_ruby.so and tried again, and got an error saying "libxml2.dll" was not found. Then I checked the file libxml.rb; it says the following: if running on Windows, add the current directory to the PATH for the current process so it can find the pre-built libxml2 and iconv2 shared libraries (DLLs). Can anybody explain to me what this actually means? Regards, Rakesh S

    Read the article

  • Ruby gems gone after JRuby install

    - by James
    Today I installed JRuby by downloading it, extracting it to /home/james/jruby-1.4.0 and adding the following lines to .bashrc:
      export JRUBY_HOME=/home/james/jruby-1.4.0
      export PATH=$JRUBY_HOME/bin:$PATH
    I then installed some JRuby gems via jruby -S gem install ... JRuby works fine, but this seems to have caused two problems:
    1) When I try to run a Ruby (not JRuby) on Rails migration, I see:
      Missing the Rails gem. Please `gem install -v= rails`, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.
    2) When I do gem list --local, I only see the gems that I've installed for JRuby.
    Launching web applications via ruby script/server succeeds without any warnings. Please help.

    Read the article

  • Globalize2 and migrations

    - by diegogs
    Hi, I have used globalize2 to add i18n to an old site. There is already a lot of content in Spanish, but it isn't stored in the globalize2 tables. Is there a way to convert this content to globalize2 with a migration in Rails? The problem is that I can't access the stored content:
      >> Panel.first
      => #<Panel id: 1, name: "RT", description: "asd", proje....
      >> Panel.first.name
      => nil
      >> I18n.locale = nil
      => nil
      >> Panel.first.name
      => nil
    Any ideas?
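
    A hypothetical migration sketch (not from the question), assuming globalize2 overrides the attribute readers while read_attribute still returns the raw column value, so the existing Spanish content can be copied into the translation tables; the attribute names are taken from the console output above:

      class MovePanelContentToTranslations < ActiveRecord::Migration
        def self.up
          I18n.locale = :es   # the existing content is Spanish
          Panel.all.each do |panel|
            # read_attribute bypasses the globalize2 accessors and reads the old columns
            panel.name        = panel.read_attribute(:name)
            panel.description = panel.read_attribute(:description)
            panel.save!
          end
        end

        def self.down
          # no-op: the original columns are left untouched
        end
      end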

    Read the article

  • Where are TFS alerts stored in the TFS databases? Receiving duplicate alerts after upgrade from 2008 to 2010

    - by MJ Hufford
    I recently performed a migration-upgrade from TFS 2008 to TFS 2010. Almost everything is working properly now; however, our team is getting duplicate emails. I'm guessing this is because I used the TFS 2008 Power Tools to set up alerts. After the upgrade, I installed the TFS 2010 Power Tools and noticed that there were no alerts configured, so I set up new alerts, and now we get duplicates. Is it possible the old alerts configuration is floating around in the database somewhere?

    Read the article

  • Rails: date type and GETDATE()

    - by cbrulak
    This is a follow-up to this question: http://stackoverflow.com/questions/2930256/unique-responses-rails-gem I'm going to create an index based on the user id, URL and a date column. I want a date type (not a datetime type) because I want the day, the 24-hour day, to be part of the index to avoid duplicating page view counts on the same day. In other words: a view only counts once per day per visitor. I also want the default value of that column (viewdate) to be the function GETDATE(). This is what I have in my migration:
      execute "ALTER TABLE page_views ADD COLUMN viewdate datetime DEFAULT GETDATE()"
    But the viewdate value is always empty. What am I missing? (As an aside, any other suggestions for accomplishing this goal?)
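
    A hypothetical alternative sketch (not from the question): instead of relying on a database-side function default, the migration can add a plain date column plus a unique index, and the model can fill in the day before creation; the table, column and model names are illustrative.

      # Migration: a plain date column, de-duplicated per user/url/day by a unique index.
      class AddViewdateToPageViews < ActiveRecord::Migration
        def self.up
          add_column :page_views, :viewdate, :date
          add_index  :page_views, [:user_id, :url, :viewdate], :unique => true
        end

        def self.down
          remove_index  :page_views, :column => [:user_id, :url, :viewdate]
          remove_column :page_views, :viewdate
        end
      end

      # Model: set the day when the record is created.
      class PageView < ActiveRecord::Base
        before_create { |view| view.viewdate ||= Date.today }
      end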

    Read the article

  • Any way of working with Eclipse WTP that does not mean redeploying the _WHOLE_ application when a JSP is saved?

    - by Thorbjørn Ravn Andersen
    I have migrated a web application from MyEclipse to Eclipse WTP, and I am now in the middle of the first major upgrade to the code base and web pages after the migration. It is frankly driving me mad that saving a JSP page causes a redeployment of the WHOLE application, as it takes time and my backend connection does not survive the serialization-deserialization of the session object (which is non-trivial to fix). In addition, the JSP editor is insanely slow, so I frequently have to pause to let the editor catch up to be certain where my edits go in a small JSP using JavaServer Faces. Disabling validation did not help. The Eclipse Dynamic Web Project depends on several library Eclipse projects, so I cannot just tell e.g. Jetty to use the WebRoot folder, as several dependencies would then be missing from the classpath. The question is: is there a way of working - ANY way of working - with the Eclipse WTP system that does NOT imply redeploying everything every time any file is saved? I can use Tomcat 5.5 or Jetty 6 as servers.

    Read the article

  • What is the compatibility of .NET 4.0?

    - by Juan Manuel Formoso
    We have several .NET applications developed in .NET 3.5 (Windows services, web applications, and WCF services) on different servers. I'd like to migrate to .NET 4.0 and use VS.NET 2010. Does VS.NET 2010 compile to .NET 3.5, so that I can avoid a full simultaneous migration and stop using VS.NET 2008 while maintaining some applications on the previous version? Can I uninstall the .NET < 4.0 runtimes and have only .NET 4.0 on my servers? Does it run applications compiled against previous framework versions?

    Read the article

  • NSEntityDescription entityForName returning nil

    - by Kamchatka
    Hi, I made some changes to my model (but I don't want a migration yet, so I just removed the application, built clean, etc.) and my application works in the simulator. However, when I run it on the iPhone, I get the following error:
      *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'executeFetchRequest:error: A fetch request must have an entity.'
    I set the entity like this:
      NSEntityDescription *entity = [NSEntityDescription entityForName:@"Document" inManagedObjectContext:managedObjectContext];
    My managedObjectContext is not nil, but I suspect that it doesn't load the object model correctly or something similar, because if I display the entities in the model, the list is empty. How can I make sure the model is loaded? Thanks,

    Read the article

  • How do you use pip, virtualenv and Fabric to handle deployment?

    - by e-satis
    What are your settings, your tricks, and above all, your workflow? These tools are great, but there are still no best practices attached to their usage, so I don't know what the most efficient way is.
    - Do you use pip bundles or always download?
    - Do you set up Apache/Cherokee/MySQL by hand, or do you have a script for that?
    - Do you put everything in virtualenv and use --no-site-packages?
    - Do you use one virtualenv for several projects?
    - What do you use Fabric for (which parts of your deployment do you script)?
    - Do you put your Fabric scripts on the client or the server?
    - How do you handle database and media file migration?
    - Do you even need a build tool such as SCons?
    - What are the steps of your deployment? How often do you perform each of them?
    etc.

    Read the article

  • What are good Python and/or Django deployment solutions?

    - by e-satis
    For now I use a mix of virtualenv, pip and Fabric. This allows me to: install required libs; generate dynamic content; isolate the installation; and push everything through SSH. It works well; I just want to know if there are other tools around. The only problem I can think of is that it's a lot to set up every time. It doesn't solve database/media file migration issues either, but maybe I should just open another question for that specific subject. Finally, I don't know how to automate the server setup. I'd love to have a tool to let me configure Apache/Lighttpd/Cherokee and MySQL automatically. Related: How can Django projects be deployed with minimal installation work?

    Read the article

  • Help required in adding new methods and properties to existing classes dynamically

    - by Bepenfriends
    Hi All, I am not sure whether it is possible to achieve this kind of implementation in .NET. Below is the background. Currently we have an application built with COM+, ASP, XSL and XML. It is a multi-tier application in which COM+ acts as the BAL. The execution steps for any CRUD operation are defined using a separate UI which stores the information as XML; the BAL reads the XML, understands the execution steps defined there, and executes the corresponding methods in a DLL. Much like the EDM, we have a custom model (in XML) which determines which properties of an object are searchable, retrievable, etc. Based on this information the BAL constructs queries and calls procedures to get the data. In the current application both the BAL and the DAL are heavily customizable without any code change, and the results are transmitted to the presentation layer as XML, which builds the UI based on the data received.
    Now I am creating a migration project which deals with employee information. It is also going to follow the n-tier architecture, in which the presentation layer communicates with the BAL, which connects to the DAL to return the data.
    Here is the problem. In our existing version we handle all information as XML in its native form (no conversion to objects, etc.), but in the migration project the team is really interested in utilizing the OOP model of development, where everything sent from the BAL needs to be converted to objects of its respective type (for example EmployeeCollection, AddressCollection, etc.). If the data returned from the BAL were static, we could have a class which contains those nodes as properties and access them. But in our case the data returned from the BAL can be customized. How can we handle that customization in the presentation layer, which converts the result to an object? Below is an example of the XML returned:
      <employees>
        <employee>
          <firstName>Employee 1 First Name</firstName>
          <lastName>Employee 1 Last Name</lastName>
          <addresses>
            <address>
              <addressType>1</addressType>
              <StreetName>Street name1</StreetName>
              <RegionName>Region name</RegionName>
            </address>
            <address>
              <addressType>2</addressType>
              <StreetName>Street name2</StreetName>
              <RegionName>Region name</RegionName>
            </address>
            <address>
              <addressType>3</addressType>
              <StreetName>Street name3</StreetName>
              <RegionName>Region name</RegionName>
            </address>
          </addresses>
        </employee>
        <employee>
          <firstName>Employee 2 First Name</firstName>
          <lastName>Employee 2 Last Name</lastName>
          <addresses>
            <address>
              <addressType>1</addressType>
              <StreetName>Street name1</StreetName>
              <RegionName>Region name</RegionName>
            </address>
            <address>
              <addressType>2</addressType>
              <StreetName>Street name2</StreetName>
              <RegionName>Region name</RegionName>
            </address>
          </addresses>
        </employee>
      </employees>
    If these were the only columns, I could write classes like:
      public class Address
      {
          public int AddressType { get; set; }
          public string StreetName { get; set; }
          public string RegionName { get; set; }
      }

      public class Employee
      {
          public string FirstName { get; set; }
          public string LastName { get; set; }
          public AddressCollection AddressCollection { get; set; }
      }

      public class EmployeeCollection : List<Employee>
      {
          public bool Add(Employee data)
          {
              ....
          }
      }

      public class AddressCollection : List<Address>
      {
          public bool Add(Address data)
          {
              ....
          }
      }
    These classes will be provided to customers and consultants as DLLs; we will not provide the source code.
    Now when the consultants or customers do customizations (for example adding a country to the address, or attaching a passport information object to the employee), they must be able to access those properties in these classes, but without the source code they will not be able to make those modifications, which makes the application useless. Is there any way to accomplish this in .NET? I thought of using anonymous classes, but the problem with anonymous classes is that they cannot have methods, and I am not sure how I could fit in the collection objects (which would in turn be anonymous classes), or how data grid / user control binding would work. I also thought of using CodeDOM to create the classes at runtime, but I am not sure about the memory and performance issues; also, the classes must be created only once and reused until there is another change. Kindly help me out with this problem. Any kind of help material / cryptic code / links will be helpful.

    Read the article
