Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • How to get stack trace information for logging in production when using the GAC

    - by Jonathan Parker
    I would like to get stack trace information (file name and line number) for logging exceptions and so on in a production environment. The DLLs are installed in the GAC. Is there any way to do this?

    This article says the following about putting PDB files in the GAC:

        You can spot these easily because they will say you need to copy the debug symbols (.pdb file) to the GAC. In and of itself, that will not work.

    I know this article refers to debugging with VS, but I thought it might apply to logging the stack trace as well. I've followed the instructions in the answer to this question, except for unchecking "Optimize code", which they said was optional. I copied the DLLs and PDBs into the GAC, but I'm still not getting the stack trace information. Here's what I get in the log file for the stack trace:

        OnAuthenticate at offset 161 in file:line:column <filename unknown>:0:0
        ValidateUser at offset 427 in file:line:column <filename unknown>:0:0
        LogException at offset 218 in file:line:column <filename unknown>:0:0

    I'm using NLog. My NLog layout is:

        layout="${date:format=s}|${level}|${callsite}|${identity}|${message}|${stacktrace:format=Raw}"

    ${stacktrace:format=Raw} being the relevant part.

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL Server 2005: one that holds a large amount of static data (SQL Database 1), which is never updated but frequently inserted into, and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements; assume for the following problem that combining them is not practical.

    There are places in SQLDB2 where PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. But because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:)

        SELECT t.Id, t.Name, t2.Company
        FROM SQLDB1.table t
        INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId

    This query results in an eager spool that accounts for 84% of the cost of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; SQL Server doesn't want me to create a constraint based on a view.

    Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie

    Edit: I also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • VSDBCMD deployment for additions to third party databases

    - by Sam
    We have some custom objects (stored procedures etc.) in a SQL Server 2005 database belonging to an ERP system. The custom objects are in different schemas from the ERP objects. We're using Database Edition .dbproj projects and vsdbcmd deployment for all our custom application databases, and we would like to manage our custom objects in the ERP database in the same way. It's not clear how this can be done without either:

    1) Importing all ERP objects (~4000 tables) into the .dbproj and manually keeping them in sync with ERP development. Visual Studio fell over the only time I tried importing these, so I've no idea whether it can actually handle a project of this size.

    2) Somehow excluding the ERP schemas (there are two) from the diff process to ensure they don't get dropped by vsdbcmd. I haven't found any documentation which suggests this is possible. I'm aware of the IgnoreDefaultSchema setting, but there are two schemas I need to ignore, and I'm not comfortable with the 'default schema' approach - deployment by different users could be disastrous.

    Has anyone managed to successfully use .dbproj & vsdbcmd for custom additions to a third-party database? If not, how do you manage SQL source control & deployment?

    Read the article

  • php - disconnecting and connecting to multiple databases

    - by Phil Jackson
    Hi, I want to be able to switch from the current db to multiple dbs through a loop:

        $query = mysql_query("SELECT * FROM `linkedin` ORDER BY id", $CON) or die(mysql_error());
        if (mysql_num_rows($query) != 0) {
            $last_update = time() / 60;
            while ($rows = mysql_fetch_array($query)) {
                $contacts_db = "NNJN_" . $rows['email'];

                // switch to the contacts db
                mysql_select_db($contacts_db, $CON);

                $query = mysql_query("SELECT * FROM `linkedin` WHERE token = '" . TOKEN . "'", $CON) or die(mysql_error());
                if (mysql_num_rows($query) != 0) {
                    mysql_query("UPDATE `linkedin` SET last_update = '{$last_update}' WHERE token = '" . TOKEN . "'", $CON) or die(mysql_error());
                } else {
                    mysql_query("INSERT INTO `linkedin` (email, token, username, online, away, last_update) VALUES ('" . EMAIL . "', '" . TOKEN . "', '" . USERNAME . "', 'true', 'false', '$last_update')", $CON) or die(mysql_error());
                }
            }
            mysql_free_result($query);
        }

        // switch back to your own
        mysql_select_db(USER_DB, $CON);

    It does insert and update details in the other databases, but it also inserts and edits data in the current user's database, which I don't want. Any ideas?

    Read the article

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.)

    Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason for it to do so. Anyone have any ideas on either how to fix that or a better way to handle this? Should I be exporting that first query to a fixture and then re-importing it?

        from django.core.management.base import LabelCommand
        from django.db.utils import IntegrityError
        from django.db import models
        from django.conf import settings

        LIVE_DATABASE_KEY = 'live'

        class Command(LabelCommand):
            help = ("Synchronizes the data between the local machine and the live server")
            args = "APP_NAME"
            label = 'application name'
            requires_model_validation = False
            can_import_settings = True

            def handle_label(self, label, **options):
                # Make sure we're running the command on a developer machine and that we've got the right settings
                db_settings = getattr(settings, 'DATABASES', {})
                if not LIVE_DATABASE_KEY in db_settings:
                    print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                    return
                if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                    print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                    return

                # Fetch all models for the given app
                try:
                    app = models.get_app(label)
                    app_models = models.get_models(app)
                except:
                    print "The app '%s' could not be found or models could not be loaded for it." % label

                for model in app_models:
                    print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)

                    # Query each model from the live site
                    qs = model.objects.all().using(LIVE_DATABASE_KEY)

                    # ...and save it to the local database
                    for record in qs:
                        try:
                            record.save(using='default')
                        except IntegrityError:
                            # Skip as the record probably already exists
                            pass
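
    One thing worth noting: record.save() only writes the model's own table, so rows in the many-to-many join tables are never copied by the loop above and have to be handled explicitly. Below is a minimal, hypothetical sketch of that extra step; it assumes the related objects have already been synced to the local database and that the related managers accept raw primary keys in add():

        def copy_m2m(model, record, source_db=LIVE_DATABASE_KEY):
            # Copy join-table rows for one record after record.save(using='default').
            # Hypothetical sketch -- through models and ordering may need extra handling.
            live_instance = model.objects.using(source_db).get(pk=record.pk)
            for m2m_field in model._meta.many_to_many:
                related_pks = list(
                    getattr(live_instance, m2m_field.name).values_list('pk', flat=True)
                )
                getattr(record, m2m_field.name).add(*related_pks)

    Calling copy_m2m(model, record) right after the record.save(using='default') call would then populate the local join tables.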

    Read the article

  • How to catch HttpRequestValidationException in production

    - by bruno
    Hello all, I have this piece of code to handle the HttpRequestValidationException in my global.asax.cs file:

        protected void Application_Error(object sender, EventArgs e)
        {
            var context = HttpContext.Current;
            var exception = context.Server.GetLastError();

            if (exception is HttpRequestValidationException)
            {
                Response.Clear();
                Response.StatusCode = 200;
                Response.Write(@"<html><head></head><body>hello</body></html>");
                Response.End();
                return;
            }
        }

    If I debug my web application, it works perfectly. But when I put it on our production server, the server ignores it and generates the "a potentially dangerous Request.Form value was detected from the client" error page. I don't know what happens exactly... If anybody knows what the problem is, or what I'm doing wrong..?

    I also don't want to set validateRequest to false in the web.config. The server uses IIS 7.5, and I'm using ASP.NET 3.5.

    Thanks, Bruno

    Read the article

  • Can't install egenix-mx-base on Django production VPS

    - by Shane
    I have been following these instructions for setting up a Django production server with postgres, apache, nginx, and memcache. My problem is that I cannot get egenix-mx-base to install, and without this I cannot get psycopg2 to work, and therefore no database access :(. I am attempting this on a VPS running a clean install of Ubuntu Hardy (8.04) and have followed all instructions on the site to a T. The error message is as follows:

        $ easy_install egenix-mx-base
        Searching for egenix-mx-base
        Reading http://pypi.python.org/simple/egenix-mx-base/
        Reading http://www.egenix.com/products/python/mxBase/
        Reading http://www.lemburg.com/python/mxExtensions.html
        Reading http://www.egenix.com/
        Best match: egenix-mx-base 3.1.3
        Downloading http://downloads.egenix.com/python/egenix-mx-base-3.1.3.tar.gz
        Processing egenix-mx-base-3.1.3.tar.gz
        Running egenix-mx-base-3.1.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-iF7qzl/egenix-mx-base-3.1.3/egg-dist-tmp-laxvcS
        Warning: Can't read registry to find the necessary compiler setting
        Make sure that Python modules _winreg, win32api or win32con are installed.
        In file included from mx/TextTools/mxTextTools/mxte.c:42:
        mx/TextTools/mxTextTools/mxte_impl.h: In function ‘mxTextTools_TaggingEngine’:
        mx/TextTools/mxTextTools/mxte_impl.h:345: warning: pointer targets in initialization differ in signedness
        mx/TextTools/mxTextTools/mxte_impl.h:364: warning: pointer targets in initialization differ in signedness
        mx/URL/mxURL/mxURL.c: In function ‘mxURL_SetFromString’:
        mx/URL/mxURL/mxURL.c:676: warning: pointer targets in initialization differ in signedness
        mx/UID/mxUID/mxUID.c: In function ‘mxUID_Verify’:
        mx/UID/mxUID/mxUID.c:333: warning: pointer targets in passing argument 1 of ‘sscanf’ differ in signedness
        mx/UID/mxUID/mxUID.c: In function ‘mxUID_New’:
        mx/UID/mxUID/mxUID.c:462: warning: pointer targets in passing argument 1 of ‘mxUID_CRC16’ differ in signedness
        error: Setup script exited with error: build/bdist.linux-x86_64-py2.5_ucs4/dumb/egenix_mx_base-3.1.3-py2.5.egg-info: Is a directory

    Thank you to anyone who takes the time to try to help me.

    Read the article

  • production vs dev server content-disposition filename encoding

    - by rgripper
    I am using ASP.NET MVC 3 and downloading the file in the same browser (Chrome 22) in both cases. Here is the controller code:

        [HttpPost]
        public ActionResult Uploadfile(HttpPostedFileBase file) //HttpPostedFileBase file, string excelSumInfoId)
        {
            ...
            return File(
                result.Output,
                "application/vnd.ms-excel",
                String.Format("{0}_{1:yyyy.MM.dd-HH.mm.ss}.xls", "Суммирование", DateTime.Now));
        }

    On my dev machine I download the programmatically created file with the correct name "Суммирование_2012.10.18-13.36.06.xls". Response:

        Content-Disposition:attachment; filename*=UTF-8''%D0%A1%D1%83%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_2012.10.18-13.36.06.xls
        Content-Length:203776
        Content-Type:application/vnd.ms-excel
        Date:Thu, 18 Oct 2012 09:36:06 GMT
        Server:ASP.NET Development Server/10.0.0.0
        X-AspNet-Version:4.0.30319
        X-AspNetMvc-Version:3.0

    From the production server I download a file named after the controller's action plus the correct extension, "Uploadfile.xls", which is wrong. Response:

        Content-Disposition:attachment; filename="=?utf-8?B?0KHRg9C80LzQuNGA0L7QstCw0L3QuNC1XzIwMTIuMTAuMTgtMTMuMzYu?=%0d%0a =?utf-8?B?NTUueGxz?="
        Content-Length:203776
        Content-Type:application/vnd.ms-excel
        Date:Thu, 18 Oct 2012 09:36:55 GMT
        Server:Microsoft-IIS/7.5
        X-AspNet-Version:4.0.30319
        X-AspNetMvc-Version:3.0
        X-Powered-By:ASP.NET

    Web.config files are the same on both machines. Why does the filename get encoded differently for the same browser? Are there any kinds of default settings in web.config, different between the two machines, that I am missing?

    Read the article

  • Why do my CouchDB databases grow so fast?

    - by konrad
    I was wondering why my CouchDB database was growing too fast, so I wrote a little test script. This script changes an attribute of a CouchDB document 1200 times and takes the size of the database after each change. After performing these 1200 writes, the database does a compaction step and the db size is measured again. In the end the script plots the database size against the revision numbers.

    The benchmark is run twice: the first time the default number of document revisions (=1000) is used (_revs_limit); the second time the number of document revisions is set to 1. Each run produces its own plot of size versus revision number (the plots are not included here).

    For me this is quite unexpected behavior. In the first run I would have expected linear growth, as every change produces a new revision. When the 1000 revisions are reached, the size value should be constant, as the older revisions are discarded. After the compaction the size should fall significantly.

    In the second run the first revision should result in a certain database size that is then kept during the following writes, as every new revision leads to the deletion of the previous one. I could understand if there were a little bit of overhead needed to manage the changes, but this growth behavior seems weird to me. Can anybody explain this phenomenon or correct my assumptions that lead to the wrong expectations?
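
    For anyone wanting to reproduce the measurement, here is a minimal sketch against CouchDB's HTTP API. It is hypothetical: it assumes a CouchDB instance on localhost:5984, uses the Python requests library, and reads the disk_size field from the database info document (the field name used by CouchDB 1.x).

        import time
        import requests

        BASE = "http://127.0.0.1:5984"   # hypothetical local CouchDB instance
        DB = "growth_test"

        requests.put("%s/%s" % (BASE, DB))                        # create the database
        # For the second run only: keep a single revision per document.
        requests.put("%s/%s/_revs_limit" % (BASE, DB), data="1")

        doc = {"counter": 0}
        resp = requests.put("%s/%s/doc1" % (BASE, DB), json=doc).json()

        sizes = []
        for i in range(1200):
            doc["counter"] = i
            doc["_rev"] = resp["rev"]                             # update against the latest revision
            resp = requests.put("%s/%s/doc1" % (BASE, DB), json=doc).json()
            sizes.append(requests.get("%s/%s" % (BASE, DB)).json()["disk_size"])

        requests.post("%s/%s/_compact" % (BASE, DB),
                      headers={"Content-Type": "application/json"})
        time.sleep(5)                                             # compaction runs in the background
        print(requests.get("%s/%s" % (BASE, DB)).json()["disk_size"])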

    Read the article

  • Creating an SQL variable character column > 255 characters supporting multiple databases

    - by Piers
    I have an application that stores data through an ODBC data source of the user's choosing. So far it has worked well on a range of database systems (e.g. JET, Oracle, SQL Server), as the SQL syntax is fairly simple. Now I am running into a problem where I need to store more than 255 characters in my strings.

    Previously I created the table using column type VARCHAR(255). Now if I try to create a table using, e.g., VARCHAR(512), then it falls over on Access databases. I know that I can use the MEMO type for Access, but this is non-standard SQL and will thus likely fail on other database systems (e.g. Oracle).

    Is there any widely supported SQL standard for creating text columns wider than 255 characters, or do I need to find another solution? The alternatives seem to me to be:

    1) Profile the database system and customise the SQL CREATE TABLE command based on the database system. I don't like this as it defeats the purpose of using ODBC.

    2) Add extra columns of 255 chars as required (e.g. LONGSTRING1, LONGSTRING2, ...) and concatenate after reading. I don't like this because it means the number of columns can vary between tables and it complicates read/write.

    Are there any other viable alternatives to these two options? Or is it possible to have an SQL-compliant CREATE TABLE command, supported by the majority of database vendors, that supports strings longer than 255 chars?
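
    If alternative 1) ever becomes unavoidable, the profiling step can be kept very small. The sketch below is hypothetical and only illustrative: it uses Python's pyodbc to ask the driver for the DBMS name (SQLGetInfo / SQL_DBMS_NAME) and picks a wide-text type per backend; the exact type names (LONGTEXT for Jet/Access, CLOB for Oracle, VARCHAR(MAX) for SQL Server 2005+) are assumptions that should be verified against each target.

        import pyodbc

        # Map a DBMS name (as reported by the ODBC driver) to a wide-text column type.
        # These mappings are assumptions to be checked per backend.
        WIDE_TEXT_TYPES = {
            "ACCESS": "LONGTEXT",          # Jet/ACE synonym for the Memo type
            "ORACLE": "CLOB",
            "MICROSOFT SQL SERVER": "VARCHAR(MAX)",
        }

        def wide_text_type(conn):
            dbms = conn.getinfo(pyodbc.SQL_DBMS_NAME).upper()
            for key, sql_type in WIDE_TEXT_TYPES.items():
                if key in dbms:
                    return sql_type
            return "VARCHAR(4000)"  # conservative fallback

        conn = pyodbc.connect("DSN=MyDataSource")  # hypothetical DSN
        ddl = "CREATE TABLE notes (id INTEGER, body %s)" % wide_text_type(conn)
        conn.cursor().execute(ddl)
        conn.commit()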

    Read the article

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that requires me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)).

    Background: basically, a previous developer left my codebase in absolute shit; migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing a rake db:test:prepare, as in:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `member_id` int(11) DEFAULT 0 NOT NULL, `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL, `hobbies` text, `sports` text, `AStar_activities` text, `how_know_IRC` varchar(100), `IRC_referral` varchar(200), `IRC_others` varchar(100), `IRC_rdrive` varchar(30)) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not very sure how to write it such that it would loop through my database tables and replace all ENUM column types with VARCHAR.

    References:
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832

    Read the article

  • ASP .NET page runs slow in production

    - by Brandi
    I have created an ASP.NET page that works flawlessly and quickly from Visual Studio. It does a very large database read from a database on our network to load a gridview inside of an update panel, and it displays progress in an Ajax ModalPopupExtender. Of course I don't expect it to be instant, what with the large db reads, but it takes on the order of seconds, not on the order of minutes.

    This is all working great until I put it up on the server - it is very, VERY slow when I access it via the internet, taking several minutes to load the database information into the gridview. I'm baffled why it would not perform exactly the same as it does from Visual Studio. (It is in release mode and I have taken off the debug flag.)

    I have since been trying things like eliminating unneeded update panels and throwing out the Ajax tool. Nothing has made it any faster in production. It is not the database as far as I know, since it has been consistently fast from my computer (from Visual Studio) and consistently slow from the server.

    I am wondering, where do I look next? Has anyone else had this problem before? Could this be caused by update panels or Ajax ModalPopupExtenders in different parts of the application? Why would the live behaviour differ so much from the localhost behaviour? Both the server with the ASP.NET page and the server with the database are servers on our network. I'm using Visual Studio 2008. Thank you in advance for any insight or advice.

    Read the article

  • PHP/MySQL - Working with two databases, one shared and one local to an instance of application

    - by Extrakun
    The situation: using an off-the-shelf PHP application, I have to add a new module for extra functionality. Today, it is made known that eventually four different instances of the application are to be deployed, but the data from the new functionality is to be shared among those 4 instances. Each instance should still have its own database for users, content and so on. So:

    The data for the new functionality goes into a 'shared' database. The data for the application (user login, content, uploads) goes into a 'local' database.

    To make things more complex, the new module I am writing will fetch data from the local DB and the shared DB at the same time. A rewrite of the base application would take too long, and I only have control over the new module which I am writing.

    The ideal solution: is there a way to encapsulate 2 databases under one name using MySQL? I do not wish to switch DB connections or specifically name the DB to query from inside my SQL statements. The application uses a DB wrapper, so I am able to change it somehow so I can invisibly attempt to read/write to two different DBs. What is the best way to handle this problem?
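
    One MySQL-level detail that may be relevant here (offered as a hedged aside, not as a statement about this particular application): if the shared and local databases live on the same MySQL server and are reachable over one connection, a single statement can address both by qualifying table names with the database name, so a wrapper can rewrite table names instead of switching connections. A hypothetical sketch of the idea in Python (all database, table and credential names are made up):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="app", passwd="secret")  # hypothetical credentials

        SHARED_DB = "app_shared"    # data for the new, shared functionality
        LOCAL_DB = "app_instance1"  # this instance's users/content

        cur = conn.cursor()
        # One statement reading from both databases via db.table qualification.
        cur.execute(
            "SELECT u.id, u.name, s.setting_value "
            "FROM %s.users AS u "
            "JOIN %s.shared_settings AS s ON s.user_id = u.id" % (LOCAL_DB, SHARED_DB)
        )
        for row in cur.fetchall():
            print(row)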

    Read the article

  • Moving information between databases

    - by williamjones
    I'm on Postgres, and have two databases on the same machine, and I'd like to move some data from database Source to database Dest.

    Database Source:
        Table User has a primary key
        Table Comments has a primary key
        Table UserComments is a join table with two foreign keys for User and Comments

    Dest looks just like Source in structure, but already has information in the User and Comments tables that needs to be retained. I'm thinking I'll probably have to do this in a few steps.

    Step 1: I would dump Source using the Postgres COPY command.
    Step 2: In Dest I would add a temporary second_key column to both User and Comments, and a new SecondUserComments join table.
    Step 3: I would import the dumped file into Dest using COPY again, with the keys input into the second_key columns.
    Step 4: I would add rows to UserComments in Dest based on the contents of SecondUserComments, only using the real primary keys this time. Could this be done with a SQL command or would I need a script?
    Step 5: Delete the SecondUserComments table and remove the second_key columns.

    Does this sound like the best way to do this, or is there a better way I'm overlooking?
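
    Regarding the question in Step 4: that part can be expressed as a single INSERT ... SELECT run inside Dest, joining the staging join table to the real rows via the temporary second_key columns. Below is a hypothetical sketch (all table and column names are made up and would need to match the real schema), using Python and psycopg2 only as a convenient way to run the statement:

        import psycopg2

        STEP_4_SQL = """
            INSERT INTO user_comments (user_id, comment_id)
            SELECT u.id, c.id
            FROM second_user_comments suc
            JOIN users u    ON u.second_key = suc.user_second_key
            JOIN comments c ON c.second_key = suc.comment_second_key
        """

        conn = psycopg2.connect("dbname=dest")  # hypothetical connection string
        with conn:                              # commits on success, rolls back on error
            with conn.cursor() as cur:
                cur.execute(STEP_4_SQL)
        conn.close()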

    Read the article

  • SqlServer slow on production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server).

    Now, there's this query that when we run it gives these durations (first we clear the cache): 300ms, 20ms, 15ms, 17ms, ... The first time it takes longer, but from then on it is cached.

    At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache; will try this tomorrow): 2500ms, 2600ms, 2400ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause: not enough memory? Fragmentation? Physical storage? I know it can be a lot of things, but maybe some of you have got some info for me on how to go on with this issue...

    Read the article

  • Fetch the most viewed data in databases

    - by Erik Edgren
    I want to get the most viewed photo from the database, but I don't know how I should accomplish this. Here's my SQL at the moment:

        SELECT * FROM photos AS p, viewers AS v
        WHERE p.id = v.id_photo
        GROUP BY v.id_photo

    The tables:

        CREATE TABLE IF NOT EXISTS `photos` (
          `id` int(10) NOT NULL AUTO_INCREMENT,
          `photo_filename` varchar(50) NOT NULL,
          `photo_camera` varchar(150) NOT NULL,
          `photo_taken` datetime NOT NULL,
          `photo_resolution` varchar(10) NOT NULL,
          `photo_exposure` varchar(10) NOT NULL,
          `photo_iso` varchar(3) NOT NULL,
          `photo_fnumber` varchar(10) NOT NULL,
          `photo_focallength` varchar(10) NOT NULL,
          `post_coordinates` text NOT NULL,
          `post_description` text NOT NULL,
          `post_uploaded` datetime NOT NULL,
          `post_edited` datetime NOT NULL,
          `checkbox_approxcoor` enum('0','1') NOT NULL DEFAULT '0',
          PRIMARY KEY (`id`),
          UNIQUE KEY `id` (`id`)
        )

        CREATE TABLE IF NOT EXISTS `viewers` (
          `id` int(10) NOT NULL AUTO_INCREMENT,
          `id_photo` int(10) DEFAULT '0',
          `ipaddress` text NOT NULL,
          `date_viewed` datetime NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `id` (`id`)
        )

    The data in viewers looks like this:

        (1, 85, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25'),
        (2, 84, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25'),
        (3, 85, '3892a0ab97d6ff325f285b27b847070f', '2012-06-21 22:49:25');

    One single row from the photos table, to show what the rows look like:

        (85, 'P1170986.JPG', 'Panasonic DMC-LX3', '2012-06-19 18:00:40', '3968x2232', '10/8000', '80', '50/10', '51/10', '', '', '2012-06-19 18:45:17', '0000-00-00 00:00:00', '0')

    At the moment the SQL only prints the photo with ID 84. In this case it's wrong - it should print out the photo with ID 85. How can I fix this problem? Thanks in advance.
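
    A common way to get the single most-viewed photo is to count the viewer rows per photo and order by that count, rather than relying on GROUP BY alone. Below is a hedged sketch of that query, wrapped in Python/MySQLdb purely to show it in context; the connection details are hypothetical, and selecting p.* alongside GROUP BY p.id relies on MySQL's relaxed GROUP BY handling:

        import MySQLdb

        MOST_VIEWED_SQL = """
            SELECT p.*, COUNT(v.id) AS view_count
            FROM photos AS p
            INNER JOIN viewers AS v ON v.id_photo = p.id
            GROUP BY p.id
            ORDER BY view_count DESC
            LIMIT 1
        """

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="gallery")  # hypothetical
        cur = conn.cursor()
        cur.execute(MOST_VIEWED_SQL)
        print(cur.fetchone())   # with the sample data above, this is the row for photo 85
        cur.close()
        conn.close()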

    Read the article

  • Can't fetch production db results using Google App Engine remote_api

    - by Alon
    Hey, I'm trying to work with /remote_api on a django-patch App Engine app I've got running. I want to select a few rows from my online production app locally, but I can't seem to manage to do so. Everything authenticates fine and it doesn't break on imports, but when I try to fetch something it just doesn't print anything. I placed the test Python script inside my local app dir:

        #!/usr/bin/env python
        #
        import os
        import sys

        # Hardwire in appengine modules to PYTHONPATH
        # or use wrapper to do it more elegantly
        appengine_dirs = ['myworkingpath']
        sys.path.extend(appengine_dirs)

        # Add your models to path
        my_root_dir = os.path.abspath(os.path.dirname(__file__))
        sys.path.insert(0, my_root_dir)

        from google.appengine.ext import db
        from google.appengine.ext.remote_api import remote_api_stub
        import getpass

        APP_NAME = 'Myappname'
        os.environ['AUTH_DOMAIN'] = 'gmail.com'
        os.environ['USER_EMAIL'] = '[email protected]'

        def auth_func():
            return (raw_input('Username:'), getpass.getpass('Password:'))

        # Use local dev server by passing in as parameter:
        #   servername='localhost:8080'
        # Otherwise, remote_api assumes you are targeting APP_NAME.appspot.com
        remote_api_stub.ConfigureRemoteDatastore(APP_NAME, '/remote_api', auth_func)

        # Do stuff like your code was running on App Engine
        from channel.models import Channel, Channel2Operator

        myresults = mymodel.all().fetch(10)
        for result in myresults:
            print result.key()

    It doesn't give any error or print anything, and neither does the remote_api console example Google provides. When I print myresults I get [].

    Read the article

  • Best Practices for Sanitizing SQL inputs Using JavaScript?

    - by Greg Bulmash
    So, with HTML5 giving us local SQL databases on the client side, if you want to write a SELECT or INSERT, you no longer have the ability to sanitize third-party input by saying $buddski = mysql_real_escape_string($tuddski), because the PHP parser and MySQL bridge are far away. It's a whole new world of SQLite where you compose your queries and parse your results with JavaScript.

    But while you may not have your whole site's database go down, the user who gets his/her database corrupted or wiped due to a malicious injection attack is going to be rather upset. So, what's the best way, in pure JavaScript, to escape/sanitize your inputs so they will not wreak havoc with your user's built-in database? Scriptlets? Specifications? Anyone?

    Read the article

  • How to mimic SMPP connection

    - by vijay.shad
    Hi, I have a situation. My application sends out SMSes to end clients; the volume varies from 50,000 to 100,000 SMSes at any point of time. To achieve this functionality I am using Kannel as the SMS sender interface, so my solution is complete - but only in the development environment!

    Before going to production I am supposed to test this solution's environment. Are there any suggestions for creating a testing environment?

    To give more detail on my environment: Kannel uses an SMPP connection to deliver messages to the end client, so I think I need to mimic an SMPP server for Kannel.

    Read the article

  • Can we view objects in the JVM memory?

    - by Sebastien Lorber
    Hey, at work we found that on some instances (particularly the slow ones) we have different behaviour, acquired at reboot. We guess a cache is not initialized correctly, or maybe there is a concurrency problem... Anyway, it's not reproducible in any environment other than production. We don't actually have loggers we can activate... it's an old component...

    Thus I'd like to know if there are tools that can help us see the different objects present in the JVM memory, in order to check the content of the cache... Thank you!

    Read the article

  • How do you deploy your SharePoint solutions?

    - by Lars Mæhlum
    I am now in the process of planning the deployment of a SharePoint solution into a production environment. I have read about some tools that promise an easy way to automate this process, but nothing that seems to fit my scenario.

    In the testing phase I have used SharePoint Designer to copy site content between the different development and testing servers, but this process is manual and it seems a bit unnecessary. The site is made up of SharePoint web part pages with custom web parts, and a lot of Reporting Services report definitions.

    So, is there any good advice out there in this vast land of geeks on how to most efficiently create and deploy a SharePoint site for a multiple-deployment scenario?

    Edit: Just to clarify, I need to deploy several "SharePoint Sites" into an existing site collection. Since SharePoint likes to have its sites in the SharePoint content database, just putting the files into IIS is not an option at this time.

    Read the article

  • Odd ActiveRecord model dynamic initialization bug in production

    - by qfinder
    I've got an ActiveRecord (2.3.5) model that occasionally exhibits incorrect behavior that appears to be related to a problem in its dynamic initialization. Here's the code:

        class Widget < ActiveRecord::Base
          extend ActiveSupport::Memoizable

          serialize :settings

          VALID_SETTINGS = %w(show_on_sale show_upcoming show_current show_past)

          VALID_SETTINGS.each do |setting|
            class_eval %{
              def #{setting}=(val); self.settings[:#{setting}] = (val == "1"); end
              def #{setting}; self.settings[:#{setting}]; end
            }
          end

          def initialize_settings
            self.settings ||= { :show_on_sale => true, :show_upcoming => true }
          end
          after_initialize :initialize_settings

          # All the other stuff the model does
        end

    The idea was to use a single record field (settings) to persist a bunch of configuration data for this object, but allow all the settings to seamlessly work with form helpers and the like. (Why this approach makes sense here is a little out of scope, but let's assume that it does.) Net-net, Widget should end up with instance methods (e.g. #show_on_sale= and #show_on_sale) for all the entries in the VALID_SETTINGS array. Any default values should be specified in initialize_settings.

    And indeed this works, mostly. In dev and staging, no problems at all. But in production, the app sometimes ends up in a state where a) any writes to the dynamically generated setters fail, and b) none of the default values appear to be set - although my leading theory is that the dynamically generated reader methods are just broken. The code, db, and environment are otherwise identical between the three. A typical error message / backtrace on the failure looks like:

        IndexError: index 141145 out of string
        (eval):2:in `[]='
        (eval):2:in `show_on_sale='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `send'
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `attributes='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `each'
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `attributes='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2634:in `update_attributes!'
        ...(then controller and all the way down)

    Ideas or theories as to what might be going on? My leading theory is that something is going wrong in instance initialization wherein the settings attribute is ending up as a string rather than a hash. This explains both the above setter failure (:show_on_sale is being used to index into the string) and the fact that getters don't work (an out-of-bounds [] call on a string just returns nil). But then how and why might settings occasionally end up as a string rather than a hash?

    Read the article

  • Do Websites need Local Databases Anymore?

    - by viatropos
    If there's a better place to ask this, please let me know.

    Every time I build a new website/blog/shopping-cart/etc., I keep trying to do the following:

    1) Extract out common functionality into reusable code (Rubygems and jQuery plugins mostly).

    2) If possible, convert that gem into a small service so I never have to deal with a database for the objects involved (by service, I mean something lean and mean, usually built with the Sinatra web framework with a few core models).

    My assumption is, if I can remove dependencies on local databases, that will make it easier and more scalable in the long run (scalable in terms of reusability and manageability, not necessarily database/performance). I'm not sure if that's a good or bad assumption yet. What do you think?

    I've made this assumption because most serious database/model functionality has been built on the internet somewhere. Just to name a few:

        Social Network API: Facebook
        Messaging API: Twitter
        Mailing API: Google
        Event API: Eventbrite
        Shopping API: Shopify
        Comment API: Disqus
        Form API: Wufoo
        Image API: Picasa
        Video API: Youtube
        ...

    Each of those things is fairly complicated to build from scratch and to make as optimized, simple, and easy to use as those companies have made them. So if I build an app that shows pictures (Picasa) on an event page (Eventbrite), and you can see who joined the event (Facebook events), and send them emails (Google Apps API), and have them fill out monthly surveys (Wufoo), and watch a video when they're done (Youtube), all integrated into a custom, easy-to-use website, and I can do that without ever creating a local database, is that a good thing?

    I ask because there are two things missing from the puzzle that keep forcing me to create that local database:

        Post API
        RESTful/pretty URL API

    While there are plenty of blogging systems and APIs for them, there is no one place where you can just write content and have it be part of some massive thing. For every app, I have to use code for creating pretty/RESTful URLs, and that saves posts. But it seems like that should be a service! Question is, is that what the website is? ...that place to integrate the world's services for my specific cause... and, sigh, to store posts that only my site has access to.

    Will everyone always need "their own blog"? Why not just have a profile and write lots of content on an established platform like StackOverflow or Facebook? ...That way I can write apps entirely without a database and know that I'm doing it right.

    Note: Of course at some point you'd need a database, if you were doing something unique or new. But for the case where you're just rewiring information or creating things like videos, events, and products, is it really necessary anymore??

    Read the article

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-tree: sequential writes cause a maximum number of tree balances to be performed.

    I know very few databases use binary trees, but rather use n-order balanced trees. I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity.

    If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO

        CREATE TABLE [dbo].[T1](
            [ID] [int] NOT NULL
            CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        CREATE TABLE [dbo].[T2](
            [ID] [uniqueidentifier] NOT NULL
            CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC)
        )
        GO

        declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300)

        set @t1 = GETDATE()
        set @i = 1
        while @i < 2000
        begin
            insert into T2 values (NEWID(), @c)
            set @i = @i + 1
        end

        set @t2 = GETDATE()
        WAITFOR delay '0:0:10'
        set @t3 = GETDATE()

        set @i = 1
        while @i < 2000
        begin
            insert into T1 values (@i, @c)
            set @i = @i + 1
        end

        select DATEDIFF(ms, @t1, @t2) AS [Int],
               DATEDIFF(ms, @t3, getdate()) AS [GUID]

        drop table T1
        drop table T2

    Note that I am not subtracting any time for the creation of the GUID nor for the considerably extra size of the row. The results on my machine were as follows:

        Int: 17,340 ms
        GUID: 6,746 ms

    This means that in this test, random inserts of 16 bytes were almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this?

    P.S. I get that this isn't a question. It's an invite to discussion, and that is relevant to learning optimum programming.

    Read the article
