Search Results

Search found 7957 results on 319 pages for 'production databases'.


  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic in some existing classes. It is not possible to refactor the classes at present, as they are very complex and already in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock, so I want to set up a behaviour for just that secondary method call. But when I set up the behaviour for the method, the real code of the method is invoked and fails. Am I missing something, or is this just not possible to test without refactoring the class? I have tried all the different mock types (Strict, Stub, Dynamic, Partial etc.), but they all end up calling the real method when I try to set up the behaviour.

        using System;
        using MbUnit.Framework;
        using Rhino.Mocks;

        namespace MMBusinessObjects.Tests
        {
            [TestFixture]
            public class PartialMockExampleFixture
            {
                [Test]
                public void Simple_Partial_Mock_Test()
                {
                    const string param = "anything";

                    // set up mocks
                    MockRepository mocks = new MockRepository();
                    var mockTestClass = mocks.StrictMock<TestClass>();

                    // record behaviour *** actually calls into the real method ***
                    Expect.Call(mockTestClass.MethodToMock(param)).Return(true);

                    // never get to here
                    mocks.ReplayAll();

                    // this is what I want to test
                    Assert.IsTrue(mockTestClass.MethodIWantToTest(param));
                }

                public class TestClass
                {
                    public bool MethodToMock(string param)
                    {
                        // some logic that is very hard to mock
                        throw new NotImplementedException();
                    }

                    public bool MethodIWantToTest(string param)
                    {
                        // this method calls the method to mock
                        if (MethodToMock(param))
                        {
                            // some logic I want to test
                        }
                        return true;
                    }
                }
            }
        }
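    A minimal sketch of the usual way around this, assuming the class can tolerate marking the secondary method virtual: Rhino Mocks builds a dynamic subclass proxy, so it can only intercept virtual (or interface) members, and a non-virtual method will always run its real body, even while recording. With a virtual method, a partial mock keeps the real behaviour everywhere except the stubbed call:

        public class TestClass
        {
            // virtual, so the proxy can intercept it
            public virtual bool MethodToMock(string param)
            {
                throw new NotImplementedException(); // real logic never runs
            }

            public bool MethodIWantToTest(string param)
            {
                if (MethodToMock(param))
                {
                    // logic under test
                }
                return true;
            }
        }

        [Test]
        public void Partial_Mock_Example()
        {
            MockRepository mocks = new MockRepository();
            // PartialMock: unstubbed members keep their real implementation
            var mockTestClass = mocks.PartialMock<TestClass>();
            Expect.Call(mockTestClass.MethodToMock("anything")).Return(true);
            mocks.ReplayAll();

            Assert.IsTrue(mockTestClass.MethodIWantToTest("anything"));
            mocks.VerifyAll();
        }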

    Read the article

  • Advanced tasks using Web.Config transformation

    - by dcadenas
    Does anyone know if there is a way to "transform" specific sections of a value, instead of replacing the whole value or attribute? For example, I've got several appSettings entries that specify the URLs for different web services. These entries are slightly different in the dev environment than in the production environment; some differences are less trivial than others.

        <!-- DEV ENTRY -->
        <appSettings>
            <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.dev.domain.com/v1.2.3.4/entryPoint.asmx" />
            <add key="serviceName2_WebsService_Url" value="http://ma1-lab.lab1.domain.com/v1.2.3.4/entryPoint.asmx" />
        </appSettings>

        <!-- PROD ENTRY -->
        <appSettings>
            <add key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" />
            <add key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" />
        </appSettings>

    So far, I know I can do something like this in the Web.Release.config:

        <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" key="serviceName1_WebsService_Url" value="http://wsServiceName1.prod.domain.com/v1.2.3.4/entryPoint.asmx" />
        <add xdt:Locator="Match(key)" xdt:Transform="SetAttributes(value)" key="serviceName2_WebsService_Url" value="http://ws.ServiceName2.domain.com/v1.2.3.4/entryPoint.asmx" />

    However, every time the version for one of these web services is updated, I would have to update the Web.Release.config as well, which defeats the purpose of simplifying my web.config updates. I know I could also split that URL into different sections and update them independently, but I'd rather have it all in one key. I've looked through the available web.config transforms, but nothing seems to be geared towards what I am trying to accomplish. These are the websites I am using as a reference: Vishal Joshi's blog, MSDN Help, and a Channel9 video. Any help would be much appreciated! -D
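    For what it's worth, XDT transforms operate on whole attribute values; there is no built-in transform that rewrites only part of a value. A sketch of one compromise, under the assumption that splitting the setting is acceptable after all: keep the environment-specific host in a key the transform rewrites, keep the frequently changing version in a key that is identical in every config, and compose the URL in code (the key names below are illustrative, not from the post):

        using System.Configuration;

        // host differs per environment (transformed); version is shared
        string host = ConfigurationManager.AppSettings["serviceName1_Host"];
        string version = ConfigurationManager.AppSettings["serviceName1_Version"];
        string url = string.Format("http://{0}/{1}/entryPoint.asmx", host, version);

    This keeps the Web.Release.config stable across version bumps, at the cost of the single-key layout.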

    Read the article

  • Appropriate SQL Server Permissions for Developers

    - by BJ Safdie
    After a couple of Google searches and a quick look at questions here, I cannot seem to find what I thought would be a cookbook answer for SQL Server permissions. As I often see in small shops, most developers here were using an admin account for SQL Server while developing. I want to set up roles and permissions that I can assign to developers so that we can get our jobs done, but also do so with the minimum permissions required. Can anyone offer advice on what SQL Server permissions to assign?

    Components:
        - SQL Server 2008
        - SQL Server Reporting Services (SSRS) 2008
        - SQL Server Integration Services (SSIS) 2008

    Platforms:
        - Production
        - Staging/QA
        - Development/Integration

    We are running "Mixed Mode" security because of some legacy apps and networks, but are moving to Windows Auth. I am not sure if that really affects the role setup. I plan to set up access for developers to the Prod and Staging/QA DBs as read-only; however, I still want developers to retain the ability to run profiling. We need deployment accounts with higher privilege levels, and we are currently trying to figure out exactly what privileges we need for SSIS package deployments. Within the development server, developers need broad privileges; however, I am not sure that just making them all admins is really the best choice.

    It's hard to believe that no one has published a decent example script that sets up these kinds of roles with a good set of appropriate permissions for developers and deployers. We can probably figure this all out by locking things down and then adding permissions as we discover the need, but that will be way too big a PITA for everyone. Can anyone point me to, or provide, a good exemplar for permissions for these kinds of roles on these kinds of platforms?
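    Not a vetted policy, just a starting-point sketch with illustrative names. Read-only access on Prod/QA can come from the built-in db_datareader role, while profiling is the server-level ALTER TRACE permission in SQL Server 2008, granted to the login rather than to a database user:

        -- in each Production / Staging database:
        EXEC sp_addrolemember 'db_datareader', 'DOMAIN\SomeDeveloper';

        -- at the server level, to allow running Profiler traces:
        GRANT ALTER TRACE TO [DOMAIN\SomeDeveloper];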

    Read the article

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables, and I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR

        SET @table_cursor = CURSOR FAST_FORWARD
        FOR SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query

            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints a statement like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine, but when the script tries to run it with exec I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'.
        No entry found with that name. Make sure that the name is entered correctly.

    Hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
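    The likely culprit is the EXEC call itself: without parentheses, EXEC treats the contents of the variable as the name of a stored procedure to run, not as a batch to execute, which is why SQL Server goes looking for an entry it cannot find. A sketch of the two standard fixes:

        -- wrap the variable in parentheses to execute it as dynamic SQL:
        EXEC (@query);

        -- or, generally preferred (supports parameters and plan reuse):
        EXEC sp_executesql @query;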

    Read the article

  • SQLite for personal use

    - by ALife
    What are the applications, for your personal use, that need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I am planning to try, since I know next to nothing about databases except some simple programming years ago. I have learned that SQLite is good for personal use, but embarrassingly I do not see any application for it except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my references and soft copies of books, and I feel iTunes' music/media management is OK, since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that, and I pretty much use .txt files (sometimes in the asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

    Read the article

  • Selecting a whole database over an individual table to output to file

    - by Daniel Wrigley
    :::::::: EDIT ::::::::

    New code for people to have a look at. One question I have with this is: where do I set where the *.gz file is saved?

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";
        system($command);

    Also, why the hell can you not reply to your own post to answer it? :(

    :::::::: EDIT ::::::::

    OK, I'm having trouble finding out how to select a full database for backup as an *.sql file, rather than only an individual table. On the localhost I have several databases, with one named "foo", and it is that which I want to back up, not any of the individual tables inside the database "foo".

    The code to connect to the database:

        //Database Information
        $dbhost = "localhost";
        $dbname = "foo";
        $dbuser = "bar";
        $dbpass = "rulz";

        //Connect to database
        mysql_connect($dbhost, $dbuser, $dbpass) or die("Could not connect: " . mysql_error());
        mysql_select_db($dbname) or die(mysql_error());

    The code to backup the database:

        // Grab the time to know when this post was submitted
        $time = date('Y-m-d-H-i-s');
        $tableName = 'foo';
        $backupFile = '/sql/backup/' . $time . '.sql';
        $query = "SELECT * INTO OUTFILE '" . $backupFile . "' FROM " . $tableName;
        $result = mysql_query($query) or die("Database query died: " . mysql_error());

    My brain is hurting near to the end of the day, so no doubt I've missed something very obvious. Thanks in advance to anyone helping me out.
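    A sketch answering the embedded question: the dump ends up wherever $backupFile points, and with no directory prefix that is the current working directory of the PHP process, so an absolute path makes the destination explicit. Two other things worth noting about this setup: "-p $dbpass" with a space makes mysqldump treat $dbpass as a database name and prompt for a password (the value must be attached to the flag), and shell arguments should be escaped. The backup directory below is illustrative:

        // any writable absolute path works here
        $backupFile = '/var/backups/' . $dbname . date('Y-m-d-H-i-s') . '.gz';
        $command = sprintf(
            'mysqldump --opt -h %s -u %s --password=%s %s | gzip > %s',
            escapeshellarg($dbhost),
            escapeshellarg($dbuser),
            escapeshellarg($dbpass),
            escapeshellarg($dbname),
            escapeshellarg($backupFile)
        );
        system($command);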

    Read the article

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :) Imagine the following scenario: a user can create a post, and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster:

    1. I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20, I lock the thread.

    2. After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread.

    For further information: I am using a SQL Server database and the LINQ to SQL entity model. I'd be glad if you could tell me your opinions on the two approaches, or share another, faster approach. Best Regards, Kiril
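    A sketch of approach 1 made safe against two replies arriving at the same time, with illustrative names; the trick is to increment and check the limit in one statement, since a single UPDATE is atomic:

        UPDATE Posts
        SET    Counter = Counter + 1
        WHERE  PostID = @InitialPostID AND Counter < 20;

        -- zero rows affected means the thread is already full
        IF @@ROWCOUNT = 0
            RAISERROR('Thread is locked.', 16, 1);

    With an index on ThreadID, approach 2's COUNT(*) is also cheap at 20 rows per thread, but it needs extra care to avoid a race between counting and inserting.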

    Read the article

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to:

    - Read data directly from the mainframe and produce reports based on it, possibly using SQL queries.
    - Read and write data from custom .NET applications.

    We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools; we already know how to generate reports and write custom applications using SQL, .NET, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for some reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it.

    The first thing is that accessing data should be easy. So I thought I should use LINQ to SQL with MySQL, but somewhere I read that it is not directly supported; however, the MySQL .NET connector adds support for the Entity Framework. I don't know the pros and cons of that. I would love it if I could implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that if we use the Entity Framework, it fetches a lot of data and then filters it. Is that right?

    So the questions basically are:

    - Is LINQ with MySQL possible? If yes, where can I get more details on it?
    - What are the pros and cons of using the Entity Framework with MySQL?
    - Will it be easy to access data using the Entity Framework with MySQL?
    - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use the Entity Framework with MySQL)?
    - Does it fetch a huge amount of data from the database and then apply filters on it?

    If that sounds like too many questions from my side, then if you can just let me know what you would do (with a considerable reason) in this situation as an experienced person in this area, that should answer my question.
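    On the last question, a sketch of the distinction that usually matters, with illustrative types: whether the filter runs in the database or in memory depends on where the query stops being IQueryable. A repository that exposes IQueryable<T> lets the logic layer add Where clauses that the provider still translates into SQL; materializing early (calling ToList(), or returning IEnumerable<T>) is what causes "fetch a lot, then filter":

        public interface IRepository<T> where T : class
        {
            IQueryable<T> Query();
        }

        // the filter is composed in the logic layer but executed by the DB:
        var activeUsers = repository.Query()
                                    .Where(u => u.IsActive)
                                    .ToList();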

    Read the article

  • Long running transactions with Spring and Hibernate?

    - by jimbokun
    The underlying problem I want to solve is running a task that generates several temporary tables in MySQL, which need to stay around long enough to fetch results from Java after they are created. Because of the size of the data involved, the task must be completed in batches. Each batch is a call to a stored procedure called through JDBC. The entire process can take half an hour or more for a large data set.

    To ensure access to the temporary tables, I run the entire task, start to finish, in a single Spring transaction with a TransactionCallbackWithoutResult. Otherwise, I could get a different connection that does not have access to the temporary tables (this would happen occasionally before I wrapped everything in a transaction). This worked fine in my development environment; however, in production I got the following exception:

        java.sql.SQLException: Lock wait timeout exceeded; try restarting transaction

    This happened when a different task tried to access some of the same tables during the execution of my long-running transaction. What confuses me is that the long-running transaction only inserts into or updates temporary tables; all access to non-temporary tables is via selects only. From what documentation I can find, the default Spring transaction isolation level should not cause MySQL to block in this case.

    So my first question: is this the right approach? Can I ensure that I repeatedly get the same connection through a Hibernate template without a long-running transaction? If the long-running transaction approach is the correct one, what should I check in terms of isolation levels? Is my understanding correct that the default isolation level in Spring/MySQL transactions should not lock tables that are only accessed through selects? What can I do to debug which tables are causing the conflict, and prevent those tables from being locked by the transaction?

    Read the article

  • Zend Framework: Zend_DB Error

    - by Sergio E.
    I'm trying to learn ZF, but got a strange error after 20 minutes :)

        Fatal error: Uncaught exception 'Zend_Db_Adapter_Exception' with message
        'Configuration array must have a key for 'dbname' that names the database instance'

    What does this error mean? I have the DB information in my config file:

        resources.db.adapter=pdo_mysql
        resources.db.host=localhost
        resources.db.username=name
        resources.db.password=pass
        resources.db.dbname=name

    Any suggestions?

    EDIT: This is my model file, app/models/DbTable/Bands.php:

        class Model_DbTable_Bands extends Zend_Db_Table_Abstract
        {
            protected $_name = 'zend_bands';
        }

    Index controller action:

        public function indexAction()
        {
            $albums = new Model_DbTable_Bands();
            $this->view->albums = $albums->fetchAll();
        }

    EDIT: All the code. bootstrap.php:

        protected function _initAutoload()
        {
            $autoloader = new Zend_Application_Module_Autoloader(array(
                'namespace' => '',
                'basePath'  => dirname(__FILE__),
            ));
            return $autoloader;
        }

        protected function _initDoctype()
        {
            $this->bootstrap('view');
            $view = $this->getResource('view');
            $view->doctype('XHTML1_STRICT');
        }

        public static function setupDatabase()
        {
            $config = self::$registry->configuration;
            $db = Zend_Db::factory($config->db);
            $db->query("SET NAMES 'utf8'");
            self::$registry->database = $db;
            Zend_Db_Table::setDefaultAdapter($db);
            Zend_Db_Table_Abstract::setDefaultAdapter($db);
        }

    IndexController.php:

        class IndexController extends Zend_Controller_Action
        {
            public function init()
            {
                /* Initialize action controller here */
            }

            public function indexAction()
            {
                $albums = new Model_DbTable_Bands();
                $this->view->albums = $albums->fetchAll();
            }
        }

    configs/application.ini (database changed and password provided):

        [development : production]
        phpSettings.display_startup_errors = 1
        phpSettings.display_errors = 1
        db.adapter = PDO_MYSQL
        db.params.host = localhost
        db.params.username = root
        db.params.password = pedro
        db.params.dbname = test

    models/DbTable/Bands.php:

        class Model_DbTable_Bands extends Zend_Db_Table_Abstract
        {
            protected $_name = 'cakephp_bands';

            public function getAlbum($id)
            {
                $id = (int)$id;
                $row = $this->fetchRow('id = ' . $id);
                if (!$row) {
                    throw new Exception("Could not find row $id");
                }
                return $row->toArray();
            }
        }
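    One thing that stands out, assuming the standard ZF1 application resource is bootstrapping the adapter: connection settings in the resources.db.* block must live under a params subkey, and the quoted exception is exactly what Zend_Db throws when it finds no dbname there. A sketch of the expected layout:

        ; connection parameters go under "params"
        resources.db.adapter = pdo_mysql
        resources.db.params.host = localhost
        resources.db.params.username = name
        resources.db.params.password = pass
        resources.db.params.dbname = name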

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8-CPU server with MySQL, and I start 7 delayed_job processes:

        RAILS_ENV=production script/delayed_job -n 7 start

    Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same record/row in the delayed_jobs table). I checked the code of the delayed_job plugin but cannot find a lock directive in the way I would expect. I would think each process should lock the table before executing an UPDATE on the locked_by column, but they lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has a higher priority than SELECT, but I don't think that has any effect in this case. My understanding of the multi-threaded situation is:

        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]

    I think in some cases two or more processes can fetch the same job and start processing it in parallel.

    Q2: Is 7 delayed_job workers a good number for an 8-CPU server? Why yes/no? Thx 10x!
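    On Q1, a sketch of why a single guarded UPDATE can be enough, assuming the plugin claims jobs with a statement shaped like the one below (values are illustrative): the WHERE clause re-checks that the row is still unclaimed, and since an individual UPDATE statement is atomic, only one worker's statement matches the row; the others see zero affected rows and skip the job. The race in the interleaving above cannot happen, because the two "Update locked_by" steps cannot both report success for the same row.

        UPDATE delayed_jobs
        SET    locked_by = 'host:1234', locked_at = NOW()
        WHERE  id = 42
          AND  (locked_by IS NULL OR locked_at < NOW() - INTERVAL 4 HOUR);
        -- the worker proceeds only if this affected exactly 1 row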

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were no code changes (they just powered off, moved, re-racked and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to the network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I checked the number of file descriptors, and I see quite a few entries which look like:

        java  18510  root  8556u  sock  0,4  1555182  can't identify protocol
        java  18510  root  8557u  sock  0,4  1555320  can't identify protocol
        java  18510  root  8558u  sock  0,4  1555736  can't identify protocol
        java  18510  root  8559u  sock  0,4  1555883  can't identify protocol

    If I count the open file descriptors every minute, I see the number growing by 12 per minute. I have no idea what these sockets are. I've undeployed my application, so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish, or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? thanks, chuck

    Read the article

  • Poor DB4O performance - What am I doing wrong?

    - by Jon
    I read Rob Conery's post about object databases and thought I'd give it a go. I have a class, let's say Book, with 15 properties: mostly string, some int, one DateTime and 2 decimals. I used Rob's code to open the database file etc., testing on a WinForms project, not a Web project. So I did the following:

        for (int i = 0; i < 1000000; i++)
        {
            var book = new Book();
            // assign properties
            s.Save(book);
        }
        s.CommitChanges();

    This took at least a couple of minutes. I then tried to retrieve all the data to test the speed:

        var books = s.All<Book>();

    This one line was quick. However, then calling books.Count(), as well as a separate test doing a foreach loop and incrementing a counter, took a couple of minutes. This seems quite slow to me. Am I doing something wrong, or am I expecting too much?

    Read the article

  • Javascript library: to obfuscate or not to obfuscate - that is the question

    - by morpheous
    I need to write a GUI-related JavaScript library. It will give my website a bit of an edge (in terms of functionality I can offer), at least until my competitors play with it long enough to figure out how to write it themselves. I can accept the fact that it will be emulated over time; that's par for the course (it's part of business). However, what I cannot bear is the idea of effectively handing over all the hard work that went into the library to my competitors, by shipping plain JavaScript that anyone can download and use. It is an established fact that no one in the industry I am "attacking" has this functionality, so the value of such a library is undeniable and is not up for discussion (i.e. that's not what I'm asking here). What I am seeking to find out are the pros and cons of obfuscating a JavaScript library, so that I can come to a final decision. Two of my biggest concerns are debugging, and subtle errors that may be introduced by the obfuscator. I would like to know:

    - How can I manage those risks (being able to debug faulty code, ensuring/minimizing against obfuscation errors)?
    - Are there any good-quality, industry-standard obfuscators you can recommend (preferably something you use yourself)?
    - What are your experiences of using obfuscated code in a production environment?

    Read the article

  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento, and I have a case where a client has a running ecommerce website built with Magento, running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put all the files in there, and it should run, right? So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, probably a lot of paths too, and the databases need to be replicated as well. What are the typical things I should get right? The database is one; also, the EC2 instance uses nginx while I use plain Apache, so I guess that won't work straight out of the box, and I probably will have to install nginx and a lot of other stuff. But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!
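    For the config side, a sketch assuming Magento 1.x conventions: the database credentials live in app/etc/local.xml and need to point at the new server (the values below are placeholders):

        <connection>
            <host><![CDATA[localhost]]></host>
            <username><![CDATA[magento_user]]></username>
            <password><![CDATA[secret]]></password>
            <dbname><![CDATA[magento_db]]></dbname>
        </connection>

    The store's base URLs are kept in the database rather than in a file (the web/unsecure/base_url and web/secure/base_url rows of the core_config_data table), so those typically have to be updated too, and the var/cache directory cleared, before the copy will serve pages.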

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it is currently outputting Latin1 (ISO-8859-1) to the web browser. These are the components:

    - MySQL: all data is stored with the Latin1 character set
    - PHP: all PHP text files are stored on disk with Latin1 encoding
    - HTML: the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag

    So, I'm trying to understand how the encodings of the different parts come into play in my workflow. If I open a PHP script, change its encoding within the text editor to UTF-8, save it back to disk and reload the web browser, the text is all messed up, unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML meta tag, the browser will read the page incorrectly.

    So yeah, I realise that if I want to "upgrade" to UTF-8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
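    On the MySQL side, a sketch of the usual per-table conversion, to be rehearsed on a copy of the data first; CONVERT TO re-encodes the stored text and changes the column character sets in one statement:

        -- repeat per table:
        ALTER TABLE some_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

        -- new tables then default to the database-level setting:
        ALTER DATABASE some_db CHARACTER SET utf8 COLLATE utf8_general_ci;

    For the PHP files, command-line converters such as iconv or recode can re-encode source files in bulk; and note that the client connection has a charset of its own (SET NAMES utf8) that has to agree with the rest of the stack.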

    Read the article

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented dynamic DataSource routing for Spring+Hibernate according to this article. I have several databases with the same structure, and I need to select which DB will run each specific query. Everything works fine on localhost, but I am worried about how this will hold up in a real web-site environment. The article uses a static context holder to determine which datasource to use:

        public class CustomerContextHolder {
            private static final ThreadLocal<CustomerType> contextHolder =
                new ThreadLocal<CustomerType>();

            public static void setCustomerType(CustomerType customerType) {
                Assert.notNull(customerType, "customerType cannot be null");
                contextHolder.set(customerType);
            }

            public static CustomerType getCustomerType() {
                return (CustomerType) contextHolder.get();
            }

            public static void clearCustomerType() {
                contextHolder.remove();
            }
        }

    It is wrapped inside a ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel?

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        // <another user will switch customer type here to CustomerType.SILVER in another request>
        List<Item> goldItems = catalog.getItems();

    Is every web request wrapped into its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How do I make sure that nobody else will switch the datasource until I run the required query for the current user? Thanks.
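    On the first two questions: servlet containers dispatch each request on its own worker thread, and a ThreadLocal gives every thread a private slot, so a value set by one request is invisible to requests running on other threads; the caveat is that pooled threads are reused, which is why clearCustomerType() should run at the end of every request (a filter or interceptor is the usual place). A sketch of the routing side, using Spring's actual extension point for this pattern:

        import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

        public class CustomerRoutingDataSource extends AbstractRoutingDataSource {
            @Override
            protected Object determineCurrentLookupKey() {
                // read per connection acquisition, from the current thread only
                return CustomerContextHolder.getCustomerType();
            }
        }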

    Read the article

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-words issue in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way than this, so as to have one or more Files point to a single Record entry and one or more Records point to a single User. BTW, the File names are unique.

    Files table:

        id  name
        1   name1
        2   name2
        3   name3
        4   name4

    Records table:

        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

    Users table:

        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread out the database into different tables to be able to easily add new columns in the future.
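    A sketch of one way to remove the duplication, assuming a shared Record (and a shared User) should exist only once: replace the direct foreign keys with join models, so many Files can point at one Record row and many Records at one User row. The class and table names below are illustrative:

        class File < ActiveRecord::Base
          has_many :file_records
          has_many :records, :through => :file_records
        end

        class FileRecord < ActiveRecord::Base
          belongs_to :file
          belongs_to :record
        end

        class Record < ActiveRecord::Base
          has_many :file_records
          has_many :files, :through => :file_records
        end

    The same join-model pattern applies between Record and User; with it, "ForDaisy1" and "Daisy" each appear exactly once in their tables.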

    Read the article

  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stackoverflow, yeah, it's spring and a lot of things are happening to me... I'm also changing some software on my computer, because suddenly everything seems boring after starting my laptop. I even changed my wallpaper!!! Besides that, I'm looking for a new database frontend, and after trying Google with several queries I didn't find the right software. You have to know, my laptop and me are very very special :) I'm looking for a database frontend with the following features:

    - can access PostgreSQL and MySQL databases
    - can handle schemata
    - offers a nice SQL query tool
    - supports import and export functionality (something like tab-separated text files)
    - is free
    - looks awesome: every time a colleague comes to my office he must get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality which can simplify a programmer's life. However, now you have to buy a license... I'm a scientist, and money for software is limited =) So it's my first time (question) here at Stackoverflow; please be cheerful :)

    Read the article

  • Advanced All In One .NET Framework

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from the user and save it to a database, across about 30 screens, and I would like to find a framework that provides the maximum number of these features out of the box:

    - .NET, C#, Windows Forms
    - unit testing, continuous integration
    - screens with lists, combo boxes, text boxes, and add/delete/save/cancel actions that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find their way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows, search forms
    - ability to grant access to some functionality according to user profiles
    - MVP/MVVM or whatever design patterns apply
    - either code generation from the database to C# classes, or generation of the database schema from C# classes
    - some kind of database versioning/upgrade, to easily update the database when I release patches to the application once it is in production
    - automatic control resizing
    - code metrics analysis
    - some code generator I can run against my entities that would generate a rough form I can rearrange afterwards
    - code documentation generator
    - ...

    Any ideas? I know it's a lot, but I really would like to build upon existing code so I can focus on business rules. Could splitting the requirements across 3 or 4 existing open-source frameworks be possible? Do you have any suggestions to add to the list before starting? What open-source tools would you use to achieve these?

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network, and need help dealing with file uploads under high traffic.

    Process overview: a user uploads files to SomeServer (not mine). SomeServer then responds with a JSON string, and my web app should store that JSON response.

    Option 1 - save, cURL POST, delete tmp. The naive way I made it work: the user uploads files to MyWebApp, and MyWebApp cURLs the file onwards to SomeServer, getting the response.

    Option 2 - JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer, from within an iFrame, and MyWebApp gets the response through JavaScript. But this is(?) impossible due to the Same-Origin Policy, isn't it?

    Option 3 - nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp. Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?
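    A sketch of the option 3 location block, with placeholder upstream names; proxy_pass and client_max_body_size are standard nginx directives, though handing the JSON response back to MyWebApp still needs its own wiring (for example, proxying through an app endpoint that records the body before returning it):

        location /fileupload {
            client_max_body_size 50m;   # allow large uploads through the proxy
            proxy_pass http://upload.someserver.example/endpoint;
        }

    On option 2: the Same-Origin Policy does block reading a cross-domain iFrame's content from JavaScript, which is why cross-domain upload tricks of this era relied on the remote server cooperating (e.g. redirecting back to a page on your own domain carrying the result).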

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting the rows, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down the inserts by 180x just by using 3 foreign keys.

    Oh, and the disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3 MB/s * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row INSERT and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10 x 16 MB log segments?

    Table layout: id serial primary key, a bunch of int32, and 3 foreign keys to:
    - a small table, 198 rows, 16 kB on disk
    - a large table, 1.2M rows, 59 MB of data + 89 MB of indexes on disk
    - a large table, 2.2M rows, 198 MB + 210 MB

    So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by saving bla_id x3 myself and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
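    A sketch of the usual workaround for bulk rebuilds, with placeholder constraint and column names: drop the foreign keys before the hourly load and re-add them afterwards. Re-adding a constraint validates all rows in one pass, which is typically far cheaper than 100k per-row checks against the referenced tables:

        ALTER TABLE new_table DROP CONSTRAINT new_table_small_id_fkey;

        -- bulk INSERT / COPY here

        ALTER TABLE new_table ADD CONSTRAINT new_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);

    Since the referenced columns are primary keys (already indexed), the win here comes from batching the validation rather than paying the lookup once per inserted row.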

    Read the article

  • One config file for multiple environments

    - by ho
    I'm currently working with systems that have quite a lot of configuration settings that are environment-specific (Dev, UAT, Production). Does anyone have any suggestions for minimizing the changes needed to the config file when moving between environments, as well as minimizing the duplication of data in the config file? It's mostly application settings rather than user settings. The way I'm doing it at the moment is something similar to this:

        <DevConnectionString>xyz</DevConnectionString>
        <DevInboundPath>xyz</DevInboundPath>
        <DevProcessedPath>xyz</DevProcessedPath>

        <UatConnectionString>xyz</UatConnectionString>
        <UatInboundPath>xyz</UatInboundPath>
        <UatProcessedPath>xyz</UatProcessedPath>
        ...

        <Environment>Dev</Environment>

    And then I have a class that reads in the Environment setting via the My.Settings class (it's a VB project) and uses that to decide which of the other settings to retrieve. This leads to too much duplication, though, so I'm not sure if it's worth it.
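    A sketch of one common alternative, assuming plain appSettings are acceptable in place of My.Settings: the configSource attribute swaps an entire section for an environment-specific file, so every key keeps a single un-prefixed name and only one file reference changes per deployment (file names below are illustrative):

        <!-- in app.config / web.config -->
        <appSettings configSource="appSettings.Dev.config" />

        <!-- appSettings.Dev.config -->
        <appSettings>
            <add key="ConnectionString" value="xyz" />
            <add key="InboundPath" value="xyz" />
            <add key="ProcessedPath" value="xyz" />
        </appSettings>

    This removes both the Dev/Uat/Prod prefixes and the Environment switch, at the cost of maintaining one small file per environment.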

    Read the article

  • Max Daily Budget exceeded and Billing Status "Changing Daily Budget"

    - by draftpik
    We've exceeded the Max Daily Budget for our app, but we can't increase the budget due to a serious flaw in Google's billing system. Google App Engine and Google Wallet do not have very capable support for multiple sign-in. As a result, when I went to change the budget, it used the wrong Google Wallet account (a different Google Account I was signed in as). I had to go back and try again, but now our GAE app shows the following status:

        Billing Status: Changing Daily Budget
        Your account has been locked while we process your budget changes. If you were redirected
        to Google Checkout but did not complete the process, your settings will remain unchanged.
        (You will be able to make changes to your budget settings again once the outstanding
        payment is processed.)

    Now I'm completely prevented from making any billing changes, our app is shut off (over quota), and there is nothing I can do to fix it. This is a seriously fundamental flaw in App Engine's billing system and Google Wallet integration. Has anyone run into this before? Is there a workaround anyone is aware of? Right now, our production app is completely down thanks to this issue. Any help you can offer would be greatly appreciated. If you're from Google and you might be able to help on the backend, our app id is "nhldraftpik". Thanks! Brian

    Read the article
