Search Results

Search found 31356 results on 1255 pages for 'database backups'.

  • Is a class that is hard to unit test badly designed?

    - by Extrakun
    I am now writing unit tests for an application that was developed over the past year, before I started unit testing diligently. I have realized that the classes I wrote are hard to unit test, for the following reasons: they rely on loading data from the database, so I have to set up a row in a table just to run a unit test (and I am not testing database capabilities); and they require a lot of other external classes just to get the class under test into its initial state. On the whole, there doesn't seem to be anything wrong with the design except that it is too tightly coupled (which by itself is a bad thing). I figure that if I had written automated test cases alongside each class, thereby ensuring that I didn't heap extra dependencies or coupling onto it, the classes might be better designed. Does this reasoning hold water? What are your experiences?
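
    A common refactoring that illustrates the point is to put the data access behind an interface and inject it, so a test can substitute an in-memory fake for the database. The sketch below is a minimal illustration of that idea, not code from the question; TaskRepository, TaskService and Task are hypothetical names.

        interface TaskRepository {
            Task findById(int id);
        }

        class TaskService {
            private final TaskRepository repo;   // dependency injected, never constructed here

            TaskService(TaskRepository repo) { this.repo = repo; }

            boolean isTaskDone(int id) {
                return repo.findById(id).isDone();
            }
        }

        // In a unit test, a lambda stands in for the whole database layer:
        // TaskService service = new TaskService(id -> completedTask);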

  • How can you configure or extend BITS (Background Intelligent Transfer Service) to read files from a database?

    - by Mark
    I have an ASP.NET load-balanced application (web service and website) running on SQL Server, and I need to provide large files for download. Because of the load balancing, the files are stored in the SQL database rather than on the file system. BITS seems to be the best approach, and I have full control of the client. However, I don't know how to configure BITS to read the file from the database. I know how to write the C# code for that, but I don't know how to get BITS to hook into it instead of reading the file from the file system. Any ideas?
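
    BITS downloads over plain HTTP and requires the server to honour Range requests, so one route is to point it at a custom handler URL that serves the BLOB in ranges. The sketch below is a hedged illustration of that idea, not a confirmed BITS configuration; the table, column and connection-string names are hypothetical, and it buffers the whole BLOB for brevity.

        using System;
        using System.Configuration;
        using System.Data.SqlClient;
        using System.Web;

        public class DbFileHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext ctx)
            {
                byte[] data = LoadBlob(ctx.Request.QueryString["id"]);
                long start = 0, end = data.Length - 1;

                string range = ctx.Request.Headers["Range"];        // e.g. "bytes=0-1048575"
                if (!string.IsNullOrEmpty(range))
                {
                    string[] parts = range.Substring("bytes=".Length).Split('-');
                    start = long.Parse(parts[0]);
                    if (parts[1].Length > 0) end = long.Parse(parts[1]);
                    ctx.Response.StatusCode = 206;                  // Partial Content
                    ctx.Response.AddHeader("Content-Range",
                        string.Format("bytes {0}-{1}/{2}", start, end, data.Length));
                }
                ctx.Response.AddHeader("Accept-Ranges", "bytes");
                ctx.Response.AddHeader("Content-Length", (end - start + 1).ToString());
                ctx.Response.OutputStream.Write(data, (int)start, (int)(end - start + 1));
            }

            private static byte[] LoadBlob(string id)
            {
                string cs = ConfigurationManager.ConnectionStrings["Db"].ConnectionString;
                using (var cn = new SqlConnection(cs))
                using (var cmd = new SqlCommand("SELECT Content FROM Files WHERE Id = @id", cn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    cn.Open();
                    return (byte[])cmd.ExecuteScalar();
                }
            }
        }

    A production handler would also answer HEAD requests and stream the BLOB in chunks (for example with CommandBehavior.SequentialAccess) instead of buffering it in memory.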

  • Zend_Session: unserialize session data

    - by takeshin
    I'm using a session SaveHandler to persist session data in the database. Here is a sample session_data column from the database:

        Messenger|a:1:{s:13:"page_messages";a:0:{}}userSession|a:1:{s:7:"referer";s:32:"http://cms.dev/user/profile/view";}Zend_Auth|a:1:{s:7:"storage";O:19:"User_Model_Identity":3:{s:2:"id";s:1:"1";s:8:"username";s:13:"administrator";s:4:"slug";s:13:"administrator";}}

    I want to delete the Zend_Auth object from this session data. How can I unserialize those objects and remove the one I need? I suspect that I don't have to write a custom parser and that Zend_Session already has a method for this. I have tried different combinations of unserialize(), but it still returns false. I'm using the autoloader from ZF 1.10.2 and Doctrine 1.2.
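
    The raw column is in PHP's session-encoding format (name|serialized-value pairs), which plain unserialize() cannot parse; that is why it returns false. session_decode() understands the format. A minimal sketch, assuming the row has been fetched into $raw and User_Model_Identity is autoloadable:

        <?php
        session_start();                   // session_decode() needs an active session
        session_decode($raw);              // populates $_SESSION from the raw string
        unset($_SESSION['Zend_Auth']);     // drop the namespace we want removed
        $cleaned = session_encode();       // re-encode, ready to write back to the table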

  • Sign Up/Login system for multi-domain/multi-server

    - by David
    I'm thinking about a good solution for implementing a sign-up/login system that works across different domains and servers. A working example is Olx (you can register on one domain, and your login will work on the rest of the domains). The scenario is that every domain (one per country) has its own database, and there will be, for example, two servers, each hosting 50% of the domains (and thus 50% of the databases). What would you suggest to start with? Database: MySQL 5.1. Server-side language: PHP 5.3 (I will be using Symfony 1.4, so suggestions specific to that framework are also welcome, although optional).
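
    One common pattern for this setup is a single shared authentication database plus a signed, expiring token passed between domains, since session cookies do not cross domain boundaries. The sketch below is only an assumption-laden outline of that pattern; $secret, the redirect target and the 5-minute validity window are all hypothetical choices.

        <?php
        // Issuing domain: after a successful login against the shared auth DB
        $payload = $userId . '|' . (time() + 300);                        // expires in 5 minutes
        $token   = $payload . '|' . hash_hmac('sha256', $payload, $secret);
        header('Location: http://example-other-domain/sso?t=' . urlencode($token));

        // Receiving domain: verify signature and expiry, then open a local session
        list($userId, $expires, $sig) = explode('|', $_GET['t']);
        $expected = hash_hmac('sha256', $userId . '|' . $expires, $secret);
        if ($sig === $expected && time() < (int)$expires) {  // a timing-safe compare is preferable where available
            session_start();
            $_SESSION['user_id'] = $userId;
        }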

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the write speed of the rented machines (ten 32-bit computers) and PostgreSQL query performance; I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in each HDF5 file. The HDF5 implementation is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write the data in parallel to speed up my data handling. As far as the PostgreSQL tables are concerned, the overall record count is 140 million, and I have 5 tables related by primary/foreign keys. I am not using joins, as they would not be scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1   (primary key, indexed)
        select q_t from x.qt where q_id=2     (non-primary key, but indexed)

    (and similarly, five such queries). Each computer writes two HDF5 files, so the total comes to around 20 files.

    Some calculations and statistics:
    Total number of records: 143,700,000
    Records per file: 143,700,000 / 20 = 7,185,000
    Total rows per file (table plus arrays): 7,185,000 * 5 = 35,925,000

    Current PostgreSQL configuration: my current machine has 8 GB of RAM and a 2nd-generation i7 processor, and I changed the following in the PostgreSQL configuration file: shared_buffers: 2 GB; effective_cache_size: 4 GB.

    Note on current performance: after running for about ten hours, the number of rows written per file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments. Please suggest how I might improve this.

    Questions:
    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores and about 2 GB of RAM)? If so, what is suggested or preferable?
    2. If I change my PostgreSQL configuration and increase the RAM, will it speed up the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks,
    Sree Aurovindh V
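
    Per-row indexed SELECTs are usually the bottleneck in this pattern: 143 million point lookups cost far more than streaming each table once in key order and appending in bulk. A minimal sketch of the streaming approach, assuming psycopg2 and a simplified two-column stand-in for x.train:

        import psycopg2
        import tables

        # Hypothetical two-column layout standing in for x.train
        class TrainRow(tables.IsDescription):
            tr_id = tables.Int32Col()
            value = tables.Float64Col()

        conn = psycopg2.connect("dbname=x")
        cur = conn.cursor(name="train_stream")   # named cursor => server-side, streams rows
        cur.itersize = 10000                     # rows per round trip
        cur.execute("SELECT tr_id, value FROM x.train ORDER BY tr_id")

        h5 = tables.open_file("train.h5", mode="w")
        tbl = h5.create_table("/", "train", TrainRow, expectedrows=143700000)
        row = tbl.row
        for tr_id, value in cur:
            row["tr_id"] = tr_id
            row["value"] = value
            row.append()                         # buffered append; PyTables flushes in blocks
        tbl.flush()
        h5.close()
        conn.close()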

  • SQL Server Import table keeping default values

    - by Chrissi
    I am importing a table from one database to another in SQL Server 2008 by right-clicking the target database and choosing Tasks > Import Data... When I import the table I get the column names, types, and all the data fine, but I lose the primary key, the identity specification, and all the default values that were set on the source table. So now I have to set all the default values for each column again manually. Is there any way to bring the default values across with the import, or even to recover them afterwards with a query? I am VERY new to this and flailing in the dark, so forgive me if this is a really stupid question...
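
    The Import/Export Wizard copies data but not constraints. The definitions are still readable from the source database's catalog views, though, and can be reapplied on the target; a sketch (dbo.MyTable is a placeholder):

        -- List the default-constraint definitions on the source table
        SELECT c.name  AS column_name,
               dc.name AS constraint_name,
               dc.definition
        FROM sys.default_constraints AS dc
        JOIN sys.columns AS c
          ON c.object_id = dc.parent_object_id
         AND c.column_id = dc.parent_column_id
        WHERE dc.parent_object_id = OBJECT_ID('dbo.MyTable');

        -- Each row can then be recreated on the target, for example:
        -- ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_Col DEFAULT ((0)) FOR Col;

    Alternatively, right-clicking the source table and choosing Script Table as > CREATE To produces the full definition, keys and defaults included.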

  • Does the Frontier Kernel have a future?

    - by pbreitenbach
    Whatever you think about Dave Winer, Frontier is an incredible piece of software. It includes quite a few advances that have yet to be surpassed: the object database, the database viewer, the scripting environment, the hierarchy-driven website generation scheme, the elegant scripting language, the mixing of scripts and compilation, rapid prototyping, the built-in web server, the simple debugger, cross-platform support, the simple UI, etc. My question: Dave turned Frontier over to open source, and there is a Frontier Kernel project, but it is fairly quiet. Does Frontier have a future from here?

  • How can I get notifications from the server to an Android device?

    - by Vaibs
    I am creating an app in which users share some information. I store this data in a database through servlets; that is, I call my own servlets, which take the data through a URL and store it in the database. I want the other users of the same app to be notified that new information is available, and in turn to receive the information the other user posted. This could be done with polling or push, but polling will use a lot of battery power. I have tried C2DM, but it is not working for me, so I am looking for some other mechanism to implement this. Please suggest a way to make it work, with an example if you have come across one.

  • 'Out of memory' exception with a SQL Server 2005 XML column

    - by Raghuraman
    Hi all, I am developing a Windows Forms application with a SQL Server 2005 database as the backend, and the database has an XML column. I use the UltraWinGrid control in my application. I obtain the XML of the DataSet that is bound to the grid and pass it as a parameter value to a stored procedure, which inserts the value into the XML column. The columns in my grid are dynamic, so there can be any number of them. I got an 'out of memory' exception in the dataSet.GetXml() statement, I believe because there were so many columns. So instead I used the dataSet.WriteXml() method to store all the XML content in an XML file, loaded the file into an XmlDocument object, and then passed an XmlNodeReader as the value of the stored procedure parameter. Now, while executing the stored procedure, I am getting the same 'out of memory' exception. How can I resolve this issue?
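
    Loading the file into an XmlDocument re-materializes the entire document in memory, which is likely where the exception comes from. A forward-only XmlReader over the file stream avoids that; the sketch below assumes the XML has already been written to disk with WriteXml(), and the path, procedure and parameter names are illustrative.

        using System.Data;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using System.IO;
        using System.Xml;

        string connectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

        using (var fs = new FileStream(@"C:\temp\grid.xml", FileMode.Open, FileAccess.Read))
        using (var reader = XmlReader.Create(fs))            // forward-only, low memory
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.SaveGridXml", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@payload", SqlDbType.Xml).Value = new SqlXml(reader);
            cn.Open();
            cmd.ExecuteNonQuery();
        }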

  • Sending BLOBs in a JSON service... how?

    - by Marten Sytema
    Hello, I have a web service (i.e. a servlet) implemented in Java. It gets some data from a MySQL table in which one column is of type BLOB (an image) and the other columns are plain text. Normally I would store the file outside the database with a pointer to it in the database, but due to circumstances I now have to use this BLOB column... What is the proper way to send this? How do I encode the image in a JSONObject, and how do I parse (and RENDER!) it on the other side? I want to use JSONP to avoid having to proxy the call through the consumer's web server, so that the consumer can just put in a script tag pointing to the web service, calling a callback. Any thoughts on how to handle images in this situation? Thoughts on performance etc. are also interesting!
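
    JSON has no binary type, so the usual route is Base64: encode the BLOB into a string field and render it on the client as a data URI, at the cost of roughly 33% size overhead. A minimal sketch, assuming org.json on the classpath and the servlet's ResultSet and writer in scope (java.util.Base64 needs Java 8; older JVMs can use commons-codec instead):

        import java.util.Base64;
        import org.json.JSONObject;

        // Server side, inside the servlet's doGet():
        byte[] imageBytes = resultSet.getBytes("image");            // the BLOB column
        JSONObject item = new JSONObject();
        item.put("title", resultSet.getString("title"));
        item.put("image", Base64.getEncoder().encodeToString(imageBytes));
        out.print("handleData(" + item.toString() + ");");          // JSONP callback

        // Client side (JavaScript), rendering via a data URI:
        //   img.src = "data:image/png;base64," + item.image;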

  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily because: roaming profiles allow users to log on to any machine and have their profile; folder redirection allows users' My Documents, Desktop, etc. to be backed up without the need to log off at the end of the day (the servers can run their backups overnight, and no files are missed because a user stayed logged on); and redirection largely alleviates the slow log-in times caused by large profiles. My question is: if some of the folders are redirected, and therefore not part of the roaming profile, what happens on machines that truly roam (i.e. laptops)? If offline files or a cache is involved, does the problem of having to log off come back? And with both enabled, is there any duplication, i.e. if I have a users$ share and a profiles$ share, would I have Desktop twice, for example?

  • How do I hide a data field in a List in J2ME?

    - by pi
    Hi, I am working on a J2ME app which has dynamically generated Lists. Items on this list may subsequently be selected, and the selection processed within the commandAction block. Is there a way to have the IDs of the items populating the List (fetched from a remote database) included in the List item definition, as in: this.append("A", null); this.append("B", null); or: String[] arrayOfValues = {"A", "B"}; new List("Menu", List.IMPLICIT, arrayOfValues, null); such that when an item is selected, say A, I also have its database ID for further processing? Is it possible to hide a field of data? Thanks.
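
    An lcdui List element only stores a label and an image, so there is nowhere to attach an ID directly; the usual workaround is a parallel array kept in the same order as the List. A minimal sketch of that idea (the IDs are hypothetical):

        import javax.microedition.lcdui.Command;
        import javax.microedition.lcdui.Displayable;
        import javax.microedition.lcdui.List;

        // Parallel arrays: ids[i] belongs to the item displayed at index i
        String[] labels = { "A", "B" };
        String[] ids    = { "17", "42" };               // database IDs, hypothetical
        List menu = new List("Menu", List.IMPLICIT, labels, null);

        // Later, inside commandAction(Command c, Displayable d):
        int index = menu.getSelectedIndex();
        String dbId = ids[index];                       // the "hidden" field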

  • Translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications". In domain-driven design, a specification is used to check whether some object, also known as the candidate, complies with a (domain-specific) requirement. For example, the specification 'IsTaskDone' goes like:

        class IsTaskDone extends Specification<Task> {
            boolean isSatisfiedBy(Task candidate) {
                return candidate.isDone();
            }
        }

    The above specification can be used for many purposes, e.g. to validate whether a task has been completed, or to filter all completed tasks from a collection. However, I want to re-use this nice, domain-related specification to query the database. Of course, the easiest solution would be to retrieve all entities of the desired type from the database and filter that list in memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially as the entity count in our DB increases.

    Proposal: my idea is to create a 'ConversionManager' that translates my specification into a persistence-technique-specific criterion; think of the JPA Predicate class. The service looks as follows:

        public interface JpaSpecificationConversionManager {
            <T> Predicate getPredicateFor(Specification<T> specification, Root<T> root,
                                          CriteriaQuery<?> cq, CriteriaBuilder cb);
            JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter);
        }

    Using this manager, users can register their own conversion logic, isolating the domain-related specification from persistence-specific logic. To minimize the configuration of the manager, I want to use annotations on my converter classes, allowing the manager to register those converters automatically. JPA repository implementations could then use my manager, via dependency injection, to offer a find-by-specification method. Providing find-by-specification should drastically reduce the number of methods on our repository interfaces. In theory this all sounds decent, but I feel like I'm missing something critical. What do you think of my proposal? Does it comply with the DDD way of thinking? And is there already a framework that does something identical to what I just described?
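
    For illustration, a converter for the IsTaskDone example might look like the sketch below. It assumes a convert method on the proposed JpaSpecificationConverter interface and a boolean 'done' attribute on Task, both hypothetical:

        import javax.persistence.criteria.*;

        class IsTaskDoneConverter implements JpaSpecificationConverter<IsTaskDone, Task> {
            public Predicate convert(IsTaskDone specification, Root<Task> root,
                                     CriteriaQuery<?> cq, CriteriaBuilder cb) {
                // Translates the in-memory rule candidate.isDone() into SQL: WHERE done = true
                return cb.isTrue(root.<Boolean>get("done"));
            }
        }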

  • Working with Hibernate/DAO problems

    - by Gandalf StormCrow
    Hello everyone, here is my DAO class:

        public class UsersDAO extends HibernateDaoSupport {

            private static final Log log = LogFactory.getLog(UsersDAO.class);

            protected void initDao() {
                // do nothing
            }

            public void save(User transientInstance) {
                log.debug("saving Users instance");
                try {
                    getHibernateTemplate().saveOrUpdate(transientInstance);
                    log.debug("save successful");
                } catch (RuntimeException re) {
                    log.error("save failed", re);
                    throw re;
                }
            }

            public void update(User transientInstance) {
                log.debug("updating User instance");
                try {
                    getHibernateTemplate().update(transientInstance);
                    log.debug("update successful");
                } catch (RuntimeException re) {
                    log.error("update failed", re);
                    throw re;
                }
            }

            public void delete(User persistentInstance) {
                log.debug("deleting Users instance");
                try {
                    getHibernateTemplate().delete(persistentInstance);
                    log.debug("delete successful");
                } catch (RuntimeException re) {
                    log.error("delete failed", re);
                    throw re;
                }
            }

            public User findById(java.lang.Integer id) {
                log.debug("getting Users instance with id: " + id);
                try {
                    User instance = (User) getHibernateTemplate().get("project.hibernate.Users", id);
                    return instance;
                } catch (RuntimeException re) {
                    log.error("get failed", re);
                    throw re;
                }
            }
        }

    Now I wrote a test class (not a JUnit test) to check that everything works. My users table has a userID column, a 5-character string that is the unique/primary key, plus fields such as address, dob, etc. (15 columns in total). In my test class I instantiated a User and assigned values like:

        User user = new User();
        user.setAddress("some address");

    and so on for all 15 fields. After assigning the data I called the DAO to save it to the database with usersDao.save(user), and save works perfectly. My question is: how do I update/delete users using the same logic? For example, I tried this (to delete a user from the users table):

        User user = new User();
        user.setUserID("1s54f"); // the unique key; no two users share the same userID
        usersDao.delete(user);

    I wanted to delete the user with this key, but this case is obviously different. Can someone explain how to do these operations? Thank you
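
    Hibernate's delete() expects a persistent instance, not a blank object carrying only its key, so the usual pattern is load-then-delete. A hedged sketch; since userID is a 5-character string, it assumes the mapping uses that string as the identifier (note the posted findById takes an Integer, which looks inconsistent with that):

        // Added to UsersDAO: fetch by the string primary key
        public User findByUserID(String userID) {
            return (User) getHibernateTemplate().get("project.hibernate.Users", userID);
        }

        // Caller side:
        User user = usersDao.findByUserID("1s54f");
        if (user != null) {
            usersDao.delete(user);   // now a persistent instance, so delete() works
        }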

  • Want to build a simple SQL admin interface to change a few values in a table

    - by Adam McC
    I am currently building a system on SQL Server 2005. I have a table that holds information about certain insurance schemes, such as overheads and other things. These values will change occasionally, and currently I administer the database directly through Management Studio. I would like to build a simple interface that allows my colleagues to change these values by selecting the company in a dropdown, at which point the current values populate; they can then edit the values and submit them to the database. Is this possible with the Visual Studio tooling supplied with SQL Server 2005, or do I need another product? I am confident that with the help of Stack Overflow and Google I can build this myself, but I need pointing in the right direction as to which environment would be easiest and best to start building in. Many thanks, Adam

  • Using protocol buffers for a comprehensive data strategy for Windows Mobile devices

    - by Steve
    I have started reading some of the posts related to protocol buffers. The serialization method seems very appropriate for transferring data to and from web servers. Has anyone considered using a method like this to save and retrieve data on the mobile device itself (i.e. as a replacement for a traditional database/ORM layer)? Where would the data be persisted? How would the data be queried? Would it make sense to store the data in a traditional database (SQL CE or SQLite) with a few "searchable" columns and then one column for the serialized data? Thoughts? Am I out on a limb here? Thank you!
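
    The hybrid layout suggested at the end of the question might look like the sketch below: indexed relational columns for the fields you query on, plus one opaque column holding the protobuf-encoded message. This is a hedged illustration for SQL Server CE with hypothetical names, not an established pattern from the question:

        -- Searchable fields stay relational; everything else lives in the payload
        CREATE TABLE Customer (
            CustomerId INT IDENTITY PRIMARY KEY,
            Name       NVARCHAR(100),    -- queryable column
            City       NVARCHAR(50),     -- queryable column
            Payload    IMAGE             -- protobuf-serialized message bytes
        );

        -- Typical access path: filter on the indexed columns, decode Payload in code
        SELECT Payload FROM Customer WHERE City = 'Oslo';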

  • MySQL ALTER TABLE on very large table - is it safe to run it?

    - by Timothy Mifsud
    I have a MySQL database with one particular MyISAM table of over 4 million rows. I update this table about once a week with about 2000 new rows. After updating, I perform the following statement: ALTER TABLE x ORDER BY PK DESC, i.e. I order the table by the primary key field, descending. This has not given me any problems on my development machine (Windows with 3 GB of memory), and it has succeeded three times on the production Linux server (with 512 MB of RAM, producing the sorted table in about 6 minutes each time), but the last time I tried it I had to stop the query after about 30 minutes and rebuild the database from a backup. I have started to wonder whether a 512 MB server can cope with that statement on such a large table, as I have read that a temporary table is created to perform the ALTER TABLE command. And if it can be run safely, what should the expected time for the alteration be? Thanks in advance, Tim

  • Creating a triple store query using SQL - how to find all triples that have a common predicate and object

    - by Ankur
    I have a database that acts like a triple store, except that it is just a simple MySQL database. I want to select all triples that have a common predicate and object (see this info about RDF and triples). I can't seem to work out the SQL. If I had just a single predicate and object to match, I would do: select TRIPLE from TRIPLES where PREDICATE="predicateName" and OBJECT="objectName" But if I have a list (HashMap) of many (predicateName, objectName) pairs, I am not sure what I need to do. Please let me know if I need to provide more info; I am not sure that I have made this quite clear, but I am wary of providing too much info and confusing the issue.
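
    MySQL accepts row constructors in an IN list, which maps naturally onto a set of (predicate, object) pairs; the values below are hypothetical stand-ins for the HashMap's entries:

        SELECT TRIPLE
        FROM TRIPLES
        WHERE (PREDICATE, OBJECT) IN (
            ('hasColor', 'red'),        -- one row constructor per HashMap entry
            ('hasShape', 'circle'),
            ('type',     'Widget')
        );

    The same condition can be written as OR'd pairs of (PREDICATE = ... AND OBJECT = ...) if the row-constructor form does not use the table's indexes well on older MySQL versions.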

  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week, and a full backup is done at the weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore pulls in files from the weekend backup even if they did not exist at the point in time being restored. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt

    The folder looked like this at the weekend, when the full backup ran (Test3.txt existed then but had been deleted before Tuesday):

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt
        Test3.txt

    The actual folder restored looks like this:

        Folder1 (folder name)
        Test.txt
        Test1.txt
        Test2.txt
        Test3.txt

    Test3.txt should not have been restored, because it did not exist at the point in time of the restore. Is there a setting somewhere that we are missing? The folder in question is 200 GB; the example above is simplified. I realise this is a basic question.

  • How to run a foreach loop with a ternary condition

    - by I Like PHP
    I have two foreach loops to display some data, but I want to use a single foreach based on the database result. That is, if the database returns any rows, then foreach ($first as $fk => $fv) should execute; otherwise foreach ($other as $ok) should execute. I am using the ternary operator below, which gives a parse error:

        $n = $db->numRows($taskData); // database results
        <?php ($n) ? foreach ($first as $fk => $fv) : foreach ($other as $ok) { ?>
            <table><tr><td>......some data...</td></tr></table>
        <?php } ?>

    Please suggest how to handle such a condition via the ternary operator, or any other idea. Thanks
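
    foreach is a statement, not an expression, so it cannot appear inside a ternary; what can be selected with the ternary is the array itself. A minimal sketch of that approach:

        <?php
        // Pick the data set with the ternary, then run a single loop
        $rows = ($n > 0) ? $first : $other;
        foreach ($rows as $key => $value) {
            echo '<table><tr><td>' . $value . '</td></tr></table>';
        }

    This works for both arrays, since the key variable can simply be ignored for the numerically indexed one.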

  • How to use a variable to specify filegroup in MSSQL

    - by gt
    I want to alter a table to add a constraint during an upgrade on an MSSQL database. This table is normally indexed on a filegroup called 'MY_INDEX', but it may also be on a database without this filegroup, in which case I want the indexing to be done on the 'PRIMARY' filegroup. I tried the following code to achieve this:

        DECLARE @fgName AS VARCHAR(10)

        SET @fgName = CASE WHEN EXISTS (SELECT groupname FROM sysfilegroups WHERE groupname = 'MY_INDEX')
                           THEN QUOTENAME('MY_INDEX')
                           ELSE QUOTENAME('PRIMARY')
                      END

        ALTER TABLE [dbo].[mytable]
        ADD CONSTRAINT [PK_mytable] PRIMARY KEY ( [myGuid] ASC )
        ON @fgName -- fails: 'incorrect syntax'

    However, the last line fails, as it appears a filegroup cannot be specified by a variable. Is this possible?
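
    DDL will not accept a variable in the ON clause, but the statement can be assembled as a string and executed dynamically. A sketch building on the @fgName computed above:

        DECLARE @sql NVARCHAR(MAX);

        SET @sql = N'ALTER TABLE [dbo].[mytable] '
                 + N'ADD CONSTRAINT [PK_mytable] PRIMARY KEY ([myGuid] ASC) '
                 + N'ON ' + @fgName;        -- safe to splice: already QUOTENAME()d above

        EXEC sp_executesql @sql;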

  • SQL Server - Error when trying to reference a .mdf file

    - by Amokrane
    Hi, for an NUnit test I need to reference a .mdf file from a .config file. Unfortunately, I get the following error message:

        The FOR ATTACH option requires that at least the primary file be specified.
        An attempt to attach an auto-named database for file C:\....\*.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share.

    I looked for this error on Google but didn't find anything that helped me solve my problem. Any idea? Thank you
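
    This error commonly appears when the connection string attaches the file without giving the database a name, so SQL Server tries to auto-name it. A hedged sketch of a .config entry that names the database explicitly (the file name, catalog name and instance are placeholders):

        <connectionStrings>
          <add name="TestDb"
               connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyTests.mdf;Initial Catalog=MyTests;Integrated Security=True;User Instance=True"
               providerName="System.Data.SqlClient" />
        </connectionStrings>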

  • Querying datatable to get rows with a certain value in the first column

    - by user1776590
    I am currently using the code below, and am wondering whether it is possible to query based on an entry's value instead. At the moment it just returns a fixed number of rows:

        var q = sqlData.AsEnumerable().Take(2);

    The data comes in from a database and is imported into the DataTable, but this only lets me select the first two rows. I would like to query the DataTable so that I get the rows I require based on a value in the table itself (e.g. find the rows in the table whose first column holds a certain value and pull that information out).
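
    LINQ to DataSet supports this directly: replace Take() with a Where() over the rows, reading the column through Field<T>. A minimal sketch, assuming the first column is an int and 42 is the value being searched for (requires a reference to System.Data.DataSetExtensions):

        using System.Data;   // DataTable, DataRowExtensions
        using System.Linq;   // AsEnumerable, Where

        // All rows whose first column (ordinal 0) equals the target value
        var q = sqlData.AsEnumerable()
                       .Where(r => r.Field<int>(0) == 42);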

  • Serializing ActiveRecord objects without storing their attributes?

    - by Allan Grant
    I'm working on a problem where I need to store serialized hierarchies of Ruby objects in the database. Many of the objects that need to be saved are ActiveRecord objects with a lot of attributes. Instead of saving the entire objects and then refreshing their attributes from the DB when I load them (in case they changed, which is likely), it would be easier to just store the references (class and database id) for these objects. Does anyone know if there's already a way to do this in Rails, or an existing gem for it? I wanted to check whether something exists before spending a ton of time hacking on it.
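
    Whether or not a gem exists, the class-plus-id round trip is small enough to sketch directly; constantize is Rails' standard route from a class name back to the class. A minimal sketch of the idea (method names are hypothetical):

        # Dump: record only what is needed to find the object again
        def to_ref(record)
          { "class" => record.class.name, "id" => record.id }
        end

        # Load: resolve the class by name, fetch fresh attributes from the DB
        def from_ref(ref)
          ref["class"].constantize.find(ref["id"])
        end

        # ref  = to_ref(user)     #=> {"class" => "User", "id" => 7}
        # user = from_ref(ref)    # reloaded with current attributes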

  • How to organise a PHP-based website

    - by bsandrabr
    I am putting up my PHP/MySQL website, and this is my scenario: users are grouped into sites, each site with its own unique database, and there will be about 40 users per site. The two options I'm trying to decide between are: (1) have one central website running the PHP and directing users off to their own databases, or (2) give each site its own subdomain, each with its own PHP in htdocs. I don't even know whether option 2 is possible or sensible, but if it is, would it make any difference to performance, given that everything is run by the same server? Any other ideas or advice would be much appreciated, as I want to organise this the best way from the start.
