Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.

Page 166/189 | < Previous Page | 162 163 164 165 166 167 168 169 170 171 172 173  | Next Page >

  • What are the options for overriding Django's cascading delete behaviour?

    - by Tom
    Django models generally handle the ON DELETE CASCADE behaviour quite adequately (in a way that works on databases that don't support it natively). However, I'm struggling to discover the best way to override this behaviour where it is not appropriate, for example in the following scenarios:

    - ON DELETE RESTRICT (i.e. prevent deleting an object if it has child records)
    - ON DELETE SET NULL (i.e. don't delete a child record, but set its parent key to NULL instead to break the relationship)
    - Update other related data when a record is deleted (e.g. deleting an uploaded image file)

    These are the potential ways to achieve this that I am aware of:

    - Override the model's delete() method. While this sort of works, it is sidestepped when records are deleted via a QuerySet. Also, every model's delete() must be overridden to make sure Django's code is never called, and super() can't be called as it may use a QuerySet to delete child objects.
    - Use signals. This seems ideal as they are fired both when deleting the model directly and when deleting via a QuerySet. However, there is no way to prevent a child object from being deleted, so signals can't be used to implement ON DELETE RESTRICT or SET NULL.
    - Use a database engine that handles this properly (what does Django do in this case?)
    - Wait until Django supports it (and live with bugs until then...)

    It seems like the first option is the only viable one, but it's ugly, throws the baby out with the bath water, and risks missing something when a new model/relation is added. Am I missing something? Any recommendations?
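
    For reference, the kind of behaviour I'm after, sketched with the on_delete argument that Django 1.3+ adds to ForeignKey (which, if I understand it correctly, covers the first two scenarios) plus a pre_delete signal for clean-up - model and helper names below are hypothetical:

        from django.db import models
        from django.db.models.signals import pre_delete
        from django.dispatch import receiver

        class Parent(models.Model):
            name = models.CharField(max_length=50)

        class RestrictedChild(models.Model):
            # ON DELETE RESTRICT: deleting a Parent that still has children
            # raises ProtectedError instead of cascading
            parent = models.ForeignKey(Parent, on_delete=models.PROTECT)

        class NullableChild(models.Model):
            # ON DELETE SET NULL: the child survives, its parent key is nulled out
            parent = models.ForeignKey(Parent, null=True, on_delete=models.SET_NULL)
            image_path = models.CharField(max_length=255)

        @receiver(pre_delete, sender=NullableChild)
        def delete_uploaded_image(sender, instance, **kwargs):
            # third scenario: clean up related data whenever a record goes away
            remove_file(instance.image_path)  # remove_file() is a hypothetical helper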

    Read the article

  • Atomic int writes on file

    - by Waneck
    Hello! I'm writing an application that will have to handle many concurrent accesses, both from threads and from processes, so no mutexes or locks should be applied to this. To keep the use of locks to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the information it replaces is changed to refer to the new data. So I only need to implement a small locking scheme to change this one int so it refers to the new address. What is the best way to do it? I was thinking about putting a flag before the address, so that when it's set, readers spin until it's released. But I'm afraid that isn't atomic at all, is it? E.g. a reader reads the flag while it is unset; at the same moment a writer sets the flag and changes the value of the int, and the reader may read an inconsistent value! I'm looking for locking techniques, but all I find is either thread locking techniques or ways to lock an entire file, not individual fields. Is it not possible to do this? How do append-only databases handle this? Thanks! Cauê
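
    The pattern I keep circling back to, sketched below in Python: append and flush the new record first, then publish it by overwriting one small, aligned header slot in a single write. The assumption (which I'd still want to verify per platform and filesystem) is that one aligned 8-byte pwrite is effectively atomic for readers, so they see either the old address or the new one, never a torn value; writers would still have to serialize among themselves, e.g. with a byte-range lock on just the header:

        import os
        import struct

        HEADER_SIZE = 8  # one 64-bit slot at offset 0: "address of the current record"

        def append_and_publish(path, payload):
            fd = os.open(path, os.O_RDWR | os.O_CREAT)
            try:
                end = os.lseek(fd, 0, os.SEEK_END)
                if end < HEADER_SIZE:                       # brand-new file: reserve the header slot
                    os.pwrite(fd, b"\x00" * HEADER_SIZE, 0)
                    end = HEADER_SIZE
                os.pwrite(fd, payload, end)                 # 1. append the new data
                os.fsync(fd)                                #    make it durable before publishing
                os.pwrite(fd, struct.pack("<Q", end), 0)    # 2. flip the pointer in one write
                os.fsync(fd)
            finally:
                os.close(fd)

        def current_record_offset(path):
            fd = os.open(path, os.O_RDONLY)
            try:
                # readers never block: they just read whichever pointer value is current
                return struct.unpack("<Q", os.pread(fd, HEADER_SIZE, 0))[0]
            finally:
                os.close(fd)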

    Read the article

  • SQLite for personal use

    - by ALife
    What are the applications for your personal use that need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I am planning to try, since I know barely anything about databases except some simple programming years ago. I've learned that SQLite is good for personal use, but embarrassingly I don't see any application except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my references and softcopies of books, and I feel iTunes' music/media management is OK since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

    Read the article

  • How to create refresh statements for TableAdapter objects in Visual Studio?

    - by Mark Wilkins
    I am working on developing an ADO.NET data provider and an associated DDEX provider. I am unable to convince the Visual Studio TableAdapter Configuration Wizard to generate SQL statements to refresh the data table after inserts and updates. It generates the insert and delete statements but will not produce the select statements to do the refresh. The functionality referred to can be accessed by dropping a table from the Server Explorer (inside Visual Studio) onto a DataSet (e.g., DataSet1.xsd). That creates a TableAdapter object and configures SELECT, UPDATE, DELETE, and INSERT statements. If you right-click on the TableAdapter object, the context menu has a "Configure" option that starts the "TableAdapter Configuration Wizard". The first dialog of that wizard has an Advanced Options button, which leads to an option titled "Refresh the data table". When used with SQL Server tables, that option causes a statement of the form "select field1, field2, …" to be appended to the TableAdapter's InsertCommand and UpdateCommand. Do you have any idea what type, property, or interface might need to be exposed from the DDEX provider (or maybe the ADO.NET data provider) in order to make Visual Studio add those refresh statements to the update/insert commands? The MSDN documentation for the Advanced SQL Generation Options dialog box has a note stating, "Refreshing the data table is only supported on databases that support batching of SQL statements." This seems to imply that a .NET data provider might need to expose some property indicating such behavior is supported, but I cannot find it. Any ideas?

    Read the article

  • How to make safe frequent DataSource switches for AbstractRoutingDataSource?

    - by serg555
    I implemented Dynamic DataSource Routing for Spring+Hibernate according to this article. I have several databases with the same structure and I need to select which db will run each specific query. Everything works fine on localhost, but I am worrying about how this will hold up in a real web site environment. They are using a static context holder to determine which datasource to use:

        public class CustomerContextHolder {
            private static final ThreadLocal<CustomerType> contextHolder = new ThreadLocal<CustomerType>();

            public static void setCustomerType(CustomerType customerType) {
                Assert.notNull(customerType, "customerType cannot be null");
                contextHolder.set(customerType);
            }

            public static CustomerType getCustomerType() {
                return (CustomerType) contextHolder.get();
            }

            public static void clearCustomerType() {
                contextHolder.remove();
            }
        }

    It is wrapped inside some ThreadLocal container, but what exactly does that mean? What will happen when two web requests call this piece of code in parallel:

        CustomerContextHolder.setCustomerType(CustomerType.GOLD);
        //<another user will switch customer type here to CustomerType.SILVER in another request>
        List<Item> goldItems = catalog.getItems();

    Is every web request wrapped into its own thread in Spring MVC? Will CustomerContextHolder.setCustomerType() changes be visible to other web users? My controllers have synchronizeOnSession=true. How to make sure that nobody else will switch the datasource until I run the required query for the current user? Thanks.
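
    To check my own understanding of what the ThreadLocal wrapper buys here, a tiny stand-alone analogue in Python (threading.local plays the same role as Java's ThreadLocal): each thread gets its own slot, so a value set while handling one request is invisible to the thread handling another - which only helps, of course, if the container really does run each request on its own thread and the value is set/cleared per request:

        import threading

        context = threading.local()          # analogue of ThreadLocal<CustomerType>

        def handle_request(customer_type, results):
            context.customer_type = customer_type          # visible to this thread only
            # ... the routing logic would read context.customer_type here ...
            results[threading.current_thread().name] = context.customer_type

        results = {}
        threads = [
            threading.Thread(target=handle_request, args=("GOLD", results), name="req-1"),
            threading.Thread(target=handle_request, args=("SILVER", results), name="req-2"),
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(results)   # {'req-1': 'GOLD', 'req-2': 'SILVER'} - no cross-talk between requests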

    Read the article

  • question about MySQL database migration

    - by WilliamLou
    Hi there: I have a MySQL database with several tables on a live server, and now I would like to migrate this database to another server. Of course, the migration I mean here involves some schema changes, for example: adding new columns to several tables, adding new tables, etc. Now, the only method I can think of is to use a PHP/Python script (the two languages I know) to connect to the two databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, but the extra column will have a default value of 0 for all the old rows. My script still needs to dump the data row by row and insert each row into the new database. Is there any tool or a better method than writing a script yourself? Here, I don't need to worry about multithreaded writing problems etc.; the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
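
    The direction I'm leaning, sketched in Python with made-up host names and credentials: bulk-copy the whole database by piping mysqldump straight into the new server, then apply the schema changes there, so the new column's default takes care of the old rows without any row-by-row work:

        import subprocess

        # made-up connection details - substitute the real servers
        OLD = dict(host="old-server", user="root", password="secret", db="mydb")
        NEW = dict(host="new-server", user="root", password="secret", db="mydb")

        # 1. bulk-copy the whole database instead of inserting row by row
        dump = subprocess.Popen(
            ["mysqldump", "--opt", "-h", OLD["host"], "-u", OLD["user"],
             "--password=" + OLD["password"], OLD["db"]],
            stdout=subprocess.PIPE)
        subprocess.check_call(
            ["mysql", "-h", NEW["host"], "-u", NEW["user"],
             "--password=" + NEW["password"], NEW["db"]],
            stdin=dump.stdout)
        dump.stdout.close()
        dump.wait()

        # 2. apply the schema changes on the new server afterwards, e.g.:
        #    ALTER TABLE A ADD COLUMN new_col INT NOT NULL DEFAULT 0;
        #    old rows pick up the default automatically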

    Read the article

  • MySQL FULLTEXT not working

    - by Ross
    I'm attempting to add searching support for my PHP web app using MySQL's FULLTEXT indexes. I created a test table (using the MyISAM type, with a single text field a) and entered some sample data. Now, if I'm right, the following query should return both those rows:

        SELECT * FROM test WHERE MATCH(a) AGAINST('databases')

    However it returns none. I've done a bit of research and I'm doing everything right as far as I can tell - the table is a MyISAM table, the FULLTEXT indexes are set. I've tried running the query from the prompt and from phpMyAdmin, with no luck. Am I missing something crucial?

    UPDATE: Ok, while Cody's solution worked in my test case it doesn't seem to work on my actual table:

        CREATE TABLE IF NOT EXISTS `uploads` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `name` text NOT NULL,
          `size` int(11) NOT NULL,
          `type` text NOT NULL,
          `alias` text NOT NULL,
          `md5sum` text NOT NULL,
          `uploaded` datetime NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=6 ;

    And the data I'm using:

        INSERT INTO `uploads` (`id`, `name`, `size`, `type`, `alias`, `md5sum`, `uploaded`) VALUES
        (1, '04 Sickman.mp3', 5261182, 'audio/mp3', '1', 'df2eb6a360fbfa8e0c9893aadc2289de', '2009-07-14 16:08:02'),
        (2, '07 Dirt.mp3', 5056435, 'audio/mp3', '2', 'edcb873a75c94b5d0368681e4bd9ca41', '2009-07-14 16:08:08'),
        (3, 'header_bg2.png', 16765, 'image/png', '3', '5bc5cb5c45c7fa329dc881a8476a2af6', '2009-07-14 16:08:30'),
        (4, 'page_top_right2.png', 5299, 'image/png', '4', '53ea39f826b7c7aeba11060c0d8f4e81', '2009-07-14 16:08:37'),
        (5, 'todo.txt', 392, 'text/plain', '5', '7ee46db77d1b98b145c9a95444d8dc67', '2009-07-14 16:08:46');

    The query I'm now running is:

        SELECT * FROM `uploads` WHERE MATCH(name) AGAINST ('header' IN BOOLEAN MODE)

    which should return row 3, header_bg2.png. Instead I get another empty result set. My options for boolean searching are below:

        mysql> show variables like 'ft_%';
        +--------------------------+----------------+
        | Variable_name            | Value          |
        +--------------------------+----------------+
        | ft_boolean_syntax        | + -><()~*:""&| |
        | ft_max_word_len          | 84             |
        | ft_min_word_len          | 4              |
        | ft_query_expansion_limit | 20             |
        | ft_stopword_file         | (built-in)     |
        +--------------------------+----------------+
        5 rows in set (0.02 sec)

    "header" is within the word length restrictions and I doubt it's a stop word (I'm not sure how to get the list). Any ideas?
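
    One thing that jumps out at me now that the schema is dumped: that CREATE TABLE defines no FULLTEXT index on name at all, which by itself would explain the empty result sets. A quick sketch of what I plan to try (Python with PyMySQL and placeholder credentials - the two SQL statements are the interesting part):

        import pymysql

        conn = pymysql.connect(host="localhost", user="user", password="pass", db="test")
        with conn.cursor() as cur:
            # the dumped schema has no FULLTEXT index, so MATCH() has nothing to search
            cur.execute("ALTER TABLE uploads ADD FULLTEXT INDEX ft_name (name)")
            # the trailing * also matches header_bg2 in case the whole token is indexed as one word
            cur.execute("SELECT * FROM uploads "
                        "WHERE MATCH(name) AGAINST ('header*' IN BOOLEAN MODE)")
            print(cur.fetchall())
        conn.close()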

    Read the article

  • How can I access mainframe data with .Net applications and SQL Queries?

    - by orandov
    We have a large amount of data stored on an IBM mainframe using VSAM files. A lot of this data is dropped on the network every night in the form of text files to be processed and dumped into FoxPro and SQL Server databases. There are also many text files produced nightly by custom applications that get uploaded to the mainframe to keep everything in sync. Keeping everything in sync is very tricky, to say the least. We are not getting rid of the mainframe any time soon, and we would like to replace all the nightly batch processing with real-time access to the mainframe data. We would like to be able to: read data directly from the mainframe and produce reports based on it, possibly using SQL queries; and read and write data from custom .Net applications. We are not looking for a new platform to interface with the mainframe like Information Builders offers. We don't want to build application modules or reports with new "Business Intelligence" tools. We already know how to generate reports and write custom applications using SQL, .Net, Visual Studio, etc. All we are looking for is some sort of adapter to connect to our mainframe data. Any ideas are appreciated.

    Read the article

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :) Imagine the following scenario: a user can create a post and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster:

    1. I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20 I lock the thread.
    2. After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread.

    For further information: I am using a SQL Server database and the LINQ to SQL entity model. I'd be glad if you could tell me your opinions on the two approaches or share another, faster approach. Best Regards, Kiril
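
    For approach 1, I suppose the check and the increment would have to happen in a single statement, otherwise two simultaneous replies could both slip past the limit; a rough sketch of what I mean (Python with a pyodbc-style connection, assuming the initial post's row ID - called PostID here hypothetically - doubles as the ThreadID, and a hypothetical Body column):

        def try_add_reply(conn, thread_id, body):
            cur = conn.cursor()
            # bump the counter only while it is still under the cap: atomic check-and-increment
            cur.execute(
                "UPDATE Posts SET Counter = Counter + 1 "
                "WHERE PostID = ? AND Counter < 20",
                (thread_id,))
            if cur.rowcount == 0:
                conn.rollback()
                return False          # thread already has 20 replies: treat it as locked
            cur.execute(
                "INSERT INTO Posts (ThreadID, Body) VALUES (?, ?)",
                (thread_id, body))
            conn.commit()
            return True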

    Read the article

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR

        SET @table_cursor = CURSOR FAST_FORWARD
        FOR SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query

            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints a statement like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio it works fine. But when the script tries to run it with exec I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'.
        No entry found with that name. Make sure that the name is entered correctly.

    Hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
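
    One detail I've since read about and want to note here: exec @query without parentheses apparently asks SQL Server to run a stored procedure whose name is the contents of @query, which would explain the error; EXEC (@query) or sp_executesql is what runs the string as a batch. As an alternative I'm also considering driving the copy from outside the server entirely, something like this Python/pyodbc sketch (connection string and database names are placeholders):

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;"
                              "DATABASE=TargetDb;Trusted_Connection=yes")
        cur = conn.cursor()
        cur.execute("SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
                    "WHERE TABLE_TYPE = 'BASE TABLE'")
        tables = [row.TABLE_NAME for row in cur.fetchall()]
        for table in tables:
            # the same INSERT ... SELECT the cursor builds, issued one table at a time
            cur.execute("INSERT INTO [{0}] SELECT * FROM MyDataBase.dbo.[{0}]".format(table))
        conn.commit()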

    Read the article

  • Can I copy an entire Magento application to a new server?

    - by Gapton
    I am quite new to Magento, and I have a case where a client has a running ecommerce website built with Magento, running on an Amazon AWS EC2 server. If I can locate the web root (/var/www/) and download everything from it (say via FTP), I should be able to boot up my virtual machine with LAMP installed, put every single file in there, and it should run, right? So I did just that, and Magento is giving me an error. I am guessing there are configs somewhere that need to be changed in order to make it work, or even a lot of paths that need to be changed, etc. The databases also need to be replicated. What are the typical things I should get right? The database is one, and the EC2 instance uses nginx while I use plain Apache; I guess that won't work straight out of the box, and I probably will have to install nginx and a lot of other stuff. But basically, if I set up the environment right, it should run. Is my assumption correct? Thanks for answering!

    Read the article

  • Database nesting layout confusion

    - by arzon
    I'm no expert in databases and a beginner in Rails, so here goes something which kinda confuses me... Assume I have three classes as a sample (note that no effort has been made to address any possible Rails reserved-words issue in the sample):

        class File < ActiveRecord::Base
          has_many :records, :dependent => :destroy
          accepts_nested_attributes_for :records, :allow_destroy => true
        end

        class Record < ActiveRecord::Base
          belongs_to :file
          has_many :users, :dependent => :destroy
          accepts_nested_attributes_for :users, :allow_destroy => true
        end

        class User < ActiveRecord::Base
          belongs_to :record
        end

    Upon entering records, the database contents will appear as below. My issue is that if there are a lot of Files for the same Record, there will be duplicate record names. This will also be true if there are multiple Records for the same user in the Users table. I was wondering if there is a better way than this, so as to have one or more Files point to a single Record entry and one or more Records point to a single User. BTW, the File names are unique.

    Files table:

        id  name
        1   name1
        2   name2
        3   name3
        4   name4

    Records table:

        id  file_id  record_name  record_type
        1   1        ForDaisy1    ...
        2   2        ForDonald1   ...
        3   3        ForDonald2   ...
        4   4        ForDaisy1    ...

    Users table:

        id  record_id  username
        1   1          Daisy
        2   2          Donald
        3   3          Donald
        4   4          Daisy

    Is there any way to optimize the database to prevent duplication of entries, or is this really the correct and proper behavior? I spread the database out into different tables to be able to easily add new columns in the future.

    Read the article

  • SQL Server Database In Single User Mode after Failover

    - by jlichauc
    Here is a weird situation we experienced with a SQL Server 2008 Database Mirroring Failover. We have a pair of mirrored databases running in high-availability mode and both the principal and mirror showed as synchronized. As part of some maintenance I triggered a manual failover of the principal to the mirror. However after the failover the principal was now in single-user mode instead of the expected "Principal/Synchronized" state we usually get. The database had been in multi-user mode on the previous principal before this had happened. We ended up stopping all applications, restarting the SQL Server instances, and executing "ALTER DATABASE ... SET MULTI_USER" to bring the database back to the expected "Principal/Synchronized" state in a multi-user mode. Question. Does anyone know where SQL Server stores information about whether a database should be in single-user mode or not? I'm wondering if there is some system database or table that has this setting recorded somewhere. In particular we had an incident once with the database on the original principal (the one I was failing over to) where when trying to detach the database it was put into single-user mode. I'm wondering if that setting is cached somewhere and is the reason that SQL Server put it back into single-user mode after a failover.

    Read the article

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for some reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy. So I thought I should use LINQ to SQL with MySQL, but somewhere I read that it is not directly supported; however, the MySQL .NET connector adds support for the Entity Framework. I don't know what the pros and cons of that are. I would love it if I could implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance. Someone told me that if we use Entity Framework it fetches a lot of data and then filters it. Is that right? So the questions basically are:

    - Is LINQ with MySQL possible? If yes, where can I get more details on it?
    - What are the pros and cons of using Entity Framework with MySQL?
    - Will it be easy to access data using Entity Framework with MySQL?
    - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer (when I use Entity Framework with MySQL)?
    - Does it fetch a huge amount of data from the database and then apply filters on it?

    If that sounds like too many questions from my side, then if you can just let me know what you would do (with a considerable reason) in this situation as an experienced person in this area, that should answer my question.

    Read the article

  • Database Error Handling: What if You have to Call Outside service and the Transaction Fails?

    - by Ngu Soon Hui
    We all know that we can always wrap our database calls in a transaction (with or without a proper ORM), in a form like this:

        $con = Propel::getConnection(EventPeer::DATABASE_NAME);
        try {
            $con->begin();
            // do your update, save, delete or whatever here.
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    This way guarantees that if the transaction fails, the database is restored to the correct state. But the problem is this: let's say that when I do a transaction, in addition to that transaction I need to update another database (an example would be when I update an entry in a column in databaseA, another entry in a column in databaseB must be updated). How do I handle this case? Let's say this is my code, and I have three databases that need to be updated (dbA, dbB, dbC):

        $con = Propel::getConnection("dbA");
        try {
            $con->begin();
            // update to dbA
            // update to dbB
            // update to dbC
            $con->commit();
        } catch (PropelException $e) {
            $con->rollback();
            throw $e;
        }

    If dbC fails, I can roll back dbA, but I can't roll back dbB. I think this problem should be database independent. And since I am using an ORM, this should be ORM independent as well. Update: Some of the database transactions are wrapped in the ORM, some are using naked PDO, OLE DB (or whatever bare-minimum language-provided database calls), so my solution has to take care of this. Any idea?
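
    The best pattern I've found so far (short of a real distributed transaction through XA/MSDTC) is to open a transaction on every connection first, do all the work, and commit them together at the very end, rolling everything back if anything throws; here is a minimal Python sketch of that shape, with two throwaway SQLite files standing in for dbA and dbB - it narrows the failure window to the gap between the commits rather than eliminating it:

        import sqlite3

        # throwaway stand-ins for dbA and dbB
        con_a = sqlite3.connect("dbA.sqlite", isolation_level=None)  # autocommit mode; we issue
        con_b = sqlite3.connect("dbB.sqlite", isolation_level=None)  # BEGIN/COMMIT ourselves

        for con in (con_a, con_b):
            con.execute("CREATE TABLE IF NOT EXISTS t (v INTEGER)")

        try:
            con_a.execute("BEGIN")
            con_b.execute("BEGIN")
            con_a.execute("INSERT INTO t (v) VALUES (1)")
            con_b.execute("INSERT INTO t (v) VALUES (2)")
            # commit as late and as close together as possible; a crash exactly between
            # these two lines can still leave the databases out of step
            con_a.execute("COMMIT")
            con_b.execute("COMMIT")
        except Exception:
            for con in (con_a, con_b):
                try:
                    con.execute("ROLLBACK")
                except sqlite3.OperationalError:
                    pass  # no transaction left open on this connection
            raise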

    Read the article

  • Selecting a whole database over an individual table to output to file

    - by Daniel Wrigley
    :::::::: EDIT ::::::::

    New code for people to have a look at; one question I have with this is: where do I set where the *.gz file is saved?

        $backupFile = $dbname . date("Y-m-d-H-i-s") . '.gz';
        $command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass $dbname | gzip > $backupFile";
        system($command);

    Also, why the hell can you not reply to your own post with an answer? :(

    :::::::: EDIT ::::::::

    OK, I'm having trouble finding out how to select a full database for backup as an *.sql file rather than only an individual table. On the localhost I have several databases, with one named "foo", and it is that which I want to back up, not any of the individual tables inside the database "foo". The code to connect to the database:

        //Database Information
        $dbhost = "localhost";
        $dbname = "foo";
        $dbuser = "bar";
        $dbpass = "rulz";

        //Connect to database
        mysql_connect ($dbhost, $dbuser, $dbpass) or die("Could not connect: ".mysql_error());
        mysql_select_db($dbname) or die(mysql_error());

    The code to back up the database:

        // Grab the time to know when this post was submitted
        $time = date('Y-m-d-H-i-s');
        $tableName = 'foo';
        $backupFile = '/sql/backup/'. $time .'.sql';
        $query = "SELECT * INTO OUTFILE '". $backupFile ."' FROM ". $tableName ."";
        $result = mysql_query($query)or die("Database query died: " . mysql_error());

    My brain is hurting near the end of the day, so no doubt I've missed something very obvious. Thanks in advance to anyone helping me out.
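
    For my own notes: the .gz ends up wherever the relative $backupFile path resolves, i.e. the script's working directory, unless the path is made absolute. A small Python sketch of the same backup with an explicit output location and placeholder credentials (the password is passed as --password= so there is no ambiguity about the space after -p):

        import datetime
        import subprocess

        db = dict(host="localhost", user="bar", password="rulz", name="foo")   # placeholders
        backup_file = "/sql/backup/{}-{:%Y-%m-%d-%H-%M-%S}.sql.gz".format(
            db["name"], datetime.datetime.now())

        dump = subprocess.Popen(
            ["mysqldump", "--opt", "-h", db["host"], "-u", db["user"],
             "--password=" + db["password"], db["name"]],
            stdout=subprocess.PIPE)
        with open(backup_file, "wb") as out:
            subprocess.check_call(["gzip"], stdin=dump.stdout, stdout=out)
        dump.stdout.close()
        dump.wait()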

    Read the article

  • Why use Entity Framework over Linq2SQL ...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have already used L2S successfully a couple of times. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration on EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL Server databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard he said "No!"

    Read the article

  • Rails Binary Stream support

    - by Craig Walker
    I'm going to be starting a project soon that requires support for large-ish binary files. I'd like to use Ruby on Rails for the webapp, but I'm concerned about the BLOB support. In my experience with other languages, frameworks, and databases, BLOBs are often overlooked and thus have poor, difficult, and/or buggy functionality. Does RoR support BLOBs adequately? Are there any gotchas that creep up once you're already committed to Rails? BTW: I want to be using PostgreSQL and/or MySQL as the backend database. Obviously, BLOB support in the underlying database is important. For the moment, I want to avoid focusing on the DB's BLOB capabilities; I'm more interested in how Rails itself reacts. Ideally, Rails should be hiding the details of the database from me, and so I should be able to switch from one to the other. If this is not the case (i.e. there's some problem with using Rails with a particular DB) then please do mention it. UPDATE: Also, I'm not just talking about ActiveRecord here. I'll need to handle binary files on the HTTP side (file upload, effectively). That means getting access to the appropriate HTTP headers and streams via Rails. I've updated the question title and description to reflect this.

    Read the article

  • Changing character encoding in MySQL, PHP scripts, HTML

    - by Sandman
    So, I have built on this system for quite some time, and it is currently outputting Latin1 (ISO-8859-1) to the web browser. These are the components:

    - MySQL - all data is stored with the Latin1 character set
    - PHP - all PHP text files are stored on disk with Latin1 encoding
    - HTML - the output has the http-equiv="content-type" content="text/html; charset=iso-8859-1" meta tag

    So, I'm trying to understand how the encoding of the different parts comes into play in my workflow. If I open a PHP script, change its encoding within the text editor to UTF-8, save it back to disk and reload the web browser, the text is all messed up - unless the text comes from the DB. If I change the encoding of the DB to UTF-8 and keep the PHP files in Latin1, I have to use utf8_decode() for the data to display correctly. And if I change the HTML code, the browser will read it incorrectly. So yeah, I realise that if I want to "upgrade" to UTF-8, I have to update all three parts of this setup for it to work correctly, but since it's a huge system with some 180k lines of PHP code and millions of posts in a lot of databases/tables, I don't want to start something like this without understanding everything correctly. What haven't I thought about? What could mess this up beyond fixing? What are the procedures for changing the encoding of an entire MySQL installation, and what's the easiest way to change the encoding of hundreds or thousands of PHP files on disk? The META tag is luckily added dynamically, so I'll change that in one place only :) Let me hear about your experiences with this.
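
    From what I've gathered so far, the per-table conversion is ALTER TABLE ... CONVERT TO CHARACTER SET utf8, which rewrites both the column definitions and the stored data; a sketch of looping it over one database (Python with PyMySQL and placeholder credentials), plus iconv for the PHP sources:

        import pymysql

        conn = pymysql.connect(host="localhost", user="root", password="secret",
                               db="mydb", charset="utf8")   # placeholder credentials
        with conn.cursor() as cur:
            cur.execute("SHOW TABLES")
            for (table,) in cur.fetchall():
                # converts the column character sets and re-encodes the stored data
                cur.execute("ALTER TABLE `{}` CONVERT TO CHARACTER SET utf8 "
                            "COLLATE utf8_general_ci".format(table))
        conn.close()

        # the PHP sources themselves can be re-encoded in bulk, e.g.:
        #   iconv -f ISO-8859-1 -t UTF-8 old.php > new.php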

    Read the article

  • Database frontend for multiple db engines

    - by xeroxed_yeti
    Hey Stackoverflow, yeah, it's spring and a lot of things are happening to me... I'm also changing some software things on my computer, because suddenly everything seems boring after starting my laptop. I even changed my wallpaper!!! Besides that, I'm looking for a new database frontend, and after trying Google with several queries I didn't find the right software. You have to know, my laptop and me are very very special :) I'm looking for a database frontend with the following features:

    - can access PostgreSQL and MySQL databases
    - can handle schemata
    - offers a nice SQL query tool
    - supports import and export functionality (something like tab-separated text files)
    - is free
    - looks awesome - every time a colleague comes to my office he must get the feeling: oh boy, this man really knows his job and should get more money!

    At the moment I use phpMyAdmin, phpPgAdmin, pgAdmin III, mysqladmin and DbVisualizer. Furthermore, I was a big fan of Aqua Data Studio until it became commercial. That tool offers a great variety of functionality which can simplify a programmer's life. However, now you have to buy a license... I'm a scientist and money for software is limited =) So it's my first time (question) here at stackoverflow, please be cheerful :)

    Read the article

  • Sorting a list in PHP

    - by Andy
    Alright, so I've just (almost) finished my first little PHP script. Basically, all it does is handle forms. It writes whatever the user puts into the fields of the form to a text file, then I include that text file in the index of the little page I have set up. Currently it writes to the beginning of the text file (but doesn't overwrite previous data). But several users want this list to be alphabetically sorted instead. So what I want to do is make whatever they submit fall into the list in alphabetical order. The thing here is that all I use in the text file are divs. The list is basically 'divided' into 3 parts: 'Title', 'Link', and 'Poster'. All I have done is position them with CSS. So, can I sort this list (the titles, in this case) alphabetically and still have the 'Link' and 'Poster' information assigned the way they already are, but just with the titles sorted? I don't use databases at all on my site, so there is no database handling at all used in this script (mainly because I'm not experienced at all in this).
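
    The way I picture it, the records have to be sorted as whole units - each title keeps its link and poster attached - rather than sorting the titles on their own; a tiny sketch of that idea (Python here just for brevity; in PHP it would be usort() over an array of records with strcasecmp on the title):

        records = [
            {"title": "Zebra crossing", "link": "http://example.com/z", "poster": "alice"},
            {"title": "apple pie",      "link": "http://example.com/a", "poster": "bob"},
            {"title": "Mango chutney",  "link": "http://example.com/m", "poster": "carol"},
        ]

        # sort the records as whole units, case-insensitively by title;
        # link and poster stay attached to their title automatically
        records.sort(key=lambda r: r["title"].lower())

        for r in records:
            print(r["title"], r["link"], r["poster"])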

    Read the article

  • What algorithm should i follow to retrieve data in the prescribed format?

    - by Prateek
    I have to retrieve data from a database in which the tables consist of fields named "ttc", "rm", "atc" and "lta". These values are stored on a daily basis at 15-minute intervals, like:

        From_time  to_time  ttc  rm  atc  lta
        00:00      00:15    45   10  35   25
        00:15      00:30    35   10  25   25

    and so on. These values are stored for every day of every month, and I want them to be presented in the prescribed format; what algorithm should I follow? I am confused about how to do the comparisons for a format like the one mentioned below. The format is at this link: https://drive.google.com/a/itbhu.ac.in/file/d/0B_J0Ljq64i4Za2J1V0lvbDZ4eGc/edit?usp=sharing To be specific once again: I have to prepare a report from the retrieved data, which is stored in the databases as explained above, but the report will cover an entire month. So there may be cases where, for two particular days, the value of "ttc" is the same for some time, and I want those intervals to be listed together (as shown in the format). The confusing part is that any of the values "ttc", "rm", "atc", "lta" can be the same for any particular interval. So what algorithm should I follow for such comparisons? If anything is still unclear, please ask. Thanks
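
    If the goal is what I think it is - collapsing consecutive intervals whose values are unchanged into one row that spans them - then grouping the ordered rows by the value tuple does the comparison work; a small Python sketch with made-up sample rows:

        from itertools import groupby

        # (from_time, to_time, ttc, rm, atc, lta) - made-up sample rows, already ordered by time
        rows = [
            ("00:00", "00:15", 45, 10, 35, 25),
            ("00:15", "00:30", 45, 10, 35, 25),
            ("00:30", "00:45", 35, 10, 25, 25),
            ("00:45", "01:00", 35, 10, 25, 25),
        ]

        merged = []
        # group consecutive rows whose (ttc, rm, atc, lta) values are identical
        for key, block in groupby(rows, key=lambda r: r[2:]):
            block = list(block)
            merged.append((block[0][0], block[-1][1]) + key)   # span first from_time to last to_time

        for row in merged:
            print(row)   # ('00:00', '00:30', 45, 10, 35, 25) then ('00:30', '01:00', 35, 10, 25, 25)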

    Read the article

  • Sharepoint user details not visible to other users

    - by richardoz
    I am managing a SharePoint site that uses Forms Based Authentication. We have several generic lists, document libraries and active task lists that users can create, update and delete. Users can use the people pickers to select/search for everyone, but the users cannot see other users' names, email addresses etc. in display lists or in the people pickers. If I log in as the site collection administrator, I can see everyone's details, so I know the data is available.

    Updated details on this problem: non-administrator SharePoint users cannot see other users' information. Example: User A assigns a task to user B. User A creates a new task and uses the people picker to find user B. User B is only visible by the login name "bname", and any information about user B is not visible or searchable within the people picker. Once user B is assigned the task, user A no longer sees the name in the task list - even though user A created it. No modified-by, created-by, assigned-to or owner field data is visible to non-administrator users.

    Facts:

    - The extranet site is configured to use Forms Based Authentication; the intranet uses Windows authentication
    - Users of both the intranet and extranet have the same problem
    - All databases are local
    - The site uses SSRS integration
    - SharePoint WSS on Windows 2003 Std

    After activating verbose logging, it looks like SharePoint is definitely asking SQL Server only for the user info of the currently logged-in user:

        SELECT TOP 6 /lots-of-columns/
        FROM UserData
        INNER MERGE JOIN Docs AS t1 ON (
            1 = 1
            AND UserData.[tp_RowOrdinal] = 0
            AND t1.SiteId = UserData.tp_SiteId
            AND t1.SiteId = @L2
            AND t1.DirName = UserData.tp_DirName
            AND t1.LeafName = UserData.tp_LeafName
            AND t1.Level = UserData.tp_Level
            AND t1.IsCurrentVersion = 1
            AND (1 = 1)
        )
        LEFT OUTER JOIN AllUserData AS t2 ON (
            UserData.[tp_Author] = t2.[tp_ID]
            AND UserData.[tp_RowOrdinal] = 0
            AND t2.[tp_RowOrdinal] = 0
            AND ( (t2.tp_IsCurrent = 1) )
            AND t2.[tp_CalculatedVersion] = 0
            AND t2.[tp_DeleteTransactionId] = 0x
            AND t2.tp_ListId = @L3
            AND UserData.tp_ListId = @L4
            AND t2.[tp_Author] = 162 /* this is the currently logged in user */
        )
        WHERE (UserData.tp_IsCurrent = 1)
            AND UserData.tp_SiteId = @L2
            AND (UserData.tp_DirName = @DN)
            AND UserData.tp_RowOrdinal = 0
            AND (
                ( (UserData.[datetime1] IS NULL) OR (UserData.[datetime1] = @L5DTP) )
                AND t1.SiteId = @L2
                AND (t1.DirName = @DN)
            )
        ORDER BY UserData.[tp_Modified] Desc, UserData.[tp_ID] Asc

    Again, any ideas would be appreciated.

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have isbn, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

    - The EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products, etc.).
    - Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    - Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed in list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties. My questions now:

    - Are my performance concerns with the many-columns approach valid?
    - Did I overlook something, and is there a much better way to do it in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with different behaviour wrappers make sense?
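
    For context, the property of the document route I'm banking on is that each product document carries only the attributes its type actually has, while list views project just the shared fields; a minimal pymongo sketch of that (the field names are my own guesses, not a fixed schema):

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        products = client["shop"]["products"]

        # each document stores only the attributes its product type actually has
        products.insert_one({
            "type": "book", "title": "Example Book", "isbn": "978-0000000000",
            "authors": ["A. Author"], "pages": 320,
        })
        products.insert_one({
            "type": "dvd", "title": "Example Film", "play_time_minutes": 117,
            "directors": ["D. Director"],
        })

        # list views (start page, recommendations) only pull the handful of common fields
        for p in products.find({}, {"title": 1, "type": 1, "_id": 0}).limit(10):
            print(p)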

    Read the article

  • Generated LinqtoSql Sql 5x slower than SAME EXACT hand-written sql

    - by JasonM
    I have a sql statement which is hardcoded in an existing VB6 app. I'm upgrading to a new version in C# and using Linq To Sql. I was able to get LinqToSql to generate the same sql (before I start refactoring), but for some reason the Sql generated by LinqToSql is 5x slower than the original sql. This is running the generated Sql directly in LinqPad. The only real difference my meager sql eyes can spot is the WITH (NOLOCK), which, if I add it into the LinqToSql generated sql, makes no difference. Can someone point out what I'm doing wrong here? Thanks!

    Existing hard-coded SQL (5.0 seconds):

        SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum,
               CH.Diag1, CH.GroupNum, CH.AllowedTotal
        FROM Claims.dbo.T_ClaimsHeader AS CH WITH (NOLOCK)
        WHERE CH.ContractID IN ('123A','123B','123C','123D','123E','123F','123G','123H')
          AND ( ( (CH.Transmited Is Null or CH.Transmited = '')
                  AND CH.DateTransmit Is Null
                  AND CH.EobDate Is Null
                  AND CH.ProcessFlag IN ('Y','E')
                  AND CH.DataSource NOT IN ('A','EC','EU')
                  AND CH.AllowedTotal > 0 ) )
        ORDER BY CH.AcnPatID, CH.ClaimNum

    Generated SQL from LinqToSql (27.6 seconds):

        -- Region Parameters
        DECLARE @p0 NVarChar(4) SET @p0 = '123A'
        DECLARE @p1 NVarChar(4) SET @p1 = '123B'
        DECLARE @p2 NVarChar(4) SET @p2 = '123C'
        DECLARE @p3 NVarChar(4) SET @p3 = '123D'
        DECLARE @p4 NVarChar(4) SET @p4 = '123E'
        DECLARE @p5 NVarChar(4) SET @p5 = '123F'
        DECLARE @p6 NVarChar(4) SET @p6 = '123G'
        DECLARE @p7 NVarChar(4) SET @p7 = '123H'
        DECLARE @p8 VarChar(1) SET @p8 = ''
        DECLARE @p9 NVarChar(1) SET @p9 = 'Y'
        DECLARE @p10 NVarChar(1) SET @p10 = 'E'
        DECLARE @p11 NVarChar(1) SET @p11 = 'A'
        DECLARE @p12 NVarChar(2) SET @p12 = 'EC'
        DECLARE @p13 NVarChar(2) SET @p13 = 'EU'
        DECLARE @p14 Decimal(5,4) SET @p14 = 0
        -- EndRegion
        SELECT DISTINCT [t0].[ClaimNum], [t0].[acnprovid] AS [AcnProvID],
               [t0].[acnpatid] AS [AcnPatID], [t0].[tinnum] AS [TinNum],
               [t0].[diag1] AS [Diag1], [t0].[GroupNum], [t0].[allowedtotal] AS [AllowedTotal]
        FROM [Claims].[dbo].[T_ClaimsHeader] AS [t0]
        WHERE ([t0].[contractid] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7))
          AND (([t0].[Transmited] IS NULL) OR ([t0].[Transmited] = @p8))
          AND ([t0].[DATETRANSMIT] IS NULL)
          AND ([t0].[EOBDATE] IS NULL)
          AND ([t0].[PROCESSFLAG] IN (@p9, @p10))
          AND (NOT ([t0].[DataSource] IN (@p11, @p12, @p13)))
          AND ([t0].[allowedtotal] > @p14)
        ORDER BY [t0].[acnpatid], [t0].[ClaimNum]

    New LinqToSql code (30+ seconds... times out):

        var contractIds = T_ContractDatas
            .Where(x => x.EdiSubmissionGroupID == "123-01")
            .Select(x => x.CONTRACTID).ToList();
        var processFlags = new List<string> {"Y","E"};
        var dataSource = new List<string> {"A","EC","EU"};
        var results = (from claims in T_ClaimsHeaders
                       where contractIds.Contains(claims.contractid)
                          && (claims.Transmited == null || claims.Transmited == string.Empty)
                          && claims.DATETRANSMIT == null
                          && claims.EOBDATE == null
                          && processFlags.Contains(claims.PROCESSFLAG)
                          && !dataSource.Contains(claims.DataSource)
                          && claims.allowedtotal > 0
                       select new
                       {
                           ClaimNum = claims.ClaimNum,
                           AcnProvID = claims.acnprovid,
                           AcnPatID = claims.acnpatid,
                           TinNum = claims.tinnum,
                           Diag1 = claims.diag1,
                           GroupNum = claims.GroupNum,
                           AllowedTotal = claims.allowedtotal
                       }).OrderBy(x => x.ClaimNum).OrderBy(x => x.AcnPatID).Distinct();

    I'm using the list of constants above to make LinqToSql generate IN ('xxx','xxx', etc.); otherwise it uses subqueries, which are just as slow...
    Read the article
